The L3-AI conference brings together expert speakers from the conversational AI community all over the world. During the conference, they’ll be sharing their work building truly interactive AI assistants.
But before we kick off L3-AI on June 18th, we want to give you a chance to get to know some of our speakers by hosting a series of Ask Me Anything (AMA) sessions in the forum.
How does it work?
On Thursday, June 4, we’ll open this thread to pre-submitted questions. Once we open the thread, you’re free to ask our speaker anything (especially as it relates to conversational interfaces and NLU). On Monday, June 8, 6am-7am PDT / 3pm-4pm CEST, Alan will be available live for one hour to answer both pre-submitted and live questions in this forum thread. Be sure to react to other questions you’re interested in, so speakers can see which questions have the most community interest. At the end of the AMA, we’ll close the thread, but you can catch Alan again at L3-AI!
About Alan Nichol:
Alan Nichol is co-founder and CTO of Rasa. He holds a PhD in machine learning from the University of Cambridge and has years of experience building AI-powered products.
Hi Alan! You’ve made the leap from academic research into the world of startups. I’m curious: what are some things you’ve learned along the way about the real-world deployment of conversational AI?
Hey @amn41,
Can we use workflow engines with Rasa? Sometimes stories don’t work as expected, since the bot keeps jumping from one story to another, and in my case I need the bot to stick to one and only one story. For now I’m working with forms and it’s doing well, but as an improvement some of my colleagues suggested using a workflow engine for this case.
What’s your experience dealing with non-technical managers and communicating to them about the challenges and product-technology fit when it comes to ML/AI?
The date picker in Rasa is designed specifically for the Slack connector. Why can’t this be a general feature in Rasa? Where can I find all the upcoming features and releases? What’s the roadmap for the next 3 months?
Would humanised bot characters with clear personality traits + matching nuanced dialogue (domain/client defined) be a major value-add in the Rasa CDD process?
If the answer is yes: how would you recommend implementing, sustaining, and optimising this persona-driven approach in the Rasa CDD process?
Hi @amn41!
I am interning at a healthcare startup and helping them build chatbots for their customers.
What advice or suggestions would you give to someone who wants to work, or is currently working, on chatbot projects in risky, high-precision domains like healthcare?
What are the do’s and don’ts of building a Rasa chatbot for the healthcare domain in general?
P.S. Rasa has been great for applying my skills in my internship, and I’m excited for the day I too can contribute to this amazing community.
Funnily enough my academic research had nothing to do with conversational AI at all. I was working in physics, applying machine learning to the prediction of interatomic forces.
Alex and I founded a search company called treev while I was still working on my PhD, which got me interested in language and eventually into dialogue.
I’d say the biggest thing I’ve kept is my love of simple, tough-to-beat baselines. The biggest difference is that my standards of “proof” have changed. In startups, if you want to see if something has an impact, talking to 3-5 users is plenty. You don’t need to prove anything to the standards of peer review. Just test it out and see if there’s a BIG signal. If there isn’t, ditch it and move on.
Hi @cristianmtr! Great question, and one that comes up a lot. In fact, the reason we first introduced the concept of the 5 levels of AI assistants was to help non-developers understand why there is more to chatbots than just NLU. (btw, I’m presenting an update to the 5 levels at L3-AI).
I think teams now are much more sophisticated than they were 2 years ago, and we see a bunch of different personas getting involved. We’ve recently started using the name Conversation-Driven Development to describe how we see the most successful teams working together.
I think one of the key things that experienced teams understand is that none of the hard problems in conversational AI show up when you are prototyping, they all come when you try to prepare for production. So it’s easy to build tools for making chatbot prototypes (and there are many vendors out there), but it’s much harder to build tools for scale and production.
I see more and more experienced teams these days, who have learned these lessons and know what to look for in a product. But first-timers still fall into the same traps. I think as experienced developers we owe it to them to explain these points better.
If you have ideas and suggestions, I would love to discuss them!
Hi @lkrishnaprasad! We unfortunately can’t make this a general feature, because the messaging platform has to provide this UI element for us to trigger it. Some channels have custom features; for example, Messenger has quick replies and carousels. I agree date pickers are really nice, but unfortunately not every channel provides them.
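To make this concrete, here is a rough sketch (not an official example from the AMA) of how a channel-specific date picker could be sent from a custom action: it checks the input channel and, for Slack, sends a Block Kit datepicker as a custom JSON payload, falling back to plain text elsewhere. The action name, `action_id`, and payload details are made up for illustration, and exact handling depends on the connector version.

```python
# A minimal sketch, assuming a Slack Block Kit datepicker passed through as a
# custom JSON message. The action name and payload shape are hypothetical.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionAskDate(Action):
    def name(self) -> Text:
        return "action_ask_date"  # hypothetical action name

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        channel = tracker.get_latest_input_channel()

        if channel == "slack":
            # Slack renders this datepicker natively; other channels would not
            # understand the payload, hence the fallback below.
            dispatcher.utter_message(json_message={
                "blocks": [
                    {
                        "type": "section",
                        "text": {"type": "mrkdwn", "text": "Pick a date:"},
                        "accessory": {
                            "type": "datepicker",
                            "action_id": "date_select",  # hypothetical id
                            "placeholder": {
                                "type": "plain_text",
                                "text": "Select a date",
                            },
                        },
                    }
                ]
            })
        else:
            # Fallback for channels without a native date picker widget.
            dispatcher.utter_message(text="Which date works for you? (YYYY-MM-DD)")

        return []
```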
Hey @amn41, well, workflow engines will help if you have a list of actions, for example a registration flow or tracking a request; they’re generally used with long flows. That’s the point of using a workflow engine, as far as I know (I might be wrong). Examples of workflow engines are Drools or Camunda.
Hi @Da_Humaniser! It’s an interesting question; there is no right answer, so I will offer an opinion. I think the ultimate goal of AI assistants is to be ever more helpful, not to be ever more human. It’s an important distinction to make.
I also believe that bots should always self-identify as bots, from an ethical standpoint.
Keeping those two in mind, your assistant will always have some kind of personality. Even if you think you are designing it to be ‘neutral’, people from other cultures might have a very different reaction to the same words. Every word you include in your responses matters, and all sorts of contextual details will influence the perceived personality of your assistant. Think of how quickly people get a response, the visual look and feel of the messaging channel where you have deployed your bot, or of course the prosody / vocalisation if you are using a voice channel!
I’m not an expert on this by any means, but it’s an important area to understand and keep working on.
I have seen a number of successful uses of conversational AI in the healthcare space. The team at Dialogue is super advanced, and so are Synaptitude and Tia, a company working on women’s health.
Very recently there have been a number of projects (including from the WHO) using Rasa in the context of COVID-19, see here.
We also have a sample project called the Medicare Locator, which you might find interesting.
I think in the health care domain it is especially important to get a lot of advice from clinicians and other specialists who know best, and to involve them closely in the whole process from sharing your first prototypes with test users through to production. Of course in health care it is especially important to follow the relevant ethical standards set by the profession, including how you handle data and when you involve a human in the loop.
Glad to hear you are learning a lot by using Rasa! Keep it up!
I’ve seen a couple of projects where companies had a large library of processes implemented in one of these tools, and then worked on ways to integrate them by customizing Rasa Open Source or writing a conversion script.
Two comments I would add to this:
1. Generally, implementing the business logic is not the hard part of building an AI assistant. The hard part is dealing with all the ways that users deviate from the happy path.
2. If you have long flows, you should probably consider whether a conversational AI experience is really the best one, because having a conversation with 50 questions can be quite tedious.
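For anyone curious what the integration mentioned above might look like in practice, here is a rough sketch (not an official integration) of a custom action that asks an external workflow engine, such as Camunda or Drools, for the next step of a long flow over a REST API. The endpoint URL, process id, and action name are invented for illustration; a real integration would follow the specific engine’s API.

```python
# A rough sketch, assuming a hypothetical workflow-engine REST endpoint that
# returns the next step of a business process for the current conversation.
from typing import Any, Dict, List, Text

import requests

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

WORKFLOW_API = "http://localhost:8080/workflow/next-step"  # hypothetical endpoint


class ActionRunWorkflowStep(Action):
    def name(self) -> Text:
        return "action_run_workflow_step"  # hypothetical action name

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Hand the current slot values to the engine and let it decide which
        # step of the business process comes next.
        response = requests.post(
            WORKFLOW_API,
            json={
                "process": "registration",            # hypothetical process id
                "conversation_id": tracker.sender_id,
                "slots": tracker.current_slot_values(),
            },
            timeout=5,
        )
        next_step = response.json()

        # The assistant still owns the conversation: it only relays the
        # engine's prompt, while Rasa's policies keep handling the ways users
        # deviate from the happy path.
        dispatcher.utter_message(text=next_step.get("prompt", "Let's continue."))
        return []
```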