Conversational AI User Adoption

User adoption for a disruptive product is hard unless the product is intuitive, simple, 10x better than the old experience, and adds great value. Conversational AI is itself a hard problem from a technical standpoint; user adoption is just another mess on top of it.

I would like to talk to a bot only if I have a question or work for it. That sets the premise for this topic: amusement is not a use case for building today's conversational AI. Now, if the only thing I want from my bot is for it to work for me, it had better be effective enough to provide a decent experience. In some use cases (like mine), there are many situations where the bot fails to answer and the conversation results in a human handover. Over time, these unresolved queries should shrink from major cases to minor cases to edge cases. The question is how fast we can get to the point where the bot drops off only for edge cases, or how far we can minimize the number of edge cases.
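One way to make that "major → minor → edge case" progression concrete is to track the handover rate over time. Here is a minimal sketch (the record format with an `"outcome"` field is my own assumption, not anything from Rasa):

```python
from collections import Counter

def handover_rate(conversations):
    """Fraction of conversations that ended in a human handover.

    Each conversation is assumed to be a dict with an 'outcome' key,
    either 'resolved' (bot handled it) or 'handover' (human took over).
    """
    outcomes = Counter(c["outcome"] for c in conversations)
    total = sum(outcomes.values())
    return outcomes["handover"] / total if total else 0.0

# Tracking this metric per week (or per release) shows whether the
# unresolved cases are actually shrinking toward edge cases.
week1 = [
    {"outcome": "resolved"},
    {"outcome": "handover"},
    {"outcome": "handover"},
    {"outcome": "resolved"},
]
print(handover_rate(week1))  # 0.5
```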

I have a question for the Rasa Core team: how much developer/user adoption do you see with the Sara bot (if you had planned to measure it) versus the community forums? One challenge is that the demo bot is a one-to-one conversation and others can't contribute. Fair point. Are you feeding the useful data from the forums back into Sara's training regularly? (It would be helpful if you could share the data acquisition/warehouse strategy and the training interval.) That brings me to a real-world problem I'm hitting. I know it's far from reality given the present capability of NLP/ML/DL, but your approach to building a production-grade conversational AI would help solve it.

Do you have any prediction or approach for how far we are from seeing conversational AI in action for group chat? Multiuser conversation is a more natural scenario (it is really horrifying if one keeps talking to a bot forever): the bot would learn from each user's conversation/context and come up with answers/insights for conversations that were already answered by a human.

Suggestion: we could have a community-focused Slack channel with the Sara bot in it. That way we could delve deeper into discussions, and Sara would become a benchmark for multiuser, context-aware conversational AI.

For example, in the current scenario for a business problem, there are experts (who know it all, or claim to) and there are end users/customers. A user will call or chat with one or more subject-matter experts (depending on the breadth of the problem domain) in a natural way. One idea behind conversational AI is to mimic these experts to minimize their workload. But if we keep the experts out of the loop entirely, the approach does not feel scalable. The ideal scenario would be multiple users talking to each other, and whenever somebody asks a domain-specific question, the agent/bot pops up with the right answer. If the bot's answer wasn't right or gets corrected by an expert, the bot has the ability to learn from the human in the loop.
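The expert-correction part of that loop can be sketched very simply: capture each (question, bot answer, expert answer) triple and treat the expert's version as the label for the next training round. This is a hypothetical sketch of the capture step, not any Rasa API:

```python
def record_correction(log, question, bot_answer, expert_answer=None):
    """Human-in-the-loop capture (sketch).

    If an expert replied with something different from the bot, store the
    expert's answer as the label; otherwise treat the bot's answer as
    implicitly confirmed. 'log' is just a list acting as a training queue.
    """
    entry = {"question": question, "bot_answer": bot_answer}
    if expert_answer is not None and expert_answer != bot_answer:
        entry["label"] = expert_answer   # expert correction wins
    else:
        entry["label"] = bot_answer      # implicit confirmation
    log.append(entry)
    return entry

training_queue = []
record_correction(training_queue, "What is the SLA?", "24 hours", "48 hours")
# the queue now holds a labelled example an annotator can review
```

The point of the explicit `log` is that corrections become reviewable training data rather than triggering blind auto-training.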

In my case, an end user interacts with a bot with their queries, more than half of which cannot be answered by the bot today. The conversation is handed off to a human and resolved there. But the bot is unaware of the human-human conversation and has no ability to learn from it after a successful resolution. It would be a great deal if we could automate this, to minimize similar future user requests.
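A first automation step could be mining the resolved human-agent transcripts into candidate question/answer pairs for labelling, rather than training on them blindly. A minimal sketch, assuming a hypothetical transcript format (a `resolved` flag plus a list of `(speaker, text)` turns):

```python
def mine_training_candidates(transcripts):
    """Turn resolved human-agent transcripts into candidate (question, answer)
    pairs that a human annotator can then approve as training data.

    Assumed format: each transcript is a dict with a boolean 'resolved'
    and 'turns', a list of (speaker, text) tuples where speaker is
    'user' or 'agent'.
    """
    candidates = []
    for t in transcripts:
        if not t["resolved"]:
            continue  # only mine conversations that were actually solved
        turns = t["turns"]
        for i, (speaker, text) in enumerate(turns):
            # pair each user question with the agent reply that follows it
            if speaker == "user" and i + 1 < len(turns) and turns[i + 1][0] == "agent":
                candidates.append((text, turns[i + 1][1]))
    return candidates
```

Keeping the annotator in the path matches the point above: the bot should generalize from curated examples, not self-train on raw handoff logs.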

An engaging discussion would be much appreciated. The idea is to bring another human (the one who knows it all) into the loop for a human-bot interaction. This might sound like a fully developed AI solution that is asked to read a book and then score 10/10 at any level of difficulty. But after seeing Rasa's approach and progress, I guess something like this will be possible in the coming years.

This topic might fall into what Rasa's @alexweidauer calls level-5 autonomous assistants with deep-learning-based NLG (God knows what it will be called 10 years down the line), and it is not something that can be expected from level 3's not-so-smart templates/language generation.

P.S. I still think these are very early days for conversational AI; the real breakthrough is yet to come. I hope the community keeps getting updates about the product roadmap (the way tech enhancements are managed is really good, +1). All thanks to the team :slight_smile:

@Saurabh619 Thanks for your message! A couple of thoughts:

  • Use cases: We definitely see a lot of different use cases across many industries (e.g. healthcare, insurance, banking) and business functions (e.g. customer service, internal processes, sales + marketing). What most of them have in common is that they are "task-oriented", i.e., as you said, they "work for you". There are also some great examples out there for more open-ended dialogues, like
  • How to train: As exciting as it would be for the assistant to "learn by itself", for most use cases you actually don't want that. You probably want the assistant to generalize to handle new, similar situations, but not to auto-train. Instead, by labelling training data you keep the quality of conversations high. You would also give your human agents in customer service some training before they start talking to real customers.
  • Human handoff / human in the loop: We agree that assistants are still in their early days (though they are already generating significant ROI for businesses), which is why having a good fallback policy is key. Human handoff is a great way to bootstrap an assistant from only a few supported user goals to many. Another way to ensure high quality is a "human in the loop": letting humans check every assistant response before it goes out.
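The fallback decision itself usually comes down to confidence thresholds. A generic sketch (plain Python, not Rasa's actual policy API; the threshold value is illustrative):

```python
def choose_route(nlu_confidence, policy_confidence, threshold=0.4):
    """Route to a human handoff when either the intent classifier (NLU)
    or the dialogue policy is unsure; otherwise let the bot answer.

    The 0.4 threshold is an illustrative default, typically tuned
    per assistant against real conversation logs.
    """
    if nlu_confidence < threshold or policy_confidence < threshold:
        return "handoff_to_human"
    return "answer_with_bot"

print(choose_route(0.3, 0.9))  # handoff_to_human
print(choose_route(0.9, 0.9))  # answer_with_bot
```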

For Sara, we don't have a human handoff, but we do analyse the "failed" conversations to improve her over time.
