I have a scenario where a single, generalist bot would imply something like hundreds of intents/classes, and I can imagine this leading to a need for tons of curated data to build a good solution that avoids misclassification between intents whose tokens are highly similar.
That being said, it seems like a set of specialist bots / a multi-agent architecture would solve my problem well. Since Rasa Core still doesn’t have explicit support for this kind of architecture, I would love to know how people are dealing with it… e.g., when does your bot call another one, and how is this represented in the stories?
Does anyone here have code snippets or any papers/literature on this? Any help would be very much appreciated.
I saw there is an experimental feature, Training Data Importers, with a multi-project data importer that allows clean organisation of your training data (Training Data Importers).
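For reference, the multi-project importer is enabled in `config.yml` roughly like this (the project folder names below are just example placeholders):

```yaml
# config.yml — declare the experimental multi-project importer
importers:
- name: MultiProjectImporter

# Sub-projects whose training data should be merged in
imports:
- projects/ChitChat
- projects/Feedback
```

Each imported folder is its own self-contained Rasa project, which keeps the training data for each "specialist" area separated while still training a single model.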
To be honest, I have heard this multi-agent pitch many, many times from various vendors, and truth be told, it doesn’t work. Someone came up with it without taking any scalability aspect into consideration. We have tested with over 100 intents, and one model worked absolutely fine. If your problem is really about building a knowledge base, there is a docs section for that as well.
You don’t want one predictive model calling another predictive model, reducing your chances of getting the correct answer even further. Ideally you should create one classifier (NLU) and split the stories, but train them with the same architecture.
If it is absolutely necessary to split into many models, use separate containers, train them with their own lifecycle management, and perhaps use a shared tracker store.
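For the shared tracker store, each bot’s `endpoints.yml` would point at the same backend, e.g. Redis (host and credentials here are placeholders):

```yaml
# endpoints.yml — same entry in every bot container so they share state
tracker_store:
  type: redis
  url: redis-host      # placeholder: your shared Redis instance
  port: 6379
  db: 0
```

With this, every bot reads and writes the same conversation trackers, so a hand-off bot can see what another bot already asked.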
I see. Thank you for the answer, it’s good to know about your experience with so many intents.
My team and I are still prototyping the intents, constructing an intuitive syntax so that any annotator with little to no knowledge of the syntax itself can quickly learn to produce accurate annotations. The problem is that the proposed syntax implies this huge cardinality for a classifier, and I worry about the huge quantity of instances needed for each of the, like, 300 intents. Designing specialist agents would be a proxy to avoid a situation that would be ‘overkill’ on annotation.
So, maybe a valid strategy in Rasa would be something like: a client solves their ‘fraud’ issues using the bot and then, having finished, wants to work on ‘support’ questions… maybe stories with custom actions pointing to the other bot whenever an out-of-scope intent is detected in a particular agent. I don’t know… This architecture would be a pain to implement, but still…
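A minimal sketch of that hand-off rule, as a plain function a custom action could call. All the bot names, intent names, and webhook URLs here are hypothetical placeholders:

```python
# Routing rule for handing off out-of-scope intents between specialist bots.
# A Rasa custom action would call route() and then POST the user's message
# (e.g. with `requests`) to the returned REST webhook.

FALLBACK_BOT = "generalist"

# Which intents each specialist bot owns (hypothetical names).
BOT_INTENTS = {
    "fraud": {"report_fraud", "check_fraud_status"},
    "support": {"support_question", "reset_password"},
}

# REST webhook of each deployed bot (hypothetical URLs).
BOT_WEBHOOKS = {
    "fraud": "http://fraud-bot:5005/webhooks/rest/webhook",
    "support": "http://support-bot:5005/webhooks/rest/webhook",
    "generalist": "http://generalist-bot:5005/webhooks/rest/webhook",
}


def route(intent: str, current_bot: str) -> str:
    """Return the webhook of the bot that should handle this intent.

    Stay on the current bot if the intent is in scope; otherwise hand
    off to whichever bot owns it, falling back to the generalist.
    """
    if intent in BOT_INTENTS.get(current_bot, set()):
        return BOT_WEBHOOKS[current_bot]
    for bot, intents in BOT_INTENTS.items():
        if intent in intents:
            return BOT_WEBHOOKS[bot]
    return BOT_WEBHOOKS[FALLBACK_BOT]
```

In a story, the out-of-scope intent would trigger a custom action that calls `route()` and forwards the conversation; the shared tracker store mentioned above would keep the context intact across the hand-off.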
The problem is the shared context: how do you keep track of what has been asked before and what needs to be asked now across the many bots you have deployed?
If you want to separate these bots, then maybe use a shared tracker store between them. We tried this once with a multilingual bot and a shared tracker store; it was quite interesting to predict actions based on what had been asked before in a different language. But this implies that you have trained a single Core model and different NLU models.
You can try the metabot concept: have one NLU model that does topic classification, which then routes to a chatbot (which uses another NLU model and predicts an action).
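The metabot idea can be sketched in a few lines. The classifiers here are keyword stubs standing in for trained Rasa NLU models, and all topic/intent names are made up for illustration:

```python
# Metabot sketch: a top-level topic classifier picks which specialist
# NLU model handles the message. Stubs stand in for real models.

def topic_classifier(text: str) -> str:
    """Stub for the top-level NLU model that only classifies topics."""
    if "refund" in text or "charge" in text:
        return "billing"
    return "support"


# One intent classifier per topic (stubs for per-topic NLU models).
SPECIALISTS = {
    "billing": lambda text: "ask_refund" if "refund" in text else "ask_charge",
    "support": lambda text: "ask_help",
}


def metabot(text: str) -> tuple:
    """Route a message: topic first, then the topic's own intent model."""
    topic = topic_classifier(text)
    intent = SPECIALISTS[topic](text)
    return topic, intent
```

The upside is that each specialist model sees only a handful of intents; the downside, as noted above, is that errors compound, since a topic misclassification makes the second prediction wrong no matter how good the specialist is.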
Hey Julio, we are actually trying to accomplish the same architecture pattern with one generalist agent to many specialized agents. Was wondering where you landed, and if you have any lessons learned from any approaches you took?
Sorry for the late response. I didn’t advance with this multi-agent architecture, but it would be ideal to have something more well defined in Rasa (in my case at least)… I am still working on a generalist prototype. I haven’t checked Rasa’s latest releases to see if there is anything new about multiple agents… have you found anything interesting?