What's the best way to develop Alexa-like multiple skills on the Rasa platform?

For example, I want to develop skills such as weather inquiry, music search, restaurant search, and voice commands. Each skill has its own training data. Currently, all examples on the Rasa platform are designed for one skill, i.e. multiple intents within a single skill/task. To develop a multi-skill bot, how can I create and use models for the different skills?

Or, should I just mix all the training data together and treat all skills conceptually as one super skill? I am afraid the quality may not be good if all the data are mixed together.

Any suggestions on how to go about this? Thanks to the community.


I have the same question.

Yes, you are right, @lingvisa. I also have the same question.

I have been investigating the same problem. The way I see the problem is that, as designed, Rasa’s entity extraction is non-contextual. It tries to recognize entities, and then it classifies an utterance to an intent.

The drawback is that EVERYTHING about human language is contextual. Think about all those times you’ve been confused about something someone said in a conversation until they provided the context.

“Oh, you were referring back to that movie we watched last week. Now I understand.”

Another user in a different forum on this site has done a POC using micro-services (Providing conversation context to the NLU using microservices). Conceptually, the system would first classify the utterance to an intent, then forward it to a Rasa server instance trained only on data for that intent. Essentially, the system would be putting the utterance in context before analyzing it.

The micro-service idea looks achievable, but like a maintenance nightmare. I’d like to find an easier way to achieve the same goal.
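For reference, the routing layer itself is only a few lines. A sketch of the two-pass idea, assuming pre-1.0 rasa_nlu servers that expose /parse (the ports and intent names here are made up):

```python
# Two-pass parsing: a gate server classifies the utterance coarsely, then
# the raw text is re-parsed by a server trained only on that skill's data.
import requests

GATE = "http://localhost:5000"
SKILL_SERVERS = {
    "ask_weather": "http://localhost:5001",
    "search_music": "http://localhost:5002",
}

def parse_in_context(text):
    coarse = requests.post(f"{GATE}/parse", json={"q": text}).json()
    skill_url = SKILL_SERVERS.get(coarse["intent"]["name"])
    if skill_url is None:
        return coarse                      # no specialist: keep the gate's parse
    return requests.post(f"{skill_url}/parse", json={"q": text}).json()
```

The maintenance cost is not in this code; it is in keeping N+1 servers deployed, versioned, and retrained in sync.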

Hi Shotgun167, thank you for sharing your experience. My question is more about the high-level issue, i.e. what are the strategies for developing a multi-skill bot on the Rasa platform? Some thoughts or advice would be really appreciated.

Yeah, that’s kind of what I was trying to get at, but didn’t state very well. From my limited experience, Rasa, in its current state, is not designed well for a multi-skill bot. An intent has an implied/assumed context, and that context is encapsulated in the model. Rasa glosses over it all by lumping all intents into a single model, and therefore:

The strategy question then simplifies to this: how do we route a question/skill/command/intent/etc. to a model that encapsulates a context which can interpret it properly?

I’m suggesting that Rasa needs a meta interpreter and a router to choose one of several models to do the understanding. Is there a way to accomplish this without modifying Rasa, or without running a multitude of Rasa instances?
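One way to avoid a multitude of instances might be to load several NLU models inside a single process and route between them. A sketch, assuming Rasa 1.x and unpacked NLU model directories (the paths and skill names are made up):

```python
# A minimal "meta interpreter": a coarse gating model picks the skill,
# then a skill-specific model does the detailed parse.
from rasa.nlu.model import Interpreter

gate = Interpreter.load("models/gate")        # trained only to tell skills apart
skills = {
    "weather": Interpreter.load("models/weather"),
    "music": Interpreter.load("models/music"),
}

def parse(text):
    skill = gate.parse(text)["intent"]["name"]
    interpreter = skills.get(skill, gate)     # unknown skill: fall back to the gate
    return skill, interpreter.parse(text)
```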


I think there is some slight confusion here. You can absolutely use rasa_nlu for multiple intents (what you’re calling skills). Our team has had success with a couple of separate intents that represent fundamentally different “skills”. As long as you supply the model with a healthy amount of training data for each intent, you’ll be fine. For a given utterance, you can check how the model classifies the intent in the “intent” and “intent_ranking” fields of the response when you test it.
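For example, something like this, assuming a Rasa 1.x server started with `rasa run --enable-api` on the default port:

```python
# Ask the running model how it classifies an utterance and inspect the
# full intent ranking, not just the winner.
import requests

resp = requests.post(
    "http://localhost:5005/model/parse",
    json={"text": "will it rain in Berlin tomorrow?"},
)
result = resp.json()

print(result["intent"])                    # top intent with its confidence
for ranked in result["intent_ranking"]:    # every intent, ranked
    print(ranked["name"], ranked["confidence"])
```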

Shotgun, I got your point. In addition to NLU, the Core part also needs to accommodate such a request. The latest announcement on merging NLU and Core may have an impact on this, but it would be great if the Rasa team could share some thoughts on it.

The impression I got from Rasa at the NYC meetup was that the NLU+Core merger is purely administrative in nature. The actual software will go on as before.


Hi @daxaxelrod, to me an intent is very different from a skill. Typically, a skill contains multiple intents, such as “thank_you”, “greet”, and “faq”. If we build several very different skills (‘intents’ in your terms) into one model, multiple issues will probably arise: first quality, and second maintainability.

Basically what you need is a BoB (bot of bots) that will quickly figure out what the user wants and transfer them to the appropriate bot. You can’t do this in Rasa natively; you’d have to hack it by creating N+1 bots (where N is the number of skills and 1 is the general bot) and then writing custom code to transfer the tracker (the conversation state), or just the slots (the information the bot has collected), to the more specific bot.
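A sketch of what that hand-off could look like over the HTTP API, assuming Rasa 1.x servers with the API enabled (the ports, conversation id, and bot names are made up):

```python
# Hand a conversation off between two Rasa instances by copying slots.
import requests

GENERAL_BOT = "http://localhost:5005"   # the "1" in N+1
WEATHER_BOT = "http://localhost:5006"   # one of the N skill bots
sender_id = "user-42"

# Read the slots the general bot has collected so far...
tracker = requests.get(f"{GENERAL_BOT}/conversations/{sender_id}/tracker").json()

# ...and replay them as SlotSet events on the skill-specific bot.
for name, value in tracker["slots"].items():
    if value is None:
        continue
    requests.post(
        f"{WEATHER_BOT}/conversations/{sender_id}/tracker/events",
        json={"event": "slot", "name": name, "value": value},
    )
```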


I see no reason why one couldn’t do this in Core with appropriate stories. If you had a skill-specific bot, it would have story ‘A’ about how to deal with something, so your ‘meta bot’ would just have some header stuff leading into that story, followed by the same story verbatim, I should think. One could possibly even write a script that adds these headers automatically, so you only need to write the sub-stories (see the sketch at the end of this post).

For example, just imagine it as a human that can do all those things. The stories might be complicated, but I imagine it would work in principle. Maybe I am wrong about this (or underestimate the required story length / overestimate Core)?

True, this would make for a large, complicated model, but it would make it possible to switch skills easily, rather than having the skill bot flag the original bot once it’s done so that more things could continue.
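For what it’s worth, the header-adding script could be quite small. A sketch, assuming Rasa 1.x Markdown stories; the file layout, routing intent, and utterance are all made up:

```python
# Prepend the meta bot's routing steps to every story of every skill bot,
# so the combined bot can reuse the skill stories verbatim.
from pathlib import Path

HEADER = "* route_to_{skill}\n  - utter_switching_to_{skill}\n"

combined = []
for story_file in Path("skills").glob("*/data/stories.md"):
    skill = story_file.parts[1]            # skills/<skill>/data/stories.md
    for block in story_file.read_text().split("## "):
        if not block.strip():
            continue
        title, _, body = block.partition("\n")
        combined.append(f"## {skill} - {title}\n" + HEADER.format(skill=skill) + body)

Path("data/stories.md").write_text("\n".join(combined))
```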


Thank you so much for giving this link.

I don’t think you’re wrong, Zylatis. It can be done. The issue is whether it is the best way to solve the problem. The “one model to rule them all” approach is necessarily going to be error prone, because the model is being asked to be an expert at so many things at once. In my case, it is so error prone as to be nearly useless. The intents come back accurately most of the time; the entities much less so, and I’m getting entities that were not trained with the intent.

Consider this: you want an AI to identify animals. Distinguishing between cats and dogs has been demonstrated with good results, but that classifier is trained only with pictures of cats and dogs. Now add in catfish, porpoise, eagle, and sparrow. Which would do better at identifying a turkey buzzard, the “animal” classifier or the “bird” classifier?

Igrinberg’s “BoB” is a concise term for this idea, whether implemented as a flock of micro-services or as a Rasa instance directing a flock of models: understand what area the utterance concerns, then direct it to a model specific to that area.

Perhaps I am misunderstanding your point about the entities not trained with intents, but I believe you can deal with this in custom actions (unless you are referring to a purely NLU problem, not Core, in which case I would need an example to understand better). Anyway, if you are talking about Core, then one can specify that given entities be extracted only for certain intents, but again I confess I may not understand your situation properly. Effectively, you can make intent-specific entities with appropriate intents and actions.
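Something like this domain fragment is what I have in mind (Rasa 1.x syntax; `use_entities` limits which extracted entities an intent passes on to Core’s prediction, and all the names here are made up):

```yaml
intents:
  - search_restaurant:
      use_entities:
        - cuisine
        - location
  - play_music:
      use_entities:
        - artist
  - greet:
      use_entities: []    # greet ignores any extracted entities

entities:
  - cuisine
  - location
  - artist
```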

Regarding the animal-classifier point, I’m not sure I understand. To state the obvious, if you want to classify birds you need birds in the examples. I would, perhaps naively, expect that one could make use of low NLU confidence plus custom fallback policies to steer things in the right direction.
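In config.yml that might look like this (again Rasa 1.x; the fallback action name is made up and would be a custom action that asks which skill the user wants):

```yaml
policies:
  - name: FallbackPolicy
    nlu_threshold: 0.4        # NLU confidence below this triggers the fallback
    core_threshold: 0.3
    fallback_action_name: action_ask_which_skill
```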

To clarify, I am not trying to be dismissive of the complexity of the task; rather, I lack a concrete example to help me get over my assumption that it can be done as is =) That being said, a BoB would be cool, and I wonder if something like this could be done just by using a series of NLU and NLU+Core servers in a flock of microservices, as you describe. You could have an NLU server simply distinguish the coarse-grained stuff, which Core then uses to decide which NLU server handles subsequent inputs (i.e. an NLU server for birds, an NLU server for dogs, etc.).

Anyway, if you are talking about Core, then one can specify that given entities be extracted only for certain intents, but again I confess I may not understand your situation properly. Effectively, you can make intent-specific entities with appropriate intents and actions.

Could you point me to an example of how to do this, i.e. make intent-specific entities? It looks like it would be easier than a BoB.

I have been working through what it would take to construct a BoB.
In the trainer, you’d have to loop through each intent, pulling just the examples for that intent out of the JSON, and then persist each model.
In the server startup, you would start a sub-server for each model on disk, incrementing the port number for each. When queried, instead of returning the result, the master would look at the predicted intent and pass the “text” member of the response JSON to that sub-server as a query. The response to the user would then become that server’s response. Each server would need a flag to indicate whether it is a master or a slave, but that is just housekeeping.
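A sketch of that trainer loop, assuming the pre-1.0 rasa_nlu API and the old JSON training-data format (paths are made up; in practice you would probably group several intents per sub-model so each classifier still has more than one class to learn):

```python
# Split the training data by intent, then train and persist one NLU model
# per intent, ready for the per-model sub-servers to load.
import json
from rasa_nlu import config
from rasa_nlu.model import Trainer
from rasa_nlu.training_data import load_data

with open("data/training_data.json") as f:
    examples = json.load(f)["rasa_nlu_data"]["common_examples"]

for intent in {ex["intent"] for ex in examples}:
    subset_path = f"data/{intent}.json"
    with open(subset_path, "w") as f:
        json.dump({"rasa_nlu_data": {"common_examples": [
            ex for ex in examples if ex["intent"] == intent
        ]}}, f)

    trainer = Trainer(config.load("config.yml"))
    trainer.train(load_data(subset_path))
    trainer.persist("models", fixed_model_name=intent)
```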

This is from the master branch; it looks to be the way to do multi-domain bots, but I am not able to find documentation on it: Skills

Hi ramgesg,

I looked at this link and it’s not valid anymore. It was in version 1.1.6:

But looking at this page, it’s not so clear to me that they are trying to implement a multi-skill bot. I see that this SkillSelector class seems to aggregate some paths during training, but beyond that, I’m not sure I get how it would be used.

Any comment?

Hey, did you find an answer on how to implement this?
