Is it possible to deactivate certain intents at the top level (e.g. at the beginning of a dialog)?
I’m worried about contextual intents that would be used within a form, such as confirmations like “yes” or slot-filling for numbers. My current understanding is that they are active all the time, so please correct me if I’m wrong. :slight_smile:
For example, a user might start the conversation by saying “5”. This might not be an issue for text-based chatbots, but for voice-based chatbots it could still happen (e.g. misrecognizing “hi” as “five”).
Do I understand it correctly that you’d like to somehow “block” the prediction of a certain subset of all intents in a specific situation? If that’s correct, then – abstracting away from what features Rasa has – how would you specify exactly what these situations are? Sure, you can have easy situations like “I only want to block intents XYZ at the very beginning of the conversation (i.e. as the first user intent)”, but maybe you also want to work with more complicated ones? Either way, I think if/once you know exactly how you would specify those situations in human terms, a workaround using Rasa can usually be put together (and we can talk about it then). While there isn’t a dedicated feature for this inside Rasa, many things are possible even just with rules and similar tools (see the sketch below).
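To make the “first user intent” case concrete, here is a minimal, untested sketch of what such a rule could look like. It assumes the rule syntax of Rasa 2.x+ and uses hypothetical names (an `affirm` intent and an `utter_ask_how_can_help` response) that you’d replace with whatever your domain actually defines:

```yaml
rules:
# Hypothetical rule: if the very first user message is classified as
# "affirm" (e.g. a misrecognized "hi"), don't treat it as a confirmation;
# just ask what the user wants. `conversation_start: true` restricts the
# rule to the beginning of the conversation.
- rule: Handle a stray affirmation at the start of the conversation
  conversation_start: true
  steps:
  - intent: affirm
  - action: utter_ask_how_can_help
```

This doesn’t deactivate the intent – it still gets predicted – but it lets you define what should happen when it shows up in a place where it doesn’t make sense.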
Thanks for your reply. Let’s take the example of a confirm intent. This really only makes sense after the bot has prompted for a confirmation, e.g. by asking “Is that right?”, “Do you agree?”, etc. Outside the context of such a question the confirmation intent is rather meaningless and should instead be classified as chitchat or treated as a misrecognition…
Now, my understanding (correct me if I’m wrong :-)) is that in Rasa all intents are active all the time. That surprised me, because all other dialog managers I have worked with so far were eager to switch off unneeded intents to avoid confusion in case of a misrecognition. After all, correct classification is easier if there are fewer options.
Admittedly, this does not block me at the moment… I’m working on building a purchase dialog and am still struggling with more fundamental matters, like accumulating several items the user would like to buy… (Purchase Dialog Form -- multiple slots)
Thanks for welcoming me to the forum! It is very encouraging to receive such quick feedback on my posts!
Let’s take the example of a confirm intent. This really only makes sense after the bot has prompted for a confirmation, e.g. by asking “Is that right?”, “Do you agree?”, etc.
I’d disagree, or at least say “it depends”, here. From my experience, especially from looking at the conversations people have with our demo bot Sara, sometimes people behave rather unpredictably, and it doesn’t sound right to try to limit them only to those options that the bot maker can imagine. In the case of affirming, sometimes people will say things like “alright”, “ok” or “cool” (which can serve as affirmatives), but they’ll use them merely to express a positive reaction or acknowledgement of anything the assistant has said.
One thing is for sure in my opinion: the NLU part should always classify things (and be trained to classify things) according to the real intent, not according to what the set of expected intents is. If some utterance is intent:affirm, it should always be classified under that intent, i.e. that intent should always be available to the intent classifier. I’d “block” certain intents by adding some logic on top of the classifier: some component that knows which intents are allowed given the context and changes any other predicted intent to chitchat or something else.
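Just to illustrate the idea (this is not an actual Rasa component, and all names here are made up), the post-processing logic could be as simple as remapping any prediction that isn’t in the set of currently allowed intents to a fallback intent:

```python
from typing import Dict, List

FALLBACK_INTENT = "chitchat"

def remap_unexpected_intent(parse_result: Dict, allowed_intents: List[str]) -> Dict:
    """Remap the predicted intent to a fallback if it isn't allowed in the current context."""
    result = dict(parse_result)
    predicted = result.get("intent", {}).get("name")
    if predicted is not None and predicted not in allowed_intents:
        # Keep the classifier's original prediction around for debugging/analytics.
        result["original_intent"] = result["intent"]
        result["intent"] = {"name": FALLBACK_INTENT, "confidence": 1.0}
    return result

# Example: the bot just asked "Is that right?", so only a few intents are expected.
parsed = {"text": "5", "intent": {"name": "inform_number", "confidence": 0.87}}
print(remap_unexpected_intent(parsed, allowed_intents=["affirm", "deny", "stop"]))
```

The hard part is deciding how such a component learns which intents are allowed (e.g. from the last bot utterance or the active form) – which is exactly the “how do you specify those situations” question from above.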
Either way, I’d think twice before enforcing something like this on the users of the system. It’s easy to overlook (or forget about) something that the user could reasonably say. And if the user says something but the assistant doesn’t understand it because it doesn’t expect that intent at that particular point, the user may become very frustrated… My personal opinion is that it’s better to allow all intents at all times and to cover infrequent/unexpected uses in the stories/rules logic. But I respect that some other frameworks explicitly allow for different ways of handling things.

One reason why Rasa doesn’t focus on this is that Rasa tries not to encourage building assistants whose logic is basically a state machine (i.e. with all possible states and transitions explicitly enumerated and defined). Instead, Rasa is built with maximum flexibility in mind, allowing for conversation states that the developer didn’t have in mind when building the assistant…
In any case, let me know if/once you have some more questions.