Let’s take the example of a confirm-intent. This really only makes sense after the bot has asked for a confirmation, e.g. by prompting “Is that right?”, “Do you agree?”, etc.
I’d disagree, or say “it depends”, here. From my experience, especially from looking at the conversations people have with our demo bot Sara, people sometimes behave rather unpredictably, and it doesn’t feel right to limit them to only those options that the bot maker can imagine. In the case of affirming, people will sometimes say things like “alright”, “ok” or “cool” (which can serve as affirmatives), but use them merely to express a positive reaction to, or acknowledgement of, anything the assistant has said.
One thing is for sure, in my opinion: the NLU part should always classify things (and be trained to classify things) according to the real intent, not according to the set of currently expected intents. If some utterance is intent:affirm, it should always be classified under that intent, i.e. that intent should always be available to the intent classifier. I’d “block” certain intents by adding some logic on top of the classifier: a component that knows which intents are allowed given the context and rewrites any other predicted intent to chitchat or something similar.
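To make the idea concrete, the gating step could look roughly like the sketch below. This is not Rasa’s actual API — the function name, the intent names, and the “allowed intents” set are all illustrative assumptions; the point is simply that the classifier’s real prediction stays intact and only the downstream dialogue logic sees the remapped intent:

```python
def gate_intent(predicted_intent, allowed_intents, fallback="chitchat"):
    """Remap an intent that isn't expected in the current context.

    The NLU classifier still predicts the *real* intent; this
    post-processing step only decides whether the dialogue logic
    gets to see it, rewriting disallowed intents to a fallback
    such as "chitchat". All names here are hypothetical, not
    part of Rasa's API.
    """
    if predicted_intent in allowed_intents:
        return predicted_intent
    return fallback

# Example: right after the bot asks "Is that right?", only a yes/no
# answer is expected, so only affirm and deny pass through unchanged.
allowed_after_confirmation = {"affirm", "deny"}
print(gate_intent("affirm", allowed_after_confirmation))  # "affirm"
print(gate_intent("greet", allowed_after_confirmation))   # "chitchat"
```

The key design choice is that the remapping happens after classification, so the training data and the classifier itself never have to know anything about context.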
Either way, I’d think twice before enforcing something like this on the users of the system. It’s easy to overlook (or forget about) something that a user could reasonably say, and if the assistant fails to understand an utterance simply because it doesn’t expect that intent at that particular point, the user may become very frustrated… My personal opinion is that it’s better to allow all intents at all times and to cover infrequent or unexpected uses in the stories/rules logic. That said, I respect that some other frameworks explicitly allow for different ways of handling this. One reason Rasa doesn’t focus on it is that Rasa tries not to encourage building assistants whose logic is basically a state machine (i.e. with all possible states and transitions explicitly enumerated and defined). Instead, Rasa is built with maximum flexibility in mind, allowing for conversation states that the developer didn’t have in mind when building the assistant…
In any case, let me know if/once you have more questions.