Just that. I see a max_history parameter for policies, but not for classifiers. Still, I imagine conversation history could influence the intent. Let’s say there’s a conversation going and the user inputs the exact text for a specific intent, but all stories that include the current sequence of interactions have as next intent something different. I’m wondering because this would imply I also have to look at stories when something goes wrong with intent classification.
No, history does not influence the natural language classification. The point is that you can get different answers to the same intent with the TED policy. The AI behind the TED policy "recognizes" that the same question was asked before, and so it "can decide" to answer differently.
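As an illustrative sketch of what that looks like in training data (story, intent, and response names here are hypothetical, not from any default project), you can write stories where the same intent maps to different responses depending on what came before, and TED learns that distinction from the conversation history:

```yaml
# stories.yml sketch (names are illustrative)
stories:
- story: question asked once
  steps:
  - intent: ask_opening_hours
  - action: utter_opening_hours
- story: question asked a second time
  steps:
  - intent: ask_opening_hours
  - action: utter_opening_hours
  - intent: ask_opening_hours          # same intent again
  - action: utter_opening_hours_rephrased  # TED can learn to answer differently
```

The NLU classifier still assigns `ask_opening_hours` both times; only the policy's choice of action changes.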
@harloc thank you for your reply. In that case, how would Rasa handle something like "Good evening"? That can be used both to greet and to say goodbye, and its correct classification depends entirely on context (whether it happens at the start or the end of the conversation). Just using the same intent and having the TED policy pick a different answer seems confusing and semantically unsound (greetings and farewells are mostly different things). Actually, that exact example appears in the default `rasa init` project. Even `rasa data validate` points out the problem of having two different intents with the same exact example, but since it's the official default project, I wondered.
You actually picked a case which shows one of the downsides of Rasa: it is not able to handle phrases whose meaning depends on the context. One solution to the "Good evening" example would be to have a single intent containing the mixed greeting/farewell phrases. Let's call this group of phrases `usual_phrases`. You can then use stories to train the AI which answer to choose: have some stories which start with `usual_phrases` as the first intent and train the AI to answer with `utter_greeting`, and have some stories with `usual_phrases` as the last intent and `utter_farewell` as the final action. This might still cause some confusion if the user utters several greetings and farewells randomly, but at least you can stick to Rasa and keep it simple using intents and actions. Another possibility is Rasa's end-to-end training, which came with the latest versions: you keep your usual farewells and use an ambiguous phrase like "good evening" only in end-to-end stories, which in return should help the AI learn the right behavior.
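To make the mixed-intent approach concrete, here is a rough sketch of what such stories could look like (intent, story, and response names are illustrative, not from `rasa init`):

```yaml
# stories.yml sketch for the mixed usual_phrases intent (names are hypothetical)
stories:
- story: conversation starts with an ambiguous phrase
  steps:
  - intent: usual_phrases       # e.g. "good evening" at the start
  - action: utter_greeting
  - intent: ask_something
  - action: utter_answer
- story: conversation ends with an ambiguous phrase
  steps:
  - intent: ask_something
  - action: utter_answer
  - intent: usual_phrases       # e.g. "good evening" at the end
  - action: utter_farewell
```

For the end-to-end variant, recent Rasa versions let a story step reference the raw user text directly (e.g. `- user: "good evening"` instead of an `intent:` line), so the policy can condition on the actual phrase in context.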
But you are right, it is a tricky problem. Having dialogue context available during NLU would help a lot, but it is difficult to engineer and would probably eat a lot of resources when running the Rasa instance. So I personally do not blame the Rasa team for "keeping it simple".