As we are building a very large conversational bot application, we are running into more and more issues with overlapping intents (the same applies to entities, but I'll cover that in another thread), as well as intents that mean the same thing in one context and something different in another. The official solution is to write a lot of stories so that the MemoizationPolicy and TED can learn how to handle the intents users might voice.
However, as the TwoStageFallback shows, a loop action can also be somewhat context-aware. Sadly, LoopAction is not exposed in the SDK, as far as I know.
Right now, I have tackled the problem by writing a custom policy that modifies the tracker object based on metadata hidden in the button properties of the bot utterances. I am wondering whether there is a better way, because I am effectively building a mini state machine for a single turn (then again, the MemoizationPolicy is a kind of state machine, too).
It works by assigning additional intents to the buttons via an extra metadata key, "button_intents". Some of these intents are created automatically (like "the first" and "the last" as ordinal inform intents for the button position, which you will see a lot in voice channel transcripts), and dialogue designers can add more manually. If such an intent is recognized (e.g. by DIET) in the user utterance that follows a bot utterance with buttons, the policy changes the recognized intent to the payload of the matching button; see the sketch below. TED, the MemoizationPolicy and other policies are then able to make a good prediction for the next action.
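To make the mechanism concrete, here is a minimal, self-contained sketch of the remapping step in plain Python. It is not the actual Policy subclass from the fork: the "button_intents" key follows the description above, but the intent names, payloads, and the ordinal helper are made up for illustration. In the real policy this logic runs inside the prediction step and rewrites the latest intent on the tracker.

```python
from typing import List, Optional


def ordinal_intents_for(position: int, total: int) -> List[str]:
    """Automatically generated ordinal intents for a button position
    (hypothetical names, standing in for "the first" / "the last")."""
    intents = []
    if position == 0:
        intents.append("inform_first")
    if position == total - 1:
        intents.append("inform_last")
    return intents


def remap_intent(buttons: List[dict], recognized_intent: str) -> Optional[str]:
    """If the recognized intent matches one of a button's extra intents,
    return the intent encoded in that button's payload instead."""
    for position, button in enumerate(buttons):
        extra_intents = list(button.get("button_intents", []))
        extra_intents += ordinal_intents_for(position, len(buttons))
        if recognized_intent in extra_intents:
            # Button payloads are intent triggers such as "/affirm";
            # strip the leading slash to get the intent name.
            return button["payload"].lstrip("/")
    return None  # no remapping; keep the intent DIET recognized


# Usage: the last bot utterance carried two buttons; DIET recognized the
# ordinal intent for "the last" in the user's reply, so the policy would
# substitute the second button's intent.
buttons = [
    {"title": "Yes", "payload": "/affirm", "button_intents": ["thank"]},
    {"title": "No", "payload": "/deny"},
]
print(remap_intent(buttons, "inform_last"))  # -> "deny"
```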
A modified fork of “Sara” that demonstrates the proposal is available here: raoulvm/rasa-demo at button_policy (github.com)
Do you, @koaning, see any major drawbacks with this approach?
The README on the domain format: