In my use case the chatbot mostly handles simple QA stories consisting of two turns (user asks, bot answers). In some cases the stories get longer, up to eight turns. After adding quite a lot of training data, the bot stopped handling my QA stories successfully. After I edited the default MemoizationPolicy and set the value to 1, the QA stories worked well again, but since then the longer stories no longer work. I especially run into problems with stories like this one:
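For context, my policy configuration currently looks roughly like this (a sketch assuming the standard Rasa `config.yml` policy syntax; `max_history` is my assumption for the value I set to 1, and the other policies listed are placeholders for my actual setup):

```yaml
policies:
  # The setting I changed to 1 (assuming max_history is what was meant)
  - name: MemoizationPolicy
    max_history: 1
  # Placeholders for the rest of my pipeline
  - name: KerasPolicy
  - name: FallbackPolicy
```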
```
* qredits_info
  - utter_qredits_info_prepare
* affirm
  - utter_qredits_info
```
The affirm intent is recognised, but it is not interpreted in the context of the story, and the bot jumps to the FallbackPolicy, uttering the fallback message.
In a different case I am working with buttons at the end of the first two bot utterances:
```
* no_response_to_ads
  - utter_what_type_of_ad
* no_response_to_nondating_ads
  - utter_no_response_to_nondating_ads
* deny
  - utter_no_upsell_info
```
In this case the story works correctly for the first four turns, but fails when the user presses the button for deny: the bot again falls back to the FallbackPolicy.
Both cases worked before I changed the MemoizationPolicy. I am not sure what to do to successfully cover both simple QA stories and longer ones.
Thanks in advance!