Hello,
While creating test scripts for the bot, we ran into several bizarre behaviors. We use interactive learning to generate the scripts (stories), to make sure the stories are in a format Rasa accepts. But when we run end-to-end evaluation, we noticed that the bot predicts some very strange stories.
For example:
- request_quote: I would like to get a quote
- utter_quote_form
- quote_form
- form{"name": "quote_form"}
- slot{"requested_slot": "know_product_type"}
- form: ask_whyneedinfo: Why would you need to know this?
- form: quote_form
- form: action_listen
- form: quote_form
- form: quote_form
The idea is to handle uncooperative user behavior: we have a specific action that gets called when the intent (ask_whyneedinfo) is detected. But in the test story format, adding this action results in a failing test. And as you can see, quote_form (the form action) gets called three times following the user message in this evaluation script. Also, action_listen as the second action makes very little sense, since action_listen should generally be the last action before a user message, and we have no instance of action_listen in that position in our training stories.
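For context, the custom action that answers the ask_whyneedinfo intent looks roughly like this. This is a minimal, self-contained sketch: in the real project the class subclasses rasa_sdk's Action and receives rasa_sdk dispatcher/tracker objects, which are stubbed out here with plain Python types; the action name and message text are illustrative.

```python
# Hypothetical sketch of the custom action triggered by ask_whyneedinfo.
# In a real Rasa project this would subclass rasa_sdk.Action; here the
# dispatcher is modeled as a list of messages and the tracker as a dict,
# so the sketch runs standalone.

class ActionWhyNeedInfo:
    """Explains why the quote form is asking for the current slot."""

    def name(self):
        # Must match the action name used in the domain and stories.
        return "action_whyneedinfo"

    def run(self, dispatcher, tracker, domain):
        # The slot currently requested by quote_form,
        # e.g. "know_product_type".
        requested = tracker.get("requested_slot")
        dispatcher.append(
            f"We ask for {requested} so we can prepare an accurate quote."
        )
        # Returning no events leaves requested_slot unchanged, so the
        # form re-asks the same question on the next turn.
        return []
```

The design intent is that answering the user's question does not touch the form's state, so quote_form simply resumes where it left off.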
The issue is that, while this test passes as-is, we do not understand why the correct story is not the following:
- request_quote: I would like to get a quote
- utter_quote_form
- quote_form
- form{"name": "quote_form"}
- slot{"requested_slot": "know_product_type"}
- form: ask_whyneedinfo: Why would you need to know this?
- form: quote_form
- action_whyneedinfo
- quote_form
- action_listen
This would make more sense to us. If anyone has experience with how evaluation stories are, or should be, written, we would appreciate the help.
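For reference, here is how we would expect to write that story as an end-to-end test file. This is a sketch assuming Rasa 1.x end-to-end story syntax, where user turns start with `*` followed by `intent: message text`, and the `form:` prefix marks a turn handled while the form is active; the story title is made up.

```
## quote with uncooperative user
* request_quote: I would like to get a quote
    - utter_quote_form
    - quote_form
    - form{"name": "quote_form"}
    - slot{"requested_slot": "know_product_type"}
* form: ask_whyneedinfo: Why would you need to know this?
    - action_whyneedinfo
    - quote_form
```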
Thanks.