How to handle failed_stories in Evaluating core models


I may need more information from you, but generally you can create a test stories file, say e2e_stories.md, with something like this:

This first story isn't right, so it will produce an error in the tests:

```
## bot challenge
* bot_challenge: are you a bot?
  - utter_bot

## GREETINGS
* greet: hello
  - action_utter_greet

## GOODBYES
* goodbye: bye
  - action_utter_goodbye
```

I can test it in two different ways:

1. Run `rasa test --stories e2e_stories.md --e2e`, which will run the tests and create a results folder, and I can look at the failed_stories.md in there to see what it looks like, as in the sketch below.
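A minimal shell sketch, assuming the default `results/` output directory (adjust the path if your setup writes results elsewhere):

```bash
# Run the end-to-end story tests against the trained model
rasa test --stories e2e_stories.md --e2e

# Inspect the stories whose predicted actions did not match
cat results/failed_stories.md
```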

So in this case failed_stories.md tells us that it should be `utter_iamabot` and not `utter_bot`, so we could fix that and retry:

```
## bot challenge
* bot_challenge: are you a bot?
    - utter_bot   <!-- predicted: utter_iamabot -->
    - action_listen   <!-- predicted: utter_iamabot -->
```
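The fix is just to swap in the predicted action, so the corrected story would look like this:

```
## bot challenge
* bot_challenge: are you a bot?
  - utter_iamabot
```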
2. You can run the same command with the additional option `--fail-on-prediction-errors`, which will fail without writing the results and will print a similar error to the console. This is good for use with Travis and other CI systems.
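As an illustration, here is a minimal `.travis.yml` sketch; the install and training steps are assumptions about a typical project layout, not something the command itself requires:

```yaml
language: python
python:
  - "3.6"
install:
  - pip install rasa            # or pip install -r requirements.txt
script:
  - rasa train                  # train a model before evaluating
  - rasa test --stories e2e_stories.md --e2e --fail-on-prediction-errors
```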

Let me know if you have follow-up questions or want to provide more context on your issue.

Thanks

Hi @btotharye, thanks for the response. There are hundreds of intents, so writing end-to-end stories is difficult.

I created an insurance chatbot and it works well. Then I added Dialogflow small talk, and now the bot gives the default fallback action for all intents. If I use small talk alone, it works, but combined with the insurance bot stories the problem occurs. Here are the failed stories from evaluating the core model: failed_stories.md (430 Bytes)

Well, based on those intent errors I would investigate those intents and their stories and make sure they work the way they should; those two apparently aren't working correctly.