How to handle failed_stories in Evaluating core models

I may need more information from you, but generally you can create a test stories file, say e2e_stories.md, with something like this:

This first story isn't right, so it will give an error when we run the tests:

```
## bot challenge
* bot_challenge: are you a bot?
  - utter_bot

## GREETINGS
* greet: hello
  - action_utter_greet

## GOODBYES
* goodbye: bye
  - action_utter_goodbye
```

You can test it in two different ways:

1. Run `rasa test --stories e2e_stories.md --e2e`. This runs the tests, creates a `results` folder, and writes the failing stories to `failed_stories.md`, which you can open to see what went wrong. For the stories above it looks like this:

```
## bot challenge
* bot_challenge: are you a bot?
    - utter_bot   <!-- predicted: utter_iamabot -->
    - action_listen   <!-- predicted: utter_iamabot -->
```

In this case it tells us that the action should be `utter_iamabot`, not `utter_bot`, so we can fix that and retry.
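For example, fixing the story just means swapping in the action the model predicted (this assumes `utter_iamabot` is the response defined in your domain):

```
## bot challenge
* bot_challenge: are you a bot?
  - utter_iamabot
```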
2. Run the same command with the additional `--fail-on-prediction-errors` option. Instead of writing the results, this fails outright and prints a similar error to the console, which is good for use with Travis and such; see the sketch below.
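As a minimal sketch of how that fits into a CI step (assuming your model is already trained and e2e_stories.md is the file from above), the non-zero exit code is what fails the build:

```
# Exits with an error on the first prediction mismatch
# instead of writing results/failed_stories.md.
rasa test --stories e2e_stories.md --e2e --fail-on-prediction-errors
```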

Let me know if you have follow-up questions or want to provide more context for your issue.

Thanks