Why is Rasa Core training so unstable?

In some of my skills, Rasa Core training is stable, while in others the accuracy differs a lot between runs of exactly the same training. For example, one run produces an accuracy as high as 95%, and when I repeat the training without changing anything, the accuracy can drop as low as 75%.

Is this normal? The number of stories is typically small; could that be the reason?

It depends on the stories. Do you have contradicting stories?
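For illustration, a conflict typically means that the same conversation history is followed by different bot actions in different stories. Here is a hypothetical minimal sketch in Rasa's Markdown story format (the intent and action names are made up):

```md
<!-- Hypothetical example: intent and action names are made up. -->
## story_greet_a
* greet
  - utter_greet

<!-- Same history as story_greet_a, but a different next action: a conflict. -->
## story_greet_b
* greet
  - utter_ask_how_can_help
```

Given conflicting stories like these, a policy cannot learn which action to predict after greet, so accuracy can flip between runs depending on the random seed.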

How do you test if stories are contradicting?

@Krogsager We recently created a new story validation tool for that. It is experimental, but you can try it out from a branch of the rasa repository.

Just install rasa from source in a new environment and check out the story-tree-1 branch (https://github.com/RasaHQ/rasa/tree/story-tree-1) with git checkout story-tree-1. Run pip install -e . again to make sure the dependencies are all correct. Then run rasa data validate stories --max-history 5 on your project, or whatever max_history setting the policy that troubles you is using (5 is the default for most policies). If your stories are consistent, it will output

... INFO rasa.core.validator - No story structure conflicts found.
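Putting the steps above together, here is a minimal sketch of the full sequence (the clone step and the fresh-virtualenv assumption are mine; the branch name and commands are from the post):

```sh
# Assumption: a fresh virtual environment is already activated.
git clone https://github.com/RasaHQ/rasa.git   # get the source (assumed step)
cd rasa
git checkout story-tree-1                      # the experimental validator branch
pip install -e .                               # reinstall so dependencies match the branch

# Then, from your own bot project directory; match --max-history to your
# policy's max_history setting (5 is the default for most policies).
rasa data validate stories --max-history 5
```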

I’ll post more information on this next week.

Addendum: I also created a Colab notebook where you can test the feature in the cloud. Would be great to hear your feedback!


I just tested this out and it works. I am very pleased! This should make it much easier to develop good bots - especially if it is implemented in Rasa X. Looking forward to the release!


Thank you @Krogsager!
