Rasa policies issue


I am wondering if someone has experienced a similar problem with Rasa and can give advice on how to fix it. My project is currently focused on developing questions & answers with Rasa, so stories are usually short.

The NLU module predicts intents correctly, with a confidence score over 0.9. The next action is chosen by several policies (see screenshot with settings). The problem is that for some intents the next action falls back to the FallbackPolicy, even though the intent was recognised correctly. It usually happens with a small set of questions & answers. When I add more training data for the faulty Q&A pairs and retrain the system, the chatbot predicts the next actions correctly for the retrained set, but shows the same issue with other Q&A pairs that were previously working. Is there a way to see the confidence score of the next action chosen by the policies?

We use the Rasa Docker image rasa/rasa:1.1.4-spacy-en.



You can view the policy confidences in the log: lines like Predicted next action 'action_default_fallback' with confidence 0.3 indicate the policies' predictions.
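One way to surface those log lines is to run Rasa in debug mode. A minimal sketch, assuming a standard project layout (the volume mount path is illustrative):

```shell
# Run the interactive shell with debug logging; each turn then prints
# the chosen action and its confidence, e.g.
#   Predicted next action 'action_default_fallback' with confidence 0.3
rasa shell --debug

# Or with the Docker image mentioned above:
docker run -it -v $(pwd):/app rasa/rasa:1.1.4-spacy-en shell --debug
```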

But I guess you are looking for the predictions of the policies that got overwritten? That is a good point; we should add some logging there as well. It would make a good PR.

The wrong prediction (at least in the case above) happens because no other policy predicted an action with a confidence > 0.3 - that's why the fallback is triggered. I am not sure why that is the case, though; it might be due to the training data or the model not fitting correctly.
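For reference, that 0.3 threshold comes from the FallbackPolicy settings in config.yml. A minimal sketch (the 0.3 values are the Rasa 1.x defaults; the other policies listed are just an example setup):

```yaml
policies:
  - name: FallbackPolicy
    # fallback triggers if the NLU intent confidence is below this
    nlu_threshold: 0.3
    # ...or if no Core policy predicts an action with confidence above this
    core_threshold: 0.3
    fallback_action_name: "action_default_fallback"
  - name: MemoizationPolicy
  - name: KerasPolicy
```

Raising or lowering core_threshold changes how readily the fallback wins over the other policies' predictions.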

Hi Tom, thank you for your reply! Unfortunately, I could not find any direct link between the amount of training data and the policy prediction: questions with a fair amount of data (150-200 examples per question) still get messed up, while questions with few examples get the correct answer. Adding more data was also my first thought, and I did add more examples, but the policies did not always predict better - sometimes they did, sometimes they didn't - and every time they "messed up" other questions that were previously answered correctly. To me it looks like a random choice, so I am not sure why this happens or how to fix it.