I am wondering if someone has experienced a similar problem with Rasa and can give advice on how to fix it. My project currently focuses on question-and-answer interactions with Rasa, so the stories are usually short.
The NLU module predicts intents correctly, with confidence scores above 0.9. The next action is chosen by several policies (see the screenshot with the settings). The problem is that for some intents the next action falls through to the FallbackPolicy, even though the intent is recognised correctly. This usually happens with a small set of question-and-answer pairs. When I add more training data for the faulty Q&A pairs and retrain the system, the chatbot predicts the next actions correctly for the retrained set, but then shows the same issue with other Q&A pairs that were previously working. Is there a way to see the confidence score of the next action chosen by the policies?
We use the Rasa Docker image `rasa/rasa:1.1.4-spacy-en`.
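For reference, my policy configuration looks roughly like the following (the exact threshold values here are illustrative, not necessarily what is in my screenshot):

```yaml
# config.yml — policies section (threshold values are illustrative)
policies:
  - name: MemoizationPolicy
  - name: KerasPolicy
  - name: MappingPolicy
  - name: FallbackPolicy
    nlu_threshold: 0.3     # falls back if intent confidence is below this
    core_threshold: 0.3    # falls back if action confidence is below this
    fallback_action_name: action_default_fallback
```

My understanding is that the FallbackPolicy can trigger on `core_threshold` (low action confidence from the core policies) even when the NLU intent confidence is high, which might explain what I am seeing.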