I’m working with Rasa 2.3.4.
I have two retrieval intents (faq and chitchat). When I provide a random input, Rasa NLU classifies it under one of those retrieval intents, when logically it should be classified as nlu_fallback.
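For context, I expect the nlu_fallback intent to come from the FallbackClassifier at the end of my pipeline. Roughly, my setup looks like this (the threshold values here are illustrative, not necessarily the ones in my config.yml):

```yaml
pipeline:
  # ... featurizers and DIETClassifier above ...
  - name: FallbackClassifier
    threshold: 0.7            # illustrative: nlu_fallback should be predicted
    ambiguity_threshold: 0.1  # when top confidence falls below the threshold
```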
This problem occurred in previous versions as well.
I found that in the new version 2.3.4, the notes on model_confidence say: “This should ease up tuning fallback thresholds as confidences for wrong predictions are better distributed across the range [0, 1]”. But in my case, that didn’t help.
Attachments:
- config.yml (528 Bytes)
- domain.yml (705 Bytes)
- chitchat.yml (1.2 KB)
- faq.yml (598 Bytes)
- rules.yml (282 Bytes)
I also tried varying the model_confidence parameter (softmax, linear_norm, and even cosine in <=2.3.3).
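Concretely, these are the variants I tried, one at a time, in the classifier section of config.yml (component parameters shown here are the only ones I changed; the rest of my config stayed as-is):

```yaml
pipeline:
  # ... featurizers above ...
  - name: DIETClassifier
    model_confidence: linear_norm  # also tried softmax; cosine only works on <=2.3.3
```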
When I execute the `rasa shell nlu` command, I find that the confidence for random inputs is far too high. For example, an input like “ab” gets a high confidence even though this token doesn’t exist in the training data, and on top of that my min/max char n-gram is set to 4.
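To make the n-gram point concrete, my featurizer is configured along these lines (a sketch; the min/max of 4 is the setting I mentioned above). With these settings, a two-character input like “ab” produces no char n-gram features at all, yet it still receives a high intent confidence:

```yaml
pipeline:
  - name: CountVectorsFeaturizer
    analyzer: char_wb  # character n-grams inside word boundaries
    min_ngram: 4       # "ab" is shorter than this, so it yields no features
    max_ngram: 4
```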
I tried testing this on other projects with more training data, but I always get the same results.
I think this is a real problem, especially when building Q/A assistants.
I hope you could give me some insights on that. Thanks.