Intent incorrectly recognised with high confidence

Hi all,

I have been testing Rasa for use on a new project and have run into some issues with it incorrectly recognising intents with high confidence.

Examples:

  1. ‘greek food’ is recognised as mood_great intent with > 0.99999 confidence

Rasa shell output extract: Received user message 'greek food' with intent '{'name': 'mood_great', 'confidence': 0.9999910593032837}' and entities '

I suspect this is because ‘greek’ is similar to ‘great’ and ‘food’ is similar to ‘good’, so Rasa is treating ‘greek food’ as similar to ‘great good’, based on these examples in the default nlu.yml for the ‘mood_great’ intent:

  • great
  • I am great
  • extremely good
  • so good

  2. ‘ok’ is recognised as deny intent with > 0.98 confidence

Rasa output extract: Received user message 'ok' with intent '{'name': 'deny', 'confidence': 0.9822949171066284}'

This one I’m finding a bit more difficult to explain; I guess ‘ok’ is slightly similar to ‘no’.

This is using rasa init; rasa train; rasa shell --debug, with the default (unmodified) Rasa 3.5.6 on macOS-13.2.1-x86_64-i386-64bit.

I’m concerned that it won’t be feasible for me to use Rasa on my project unless I can improve intent recognition accuracy, as I would like to fall back to a human when the message intent is not understood.

That is, I’m hoping to find some method, such as modifying config.yml, to significantly lower the confidence on such incorrectly recognised intents.
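
For context, the fallback mechanism I have in mind is the confidence threshold on the FallbackClassifier in config.yml, which (as I understand it) replaces low-confidence predictions with an nlu_fallback intent that a rule can then route to a human. A minimal sketch, assuming the rest of the pipeline stays as generated by rasa init:

```yaml
# config.yml (pipeline excerpt) - sketch only
pipeline:
  # ... default tokenizer, featurizers and DIETClassifier as generated by rasa init
  - name: FallbackClassifier
    threshold: 0.7            # predict nlu_fallback if the top intent's confidence is below this
    ambiguity_threshold: 0.1  # also fall back if the top two intents are within this margin
```

But this only helps if wrongly matched messages like ‘greek food’ actually receive low confidence, which is exactly what is not happening in the examples above.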

Thanks for any help, Simon

Probably you have NLU training data issues - not enough examples, or conflicting examples between intents. I would run cross-validation testing and review the results/intent_errors.json.
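Concretely, that means running rasa test nlu --cross-validation against your training data; the wrongly classified examples end up in results/intent_errors.json together with the predicted intent and its confidence.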

Hi Stephen,

Thanks for your reply.

I’m using the default Rasa data. Do you think this default data has conflicts?

I could certainly add an extra ‘ok’ example to the existing ‘affirm’ intent, and I expect this would resolve the issue with ‘ok’ being incorrectly identified as the ‘deny’ intent.
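
Concretely, that change is just one extra line in data/nlu.yml, roughly as below (the surrounding examples are from memory of the default data, so treat them as illustrative):

```yaml
# data/nlu.yml (excerpt) - adding an 'ok' example to the affirm intent
nlu:
- intent: affirm
  # the extra 'ok' example is appended at the end of the list
  examples: |
    - yes
    - indeed
    - of course
    - that sounds good
    - correct
    - ok
```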

But I don’t see how this addresses my concern in general.

For the specific example of ‘greek food’ being incorrectly identified as ‘mood_great’, it’s not clear to me why adding more correct examples to the ‘mood_great’ intent would fix the underlying problem of an incorrect intent being identified with high confidence.

More generally, my concern is not these two examples in particular, but incorrect intents being identified with high confidence in general, as this will compromise my chatbot’s ability to fall back to a human by filtering on confidence level. If a user enters text that is incorrectly recognised with high confidence, they will be taken down the wrong conversation path, and I expect this will result in a bad experience for that user.

As a workaround I can create a general out-of-scope intent and populate it with examples of utterances that are incorrectly matched with high confidence, like ‘greek food’. But this seems more like a workaround than a real solution; ideally I would like to understand and address the root cause of the problem.
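
For reference, the workaround I mean is roughly the following, plus a rule or story that responds to the new intent (the intent name and the second example are placeholders I made up):

```yaml
# data/nlu.yml (excerpt) - workaround, not a root-cause fix
nlu:
- intent: out_of_scope
  # 'greek food' is the utterance that was wrongly matched with high confidence;
  # the second example is a hypothetical placeholder
  examples: |
    - greek food
    - book me a table for two
```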

Any help appreciated, Simon

Hey @SimonClarke, did you ever find a solution to this? I am facing the same problem trying to trigger a fallback.