Interpretation of the graph: intent prediction confidence distribution

Hello everyone,

I’m building a bot for Spanish, and in the pipeline I switched from SpacyFeaturizer to CountVectorsFeaturizer. I don’t know if it’s related to that change, but when I run `rasa test`, I get this graph:
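For context, here is a minimal sketch of the kind of pipeline change I mean. The exact components and settings are assumptions (the classifier line in particular depends on your Rasa version), not my real config:

```yaml
# Before: dense features from pretrained Spanish spaCy vectors
language: es
pipeline:
  - name: SpacyNLP
  - name: SpacyTokenizer
  - name: SpacyFeaturizer
  # ... classifier component depends on Rasa version ...

# After: sparse bag-of-words features
language: es
pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer
  # ... same classifier as above ...
```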

I found it a little weird that there are no wrong intent predictions and that the axis contains negative numbers; I don’t know exactly how to interpret this. When I used the SpacyFeaturizer, it looked like this:

I don’t know if this is related to the CountVectorsFeaturizer, or whether there really are no wrong intent predictions.

Thank you very much in advance; I’m a bit stuck analysing these graphs.


Hi @mar. Would it be possible for you to share your NLU training data, domain, and config?

Is it okay if I only share the domain and the config? I’m not sure if I can share the training data too…

domain.yml (1.7 KB) config.yml (2.0 KB)

Thank you so much.

Hey @mar, can you check the intent_errors.json file which should’ve been created at the same time (and in the same directory) as the graph? If that file shows no errors for the case when you use CountVectorsFeaturizer, then maybe there really weren’t any mistakes, though it seems a bit odd :thinking:
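To make the check easier, here is a small sketch that summarizes the entries in `intent_errors.json`. I’m using an inline sample that mimics the file’s structure (each entry has the true `intent`, the `text`, and the model’s `intent_prediction` with a `confidence`); normally you’d load the file `rasa test` wrote next to the graph:

```python
import json

# Inline sample mimicking intent_errors.json; replace with
# json.load(open("results/intent_errors.json")) in practice.
sample = json.loads("""
[
  {"text": "hola", "intent": "greet",
   "intent_prediction": {"name": "goodbye", "confidence": 0.41}}
]
""")

def summarize(errors):
    """Return (error count, list of 'true -> predicted (conf)' lines)."""
    lines = [
        f"{e['intent']} -> {e['intent_prediction']['name']} "
        f"({e['intent_prediction']['confidence']:.2f})"
        for e in errors
    ]
    return len(errors), lines

count, lines = summarize(sample)
print(count)  # a count of 0 would mean the graph really shows no mistakes
for line in lines:
    print(line)
```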


I’m a bit lost, because in none of the cases does the intent match the intent_prediction, and I don’t understand why.

Well, these are clear intent prediction errors (apparently, the intents criterio11 and criterio12 get confused). If this really is the run with CountVectorsFeaturizer, then it means the graph is ignoring some mistakes, which it shouldn’t. In that case, @mar, please report this as a bug on GitHub and we’ll look into it. However, you might need to include a minimal version of your NLU data in the report so that we can actually reproduce the bug…
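A minimal reproducible NLU file for such a report could look like the sketch below (this assumes the Markdown NLU format; the example utterances are invented placeholders, not your real training data):

```md
## intent:criterio11
- cumple el criterio once
- se aplica el criterio 11

## intent:criterio12
- cumple el criterio doce
- se aplica el criterio 12
```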