Extracting All Confidence Levels for Predictions

Hello!

I am using the RASA NLU component to run some benchmarks on intent classification and entity extraction, using RASA train/test on my local machine.

When browsing, for instance, the intent errors file produced after inference, I can only see that an erroneous intent class X was predicted with confidence Y. Is there a way to check how much confidence was assigned to the correct class, or to any other class?

Thank you!

Hey @radandreicristian that’s not part of the errors file at the moment, no. Out of curiosity: why do you want to see this?
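In the meantime, one way to get at the full distribution (just a rough sketch, not something the test report writes out) is to re-parse the misclassified examples against a running server started with `rasa run --enable-api`: the `/model/parse` endpoint returns an `intent_ranking` with a confidence for every intent, not just the winning one. The snippet below assumes the default port 5005 and the usual `text` / `intent` / `intent_prediction` fields in `intent_errors.json`, so double-check both against your setup.

```python
import json

import requests

PARSE_URL = "http://localhost:5005/model/parse"  # server started with `rasa run --enable-api`

# Errors written by `rasa test nlu` (path and field names assumed from the default output).
with open("results/intent_errors.json") as f:
    errors = json.load(f)

for error in errors:
    text = error["text"]
    gold_intent = error["intent"]                    # the annotated (correct) intent
    predicted = error["intent_prediction"]["name"]   # the intent the model actually chose

    # Re-parse the utterance; the response includes `intent_ranking`,
    # a list of {"name": ..., "confidence": ...} entries, one per intent.
    response = requests.post(PARSE_URL, json={"text": text})
    response.raise_for_status()
    confidences = {r["name"]: r["confidence"] for r in response.json().get("intent_ranking", [])}

    print(
        f"{text!r}: predicted {predicted} ({confidences.get(predicted, 0.0):.3f}), "
        f"correct {gold_intent} ({confidences.get(gold_intent, 0.0):.3f})"
    )
```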

Alright, thanks for the help. Might fork the source and see if I can tweak it to my needs.

The scenario I am dealing with involves multiple intents, some of which have opposing meanings (e.g. turn the light on/off), for the Romanian language.

I am developing a technique to modify word embeddings so that they incorporate antonymy relationships (as per https://arxiv.org/pdf/1603.00892v1.pdf, but modified to suit my needs).

I am trying different evaluation methods, including RASA. During the experiments, I have seen that the confidence with which the incorrect intent is predicted decreases with our pre-processing method, but I am curious whether the confidence for the correct intent has increased as well (for predictions that are still wrong, just with lower confidence).
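For what it's worth, with the rankings from the sketch above you could quantify exactly that: average the confidence assigned to the correct intent over the examples that are still misclassified, once per embedding variant, and compare the two numbers. Another small sketch, with hypothetical result paths and ports for the two runs:

```python
import json
from statistics import mean

import requests


def mean_gold_confidence(errors_path: str, parse_url: str) -> float:
    """Average confidence assigned to the annotated intent over the
    examples listed in an intent_errors.json file."""
    with open(errors_path) as f:
        errors = json.load(f)

    gold_confidences = []
    for error in errors:
        response = requests.post(parse_url, json={"text": error["text"]})
        response.raise_for_status()
        ranking = response.json().get("intent_ranking", [])
        confidences = {r["name"]: r["confidence"] for r in ranking}
        gold_confidences.append(confidences.get(error["intent"], 0.0))

    return mean(gold_confidences) if gold_confidences else 0.0


# Hypothetical setup: one trained model served per embedding variant.
baseline = mean_gold_confidence("results_baseline/intent_errors.json",
                                "http://localhost:5005/model/parse")
modified = mean_gold_confidence("results_counterfit/intent_errors.json",
                                "http://localhost:5006/model/parse")
print(f"mean correct-intent confidence: baseline {baseline:.3f}, counter-fitted {modified:.3f}")
```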
