In evaluate.py, is there a way to capture successes for entities?

I am using ner_crf in my pipeline. When I run evaluate.py to get my performance statistics, I get summary results for both intents and entities. On top of that, if I include the successes command-line argument, I get both predicted and expected intent values (with confidences). Unfortunately, I cannot find a way to get the same predicted/expected pairs for entities. F-scores are great for showing that I am doing well overall, but I need to examine my misses. Is there an easy way to access the entities I am not predicting correctly (both FPs and FNs, really)?

Hi there @grjasewe,

Unfortunately, errors.json (which shows mismatched expected/predicted values) is currently only implemented for intent classification, not entity extraction. This is a feature request we're aware of, however, and we're considering implementing it when we get the chance. :slight_smile: Sorry there's no easy solution at the moment!
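
In the meantime, one workaround is to load your trained model, run it over your labelled examples yourself, and diff the entity annotations. Below is a minimal sketch using the Rasa NLU Interpreter; the model directory and test-data path are placeholders for your own, and matching entities by exact (start, end, label) is just one simple strategy:

```python
from rasa_nlu.model import Interpreter
from rasa_nlu.training_data import load_data

# Placeholders -- point these at your own trained model and labelled data.
MODEL_DIR = "./models/nlu/default/current"
TEST_DATA = "./data/test_data.json"

interpreter = Interpreter.load(MODEL_DIR)
training_data = load_data(TEST_DATA)

def as_key(entity):
    # Compare entities by exact span and label; relax this if you
    # also want to treat boundary errors as partial matches.
    return (entity["start"], entity["end"], entity["entity"])

for msg in training_data.training_examples:
    expected = msg.get("entities") or []
    predicted = interpreter.parse(msg.text).get("entities", [])

    expected_keys = {as_key(e) for e in expected}
    predicted_keys = {as_key(e) for e in predicted}

    # FN: annotated but not predicted; FP: predicted but not annotated.
    false_negatives = [e for e in expected if as_key(e) not in predicted_keys]
    false_positives = [e for e in predicted if as_key(e) not in expected_keys]

    if false_negatives or false_positives:
        print("Text:", msg.text)
        for e in false_negatives:
            print("  missed:  ", e["entity"], "->", e.get("value"))
        for e in false_positives:
            print("  spurious:", e["entity"], "->", e.get("value"),
                  "(confidence: {})".format(e.get("confidence")))
```

This prints only the examples with mismatches, which should give you the FP/FN detail that the summary F-scores hide.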

Disappointed, but thanks. Do you have a link to the feature request so that I may follow its progress?

Here – I see it was tagged with Help Wanted a while ago – feel free to implement the feature yourself and open a pull request if you're willing and able.