Just like ‘run_evaluation’ saves intent errors to errors.json, is there a way to get entity errors once model evaluation is done?
‘run_evaluation’ gives precision, recall, and F1 score for the extracted entities, but there seems to be no straightforward way to figure out which predictions caused a bad precision, recall, or F1 score.
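For context, this is roughly how I run the evaluation today (the paths are placeholders for my project layout), plus a rough sketch of the kind of per-example entity error dump I would otherwise have to assemble by hand from the interpreter and the test data. I'm asking whether something like this already exists built in:

```python
from rasa_nlu.evaluate import run_evaluation
from rasa_nlu.model import Interpreter
from rasa_nlu.training_data import load_data

TEST_DATA = "data/test_data.json"        # placeholder paths for my setup
MODEL_DIR = "models/nlu/default/current"

# Writes intent misclassifications to errors.json,
# but I can't find an equivalent dump for entities.
run_evaluation(TEST_DATA, MODEL_DIR)

# Sketch of what I'd have to do manually instead:
# compare predicted entities against the annotated ones
# and keep the mismatches.
interpreter = Interpreter.load(MODEL_DIR)
entity_errors = []
for example in load_data(TEST_DATA).entity_examples:
    expected = example.get("entities", [])
    predicted = interpreter.parse(example.text).get("entities", [])
    if [(e["start"], e["end"], e["entity"]) for e in expected] != \
            [(e["start"], e["end"], e["entity"]) for e in predicted]:
        entity_errors.append({
            "text": example.text,
            "expected": expected,
            "predicted": predicted,
        })

print("entity errors:", len(entity_errors))
```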