Hi there,
When we get the results of the NLU cross-validation test, how can we find out exactly which examples are behind the "confused_with" counts?
For instance, in:

```json
"modelo_44": {
  "precision": 0.4722222222222222,
  "recall": 0.5666666666666667,
  "f1-score": 0.5151515151515152,
  "support": 30,
  "confused_with": {
    "fora_âmbito": 4,
    "modelo_10": 3
```
is it possible to know which examples/questions were confused with "fora_âmbito" and "modelo_10"?
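(In case it helps frame the question: Rasa's test runs also write a per-example error report, commonly `results/intent_errors.json`, listing each misclassified example with its true and predicted intent. Assuming that file and its usual shape, a sketch of filtering it for the confusions above would look like this; the sample texts and the exact field names here are assumptions, not output from my model:)

```python
import json

# Assumed entry shape of Rasa's intent error report (results/intent_errors.json):
# {"text": ..., "intent": ..., "intent_prediction": {"name": ..., "confidence": ...}}
# Sample data stands in for: errors = json.load(open("results/intent_errors.json"))
errors = [
    {"text": "example question 1", "intent": "modelo_44",
     "intent_prediction": {"name": "fora_âmbito", "confidence": 0.61}},
    {"text": "example question 2", "intent": "modelo_44",
     "intent_prediction": {"name": "modelo_10", "confidence": 0.55}},
]

def confused_examples(errors, true_intent, predicted_intent):
    """Return the texts labelled `true_intent` but predicted as `predicted_intent`."""
    return [e["text"] for e in errors
            if e["intent"] == true_intent
            and e["intent_prediction"]["name"] == predicted_intent]

# The 4 examples counted under "fora_âmbito" would be recovered like this:
print(confused_examples(errors, "modelo_44", "fora_âmbito"))
```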
Thanks to all of you!
Pedro Lopes