Cross-validation report

Hi there,

When we get the results of the NLU cross-validation test, how can we know exactly which examples are behind "confused_with"?

For instance, in:

    "modelo_44": {
      "precision": 0.4722222222222222,
      "recall": 0.5666666666666667,
      "f1-score": 0.5151515151515152,
      "support": 30,
      "confused_with": {
        "fora_âmbito": 4,
        "modelo_10": 3
      }
    }

is it possible to know which examples/questions were confused with "fora_âmbito" and "modelo_10"?

Thanks to all of you!

Pedro Lopes


Look at the intent_errors.json output to see the incorrect predictions. Here's an example where the expected intent was kanye_quote but the prediction was quote.

  {
    "text": "give me a kanye quote",
    "intent": "kanye_quote",
    "intent_prediction": {
      "name": "quote",
      "confidence": 0.9942581653594971
    }
  },
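
If you want to pull those examples out programmatically, a short script can filter intent_errors.json by expected intent and prediction. Below is a minimal sketch, assuming the default results/ output folder and the record structure shown above; the intent names are taken from Pedro's report.

    import json

    # Intent from the cross-validation report, and the intents it was
    # confused with (both taken from the report above).
    TARGET_INTENT = "modelo_44"
    CONFUSED_WITH = {"fora_âmbito", "modelo_10"}

    # Adjust the path if your results are written somewhere else.
    with open("results/intent_errors.json", encoding="utf-8") as f:
        errors = json.load(f)

    # Print each misclassified example behind the "confused_with" counts.
    for e in errors:
        pred = e["intent_prediction"]
        if e["intent"] == TARGET_INTENT and pred["name"] in CONFUSED_WITH:
            print(f'{e["text"]!r}: expected {e["intent"]}, '
                  f'got {pred["name"]} ({pred["confidence"]:.2f})')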

Thanks a lot, Greg!