Is there any way to check incorrect predictions during cross-validation?

Hello,

I am new to NLP. I've started using Rasa NLU on a dataset of 35k examples, training with the tensorflow_embedding pipeline, and ran a 10-fold cross-validation evaluation.
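This is roughly the command I used (the data and config paths here are placeholders for my actual files):

python -m rasa_nlu.evaluate --mode crossvalidation -d data/nlu_data.json -c config.yml

It produced the following results: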

2019-05-08 07:56:15 INFO     rasa_nlu.model  - Finished training component.
2019-05-08 08:01:51 INFO     __main__  - CV evaluation (n=10)
2019-05-08 08:01:51 INFO     __main__  - Intent evaluation results
2019-05-08 08:01:51 INFO     __main__  - train Accuracy: 0.952 (0.003)
2019-05-08 08:01:51 INFO     __main__  - train F1-score: 0.947 (0.003)
2019-05-08 08:01:51 INFO     __main__  - train Precision: 0.954 (0.003)
2019-05-08 08:01:51 INFO     __main__  - test Accuracy: 0.932 (0.005)
2019-05-08 08:01:51 INFO     __main__  - test F1-score: 0.927 (0.006)
2019-05-08 08:01:51 INFO     __main__  - test Precision: 0.932 (0.006)
2019-05-08 08:01:51 INFO     __main__  - Entity evaluation results
2019-05-08 08:01:51 INFO     __main__  - Entity extractor: ner_crf
2019-05-08 08:01:51 INFO     __main__  - train Accuracy: 0.985 (0.000)
2019-05-08 08:01:51 INFO     __main__  - train F1-score: 0.985 (0.000)
2019-05-08 08:01:51 INFO     __main__  - train Precision: 0.985 (0.000)
2019-05-08 08:01:51 INFO     __main__  - Entity extractor: ner_crf
2019-05-08 08:01:51 INFO     __main__  - test Accuracy: 0.983 (0.001)
2019-05-08 08:01:51 INFO     __main__  - test F1-score: 0.982 (0.001)
2019-05-08 08:01:51 INFO     __main__  - test Precision: 0.982 (0.001)
2019-05-08 08:01:51 INFO     __main__  - Finished evaluation

Is there any way to see where my dataset is lacking, i.e. which examples are predicted incorrectly while the dataset is being tested?
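For a plain train/test run (--mode evaluation), I believe the script can save the misclassified examples, roughly like this (the test data and model paths are placeholders):

python -m rasa_nlu.evaluate --mode evaluation -d test_data.json -m projects/default/my_model --errors errors.json --confmat confmat.png

But as far as I can tell, crossvalidation mode only prints the aggregate metrics above. Is there an equivalent way to get the per-example errors for each fold?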