Cross validation results explanations

Hi!

I’m sorry for this really noob question…

I’m using cross-validation to check the health of my NLU model. Which keys are actually useful, and how should I interpret them? Can you help me?

I understand that the cross-validation process works in “folds”: it splits all the data into a training set and a testing set, runs the checks, and repeats this N times with different splits…
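To make sure I understand the mechanics, here is a minimal sketch of what I think k-fold cross-validation does, using a toy scikit-learn classifier on synthetic data (not the actual Rasa NLU pipeline — the classifier, dataset, and fold count here are just illustrative assumptions):

```python
# Toy k-fold cross-validation sketch: train on k-1 folds, score on the
# held-out fold, repeat, then report mean/std — like the keys in my results.
from statistics import mean, stdev

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for intent-labeled NLU examples (3 "intents").
X, y = make_classification(
    n_samples=200, n_classes=3, n_informative=5, random_state=0
)

train_scores, test_scores = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    # Score both on the data the model saw (train) and on the held-out fold
    # (test); the train/test gap is what my results below seem to show.
    train_scores.append(accuracy_score(y[train_idx], clf.predict(X[train_idx])))
    test_scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"train accuracy mean={mean(train_scores):.3f} std={stdev(train_scores):.3f}")
print(f"test  accuracy mean={mean(test_scores):.3f} std={stdev(test_scores):.3f}")
```

If that sketch is right, then `intents.train.*` and `intents.test.*` would be exactly these two lists averaged over the folds.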

But with this kind of result:

intents.train.accuracy.mean: 0.973076923076923
intents.train.accuracy.std: 0.009421114395319882
intents.train.f1-score.mean: 0.9734318245856708
intents.train.f1-score.std: 0.008815357491779221
intents.train.precision.mean: 0.9789606227106227
intents.train.precision.std: 0.005836969327022949

intents.test.accuracy.mean: 0.7384615384615384
intents.test.accuracy.std: 0.10434353820192721
intents.test.f1-score.mean: 0.6913553113553114
intents.test.f1-score.std: 0.1129364325273312
intents.test.precision.mean: 0.6884615384615385
intents.test.precision.std: 0.1131986669468929

Why are the training scores so good while the testing scores are so bad? In other words, which numbers reflect the real accuracy/F1-score/precision of my model on my full training data: the “train” results or the “test” results?

Thanks!