Hello, I’m trying to understand why, when I evaluate my model, the rasa command goes through my test fold twice, doubling the time it takes to complete.
Here’s the command I launch:
python3 -m rasa_nlu.evaluate -d data/fr_init_data_set_test_2.json -m models/bert_feature_test_on_FR/nlu --report reports/bert_feature_test_on_FR --histogram reports/bert_feature_test_on_FR/hist.png --confmat reports/bert_feature_test_on_FR/confmat.png --errors reports/bert_feature_test_on_FR/errors.json --debug
When I take a look at my error logs, I can see that each sentence from my test fold of 11 sentences appears twice, as if the evaluation were launched two times, one after the other.
My package versions:
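To quantify this, here is a minimal sketch of how one could count the duplicated sentences in the errors file. It assumes errors.json is a JSON array whose entries each carry a "text" field; adjust the key if the file is structured differently.

```python
import json
from collections import Counter

def duplicate_texts(path):
    """Return {sentence: count} for sentences that appear more than once
    in an evaluation errors file. Assumes the file is a JSON array of
    entries with a "text" field -- adjust the key if yours differs."""
    with open(path) as f:
        errors = json.load(f)
    counts = Counter(entry["text"] for entry in errors)
    return {text: n for text, n in counts.items() if n > 1}

# Usage (path from the command above):
# dups = duplicate_texts("reports/bert_feature_test_on_FR/errors.json")
# print(f"{len(dups)} sentence(s) appear more than once")
```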
rasa-core==0.13.8 rasa-core-sdk==0.12.2 rasa-nlu==0.14.6
The logs don’t show anything suspicious between the last sentence of the first run on the test fold and the first sentence of the second run.
Is this supposed to happen? Or do you have an idea of what may be going on? I don’t think this is the expected behavior when evaluating a model, as I don’t see the point of running the same prediction twice on the same examples without modifying the model in the meantime.