Hello,
I am using the NLU component of Rasa to benchmark different language model featurizers for intent classification. I have experimented with training the model for different numbers of epochs, but I would like to use some validation data to decide when it is best to stop training. Running a separate experiment for each number of epochs is time-consuming, so early stopping would help here.
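For context, the kind of pipeline I am benchmarking looks roughly like this (the model weights and epoch count below are just placeholders that I swap out per experiment):

```yaml
# config.yml (sketch) - one of the pipelines being compared
pipeline:
  - name: WhitespaceTokenizer
  - name: LanguageModelFeaturizer
    model_name: "bert"                  # featurizer varied across experiments
    model_weights: "bert-base-uncased"  # placeholder weights
  - name: DIETClassifier
    epochs: 100                         # value I currently vary by hand
```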
However, I haven’t seen anything in the docs about validation data or early stopping. Is such a feature currently implemented?
Thank you!