Creating checkpoints in model training

Hi, is there any way to create checkpoints during model training?

It takes a long time to train my model, and if I want to change something in the NLU data I have to train the model again from the start. It’s annoying because Core training takes 99.9% of the training time, while NLU is trained afterwards within a few seconds. So if I could create a checkpoint after Core training, or something similar, I could save a lot of time.

Hey @faheemv, neither Core nor NLU should re-train unless something changes in the training data or the domain definition or the config file (as hinted in the docs). What changes are you making before your Core starts training from scratch?

Hi @SamS, I just wanted to teach my model to understand some rephrased sentences, so I added a few lines under an existing intent in the training data; I don’t think it should be necessary to retrain Core just to teach NLU something. I am using Rasa 1.1.4, so NLU and Core are trained together to produce the model, right? Also, in case of a power failure, is there any chance to continue my training from where I left off, or at least somewhere near it?

Hey @faheemv, now I’m not sure if I understand you correctly. If you just want to train and use NLU, then these docs will help you. If you want to use both NLU and Core, but only re-train NLU, then what I said earlier applies (Core won’t be trained again unless something in the stories or in the Core configs changes).
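For reference, the Rasa 1.x CLI lets you pick which part gets trained (a sketch of the two relevant commands; exact behavior may vary slightly by version):

```shell
# Train only the NLU model; the Core policies are left untouched.
rasa train nlu

# Full training run: Rasa fingerprints the training data, domain,
# and config, and skips retraining any part that hasn't changed.
rasa train
```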

As for saving actual checkpoints during training, I’m afraid that Rasa doesn’t support this right now, though you may be able to change the TensorFlow code to create or load checkpoints. Does this help?
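To illustrate the general idea (this is only a minimal sketch of the checkpoint-and-resume pattern, not Rasa’s or TensorFlow’s actual internals; the file name and state dict are made up), a training loop can periodically persist its state and pick up from the last saved point after an interruption:

```python
import os
import pickle

CKPT = "checkpoint.pkl"  # hypothetical checkpoint path

def train(total_steps=10, save_every=2):
    # Resume from the last checkpoint if one exists,
    # e.g. after a power failure mid-training.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"step": 0, "loss": 100.0}

    while state["step"] < total_steps:
        state["step"] += 1
        state["loss"] *= 0.9  # stand-in for a real training update
        if state["step"] % save_every == 0:
            # Persist progress so at most `save_every` steps are lost.
            with open(CKPT, "wb") as f:
                pickle.dump(state, f)
    return state
```

An interrupted run would simply call `train()` again and continue from the step recorded in the checkpoint file rather than from step 0.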

Sorry, there was some misunderstanding on my part. Thanks for your help @SamS.