Cannot train finetuned model

When I train a model and then try to use it for further finetuning, I get an error:
ValueError: Shapes (2575, 128) and (2570, 128) are incompatible
I configured CountVectorsFeaturizer and RegexFeaturizer as described in the docs, but it didn’t help. Does anyone know what could be causing this? Or should I specify the last trained-from-scratch model as the one to finetune?

Hi, welcome to the Rasa forums!

Did you configure the featurizers before you trained your first model, or after? If it was after, the featurizer dimensions were already fixed on the first run. The configuration works by reserving extra space up front so the dimensions stay stable as more data is added; it can’t help once a model has already been trained without that reserved space.

Also, if you have a lot of new tokens that haven’t previously been seen this run (more than 1000 new ones if you’re following the example in the docs) then you’ll have to retrain from scratch. If you expect to see a lot of novel tokens as you retrain your model, you can set the various additional_vocabulary_size arguments to a larger number.
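For reference, these settings live on the featurizer entries in the pipeline section of `config.yml`, and they need to be there before the very first `rasa train`. A sketch of what that might look like (the exact keys and values here, e.g. `number_additional_patterns` and the `additional_vocabulary_size` sub-keys, are from the incremental-training docs for Rasa 2.x and should be checked against the docs for your version):

```yaml
# config.yml — reserve extra featurizer capacity BEFORE the first training run,
# so later finetuning runs can add new tokens/patterns without changing shapes.
pipeline:
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
    # room for regex patterns added in future training data
    number_additional_patterns: 10
  - name: CountVectorsFeaturizer
    # room for vocabulary tokens not seen in the first run
    additional_vocabulary_size:
      text: 1000
      response: 1000
      action_text: 1000
  - name: DIETClassifier
    epochs: 100
```

Then later runs can finetune from the previous model with `rasa train --finetune`. If the new data exceeds the reserved capacity, you’re back to retraining from scratch.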

Thank you for the reply!

Did you configure the featurizers before you trained your first model or after?

Before

About new tokens: I also tried a simplified example with only a few intents and very little data, and set additional_vocabulary_size to bigger numbers (even 10000), but it didn’t help.

Could it be that other featurizers (e.g., LanguageModelFeaturizer, LexicalSyntacticFeaturizer) also need additional config?

Definitely possible that another featurizer is the culprit! I’d actually open a GitHub issue about this and include your config file: Issues · RasaHQ/rasa · GitHub. It sounds like either something is missing from the docs about freezing featurizer dimensions, or something isn’t working as intended.

Got it. Here is the issue I’ve created:
