I’m not sure I understand the difference between what I’m doing and, say, using `LanguageModelFeaturizer` from Rasa. Not sure this makes sense.
I can see how it’s a little confusing. Let me try to explain:
When you use `LanguageModelFeaturizer` (or the deprecated `HFTransformersNLP`, or the deprecated `LanguageModelTokenizer`), you’re using code that we’ve written and tested against `transformers >=2.4,<2.12`. That’s why I recommend you don’t use a different version. (As an aside, we may update this dependency, but it’s still only on our TODO list.)
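If it helps, a pin like this (illustrative, e.g. in your `requirements.txt`) keeps you inside the tested range:

```
# keep transformers inside the range Rasa's featurizers were tested against
transformers>=2.4,<2.12
```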
Why I said this:

> figure out the correct model rather than using the `AutoTokenizer`

Your original problem arose because you were trying to use `AutoTokenizer` features that come after the rasa-compatible `>=2.4,<2.12` range. That’s why updating solved the problem.
I’m suggesting you can get around this by not using `AutoTokenizer`, and instead figuring out which models (and their concrete tokenizer classes) to use. Then you can downgrade back to a rasa-compatible version of `transformers`, but still keep your custom code in your actions.
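To make that concrete, here's a minimal sketch (the helper name and mapping are my own illustration, not from Rasa's code): instead of letting `AutoTokenizer` resolve the class at runtime, keep an explicit mapping from model name to the concrete tokenizer class, then import that class directly from `transformers` in your custom action.

```python
# Illustrative mapping from model name to the concrete tokenizer class
# name to import from transformers; these pairings are standard in the
# 2.x releases that Rasa supports.
TOKENIZER_CLASS_FOR_MODEL = {
    "bert-base-uncased": "BertTokenizer",
    "distilbert-base-uncased": "DistilBertTokenizer",
    "gpt2": "GPT2Tokenizer",
}

def tokenizer_class_for(model_name: str) -> str:
    """Return the concrete tokenizer class name for a known model."""
    try:
        return TOKENIZER_CLASS_FOR_MODEL[model_name]
    except KeyError:
        raise ValueError(f"No tokenizer mapping for {model_name!r}")

print(tokenizer_class_for("gpt2"))  # GPT2Tokenizer
```

In your action code you would then do e.g. `from transformers import GPT2Tokenizer` and call `GPT2Tokenizer.from_pretrained("gpt2")`, which works on the rasa-compatible version range, rather than going through `AutoTokenizer`.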
Does that clear things up?