If there are no entities in your dataset, then the gradient signal from the entity loss should be zero, in which case the setting should have no effect.
If you do have entities then technically, yes, if only theoretically, these will influence the intent prediction. That's because of the transformer block that is in the model. It's explained in a bit more detail here.
I can't recall experimental results where this made a large negative impact. I can also argue why, technically, it is somewhat unlikely that you'll want two separate models for intents and entities. Let's look at an example conversation between a user and a digital assistant.
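A hypothetical exchange along these lines (the intent and entity names here are illustrative, echoing the ones discussed below):

```
User: Am I talking to a real person?      → intent: challenge_human
Bot:  I'm a bot, but I'm happy to help!
User: I'd like to order a laptop.         → intent: order, entity: product="laptop"
```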
Notice how we have a few intents here and also an entity. In this example, the product entity appears in an utterance with the order intent; it would be strange if the product entity were ever used in a challenge_human intent.
It is likely that something similar is happening in your chatbot: certain entities will only appear in utterances for certain intents, and that is a pattern we could learn from. You could build two separate models, one for intents and another for entities, but then this opportunity to learn from both would be lost.
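To make the co-occurrence pattern concrete, here is a minimal sketch that tallies which entity types appear with which intents in a toy labeled dataset. The intent and entity names are made up for illustration; the point is that a joint model sees this correlation during training, while two separate models never do.

```python
from collections import defaultdict

# Toy labeled utterances: (intent, entity types found in the utterance).
# These labels are illustrative, not from a real dataset.
examples = [
    ("order", ["product"]),
    ("order", ["product"]),
    ("greet", []),
    ("challenge_human", []),
    ("order", ["product"]),
]

# Count how often each entity type co-occurs with each intent.
cooccurrence = defaultdict(lambda: defaultdict(int))
for intent, entities in examples:
    for entity in entities:
        cooccurrence[entity][intent] += 1

# "product" only ever appears alongside the "order" intent -- exactly
# the kind of signal a joint model can exploit for both tasks.
print(dict(cooccurrence["product"]))  # {'order': 3}
```

A trained joint model effectively learns a soft version of this table inside its shared layers, so spotting a product entity nudges the intent prediction toward order, and vice versa.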