I was reading through Rasa NLU in Depth: Intent Classification and it gave me a rather different picture from what I thought was happening. So my question is (and perhaps the answer differs for the various entity extraction choices): to what extent, if any, does the extraction of entities influence the ultimate intent prediction?
The example given in the blog post is one where the hypothetical user has two intents that are identical except for the entity present in each: in one case a person's name is the distinguishing feature, in the other a date. That is quite a common scenario for me as well, except that I don't have (full) control over intent creation. The recommendation, as I gather it, is to combine the two intents into a single intent and then handle the distinction in the Core component.
For some of the black-box tools (I use Dialogflow a lot), it seems that the "NLU" portion does have its intent prediction influenced by which entities were or were not picked out. We commonly see, for example, that if an expected person name is missed, we end up with a completely different intent.
So is it true that in Rasa the two processes, intent classification and entity detection, are completely separate? I had hoped to use a spaCy entity extractor alongside the EmbeddingIntentClassifier. Ultimately we would like a custom spaCy model, which I see is not supported, but would the off-the-shelf spaCy NER model ever be used to influence which intent prediction is chosen?
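For concreteness, this is the sort of pipeline I had in mind — just a sketch assuming the standard Rasa 1.x component names (SpacyEntityExtractor for the off-the-shelf spaCy NER, EmbeddingIntentClassifier for intents):

```yaml
language: en
pipeline:
  - name: SpacyNLP               # loads the spaCy language model
  - name: SpacyTokenizer
  - name: SpacyFeaturizer
  - name: SpacyEntityExtractor   # off-the-shelf spaCy NER (e.g. PERSON, DATE)
  - name: CountVectorsFeaturizer
  - name: EmbeddingIntentClassifier
```

My understanding of the question in pipeline terms: do the entities found by SpacyEntityExtractor feed into the features that EmbeddingIntentClassifier sees, or do the two components simply run side by side over the same tokens?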
I am sure I have stated my use case around these forums before, but using Core just is not possible for me at the moment, so entities have to be handled entirely within the NLU.