My question is not about entity extraction, which works just fine. The issue is the effect of entities on intent classification, and the model's inability to understand the semantic meaning of a sentence.
In "I need access to ABCD", ABCD is the entity, and it is extracted correctly:
- I need access to ABCD -> intent: get_access
- I do not need access to ABCD -> intent: remove_access
These are completely different intents: the first asks to gain access, the second asks to remove it. Yet the classifier always predicts get_access with very high confidence, even after adding training data for the remove_access cases. It looks like a semantic issue: the model is unable to distinguish sentences based on their semantic differences.
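A rough way to see why a surface-level classifier conflates the two sentences: they share almost every token, and the negation is only a couple of words. A minimal sketch in plain Python, just to illustrate the overlap:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two sentences (0..1)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

sim = jaccard("I need access to ABCD", "I do not need access to ABCD")
print(round(sim, 3))  # 0.714 -- high overlap despite opposite intents
```

Five of the seven distinct tokens are shared, so anything close to a bag-of-words view sees these as near-duplicates.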
Let's take another example:
- What is football? Ans: Football is a sport. (intent: meaning)
- Who is Beckham? Ans: David Beckham is a football legend. (intent: who)
Now if my client asks a question like:
- What is Beckham? -> classified as intent: who; Ans: David Beckham is a football legend
- Who is football? -> classified as intent: meaning; Ans: Football is a sport
These questions are logically wrong, but they still return answers because the weights of entity words like "Beckham" and "football" bias the classifier toward mapping the sentences to the wrong intents.
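Since extraction already works, one direction that could reduce this bias is to mask the entity value with a type placeholder before intent classification, so the classifier only sees the sentence frame. A minimal sketch, assuming the extractor returns (value, type) pairs (a hypothetical output shape):

```python
def mask_entities(text: str, entities: list) -> str:
    """Replace each extracted entity value with a type placeholder
    so entity words cannot pull the sentence toward an intent."""
    for value, ent_type in entities:
        text = text.replace(value, f"__{ent_type}__")
    return text

print(mask_entities("who is Beckham?", [("Beckham", "person")]))
# who is __person__?
print(mask_entities("what is Beckham?", [("Beckham", "person")]))
# what is __person__?  -- the questions now differ only in the question word
```

With the entity masked, the classifier has to rely on the question word and sentence structure rather than the entity token itself.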
Is there any possible way to tackle such issues?
I have tried Google's Universal Sentence Encoder for semantic similarity, but still no luck.
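For what it's worth, even a crude keyword rule layered on top of the classifier output would patch the access example, though it clearly does not solve the underlying semantic problem. A sketch, using the intent names from my examples (the cue list is a guess):

```python
NEGATION_CUES = {"not", "no", "don't", "never"}
FLIP = {"get_access": "remove_access"}  # affirmative -> negated intent

def postprocess(text: str, predicted: str) -> str:
    """Flip the predicted intent when a negation cue is present."""
    if predicted in FLIP and set(text.lower().split()) & NEGATION_CUES:
        return FLIP[predicted]
    return predicted

print(postprocess("i do not need access to ABCD", "get_access"))  # remove_access
print(postprocess("i need access to ABCD", "get_access"))         # get_access
```

I would rather find a model-level fix than maintain rules like this, which is why I am asking.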
I hope my explanation is clear enough.