I am confused by the response that I am getting. Let’s take an example:
“what is the weather in new york”. Here the response has the intent “check weather” and the entity “new york”. In this case we have the template, since we do want “new york” as an entity.
But in the case of “how about new york”, the intent is “fallback intent” and the entity is still “new york”. In this case we only want the intent; we don’t want any entity.
This matters because there are cases where we don’t have the common_example template, yet the model still extracts an extra entity from the query.
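To make the problem concrete, here is what the two parse results might look like. The shapes below are assumed from Rasa NLU’s typical JSON output, and the values simply mirror the examples above, not real model output:

```python
# Illustrative parse results (structure assumed, values from the examples above).
weather_query = {
    "text": "what is the weather in new york",
    "intent": {"name": "check weather"},
    "entities": [{"entity": "city", "value": "new york"}],  # wanted
}
followup_query = {
    "text": "how about new york",
    "intent": {"name": "fallback intent"},
    "entities": [{"entity": "city", "value": "new york"}],  # unwanted here
}
```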
Short answer: Rasa NLU is flexible here; you can make intent classification and entity extraction independent of each other as the need arises.
Long answer: For a very minimal language task, one can choose not to introduce entities at all and limit the model to intent classification only. On the other hand, entity extraction (or NER, in the context of conversational AI) is a relatively hard problem (especially with a small labelled dataset or a large unlabelled one) and is not completely solved.
You can dig deeper into the NLU Pipeline docs to understand how intent and entity classification are not hard-wired together and can easily be decoupled.
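As a sketch of that decoupling, an intent-only pipeline simply omits the entity extractor. The component names below follow the old-style Rasa NLU pipeline naming (the same era as ner_crf mentioned later in this thread); check the pipeline docs for the names matching your version:

```yaml
# Intent-only pipeline sketch: no ner_crf, so no entities are ever returned.
language: en
pipeline:
  - name: "tokenizer_whitespace"
  - name: "intent_featurizer_count_vectors"
  - name: "intent_classifier_tensorflow_embedding"
  # - name: "ner_crf"   # add this back only if you want entity extraction
```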
An answer specific to your example: think of a scenario where “how about New York” is asked right after the user has asked “what’s the weather like in SF”. The assumption is that the bot was able to give a meaningful answer to the first query and the user is asking a follow-up question. In such a case, knowing the state of the conversation and the preceding question would make sense.
More importantly, it would be helpful to clarify whether your objective is conversational AI or just NLU. IMO the answer to your question would differ between academic research and solving a real-world problem, but digging deeper into intent classification would help for sure.
I have a similar question that builds on Kavya’s. Suppose the next example is “How do I get to New York?”
In the first two examples, the training data was tagged with the entity name “city”. In my example, it was tagged “destination”.
I have a script that takes action on the returned intent and processes the entities. If I get back the get_directions intent, I’ll look in the JSON for the destination entity, but it contains city instead. Using ner_crf, is there a way to de-emphasize entities within an intent, so that the classifier is biased toward the entities that intent was trained with? That is, the classifier would be weighted to giving a destination for get_directions, and a city for get_weather?
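One workaround, since ner_crf itself does not condition its predictions on the predicted intent, is to post-filter the parse result in your script. This is a minimal sketch under assumptions: the intent-to-entity mapping (EXPECTED_ENTITIES) and the parse-result shape are hypothetical and should be adapted to your own data:

```python
# Hypothetical post-processing: keep only the entity types that the
# predicted intent was trained with, instead of trying to bias ner_crf.
EXPECTED_ENTITIES = {  # assumption: your own intent -> entity-name map
    "get_weather": {"city"},
    "get_directions": {"destination"},
}

def filter_entities(parse_result):
    """Drop entities whose type was not trained for the predicted intent."""
    intent = parse_result["intent"]["name"]
    allowed = EXPECTED_ENTITIES.get(intent, set())
    parse_result["entities"] = [
        e for e in parse_result["entities"] if e["entity"] in allowed
    ]
    return parse_result

result = {
    "intent": {"name": "get_directions", "confidence": 0.91},
    "entities": [{"entity": "city", "value": "New York"}],
}
filtered = filter_entities(result)
# "city" is dropped because get_directions only expects "destination"
```

This does not make the classifier itself intent-aware, but it keeps stray entities from reaching the action logic downstream.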