Intent classification failing when entity extraction is performed

I’m pretty new to Rasa and was exploring its capabilities when I ran into some strange behavior. I had started by creating an NLU model capable of intent classification and entity extraction for my native language (Portuguese), which worked fine when served over HTTP.

Example:

Request: { "q": "Procuro um restaurante italiano em 12345" }

Response:

{
  "intent": {
    "name": "restaurant_search",
    "confidence": 0.7113155505293147
  },
  "entities": [
    {
      "entity": "cuisine",
      "value": "italiano",
      "start": 23,
      "end": 31,
      "confidence": null,
      "extractor": "ner_mitie"
    },
    {
      "entity": "location",
      "value": "12345",
      "start": 35,
      "end": 40,
      "confidence": null,
      "extractor": "ner_mitie"
    }
  ],
  "intent_ranking": [
    { "name": "restaurant_search", "confidence": 0.7113155505293147 },
    { "name": "greet", "confidence": 0.15049657601423475 },
    { "name": "affirm", "confidence": 0.0821662299874645 },
    { "name": "goodbye", "confidence": 0.0560216434689861 }
  ],
  "text": "Procuro um restaurante italiano em 12345",
  "project": "default",
  "model": "model_20181210-181406"
}

However, when I used the model through a Rasa Core bot, something unexpected happened. Everything works fine as long as no entities are present in a message, but whenever an entity is present, no intent is detected. For example, "Procuro restaurante" ("Looking for a restaurant") works, while "Procuro restaurante mexicano" ("Looking for a Mexican restaurant") doesn't.

Relevant information:

Rasa Core version : 0.12.3

Python version : 3.6

Operating system (windows, osx, …): ubuntu 18.04

Content of NLU config file (if used & relevant):

language: "pt"

pipeline:
- name: "nlp_mitie"
  model: "data/total_word_feature_extractor_pt.dat"
- name: "tokenizer_mitie"
- name: "ner_mitie"
- name: "ner_synonyms"
- name: "intent_entity_featurizer_regex"
- name: "intent_featurizer_mitie"
- name: "intent_classifier_sklearn"

Edit: This is the training data used: demo-rasa-pt.json (5.1 KB)

Any help regarding this issue would be greatly appreciated.

Hi @DNCoelho

Are you sure no intent was detected? Were you running rasa_core in debug mode?

If entities are present, they will affect the policy’s prediction. So if you don’t have stories that include those entities, the bot might fall back.
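
For example, a story covering the entity case could look something like this (intent and entity names taken from your NLU example; the action name is just a placeholder):

## search with cuisine (action name is a placeholder)
* restaurant_search{"cuisine": "mexicano"}
  - action_search_restaurants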

Yep, that turned out to be what was happening. I was able to work around the behavior by telling the bot not to use entities for the intent I was testing with. The message processed by NLU is then passed to a custom action, where I can access the entities. However, I think there should be a more proper solution that I just haven’t found yet. I guess the more important question is whether it is possible to define a single story that branches based on an entity, or whether multiple stories would be necessary, since the main cause of the problem was exactly as you suspected.
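
For reference, the workaround in my domain looks roughly like this (I used the use_entities flag; the exact syntax may depend on the Rasa Core version):

intents:
  - restaurant_search: {use_entities: false}  # entities from NLU are ignored when predicting the next action
  - greet
  - affirm
  - goodbye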

Good to hear it worked!

You could write a checkpoint before the user utterance and then have two branches - one which begins with an intent/entity pair and another with no entity.
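
For example (story titles and action names here are just placeholders):

## up to the search
* greet
  - utter_greet
> check_search

## search with cuisine (placeholder action)
> check_search
* restaurant_search{"cuisine": "italiano"}
  - action_search_restaurants

## search without cuisine (placeholder action)
> check_search
* restaurant_search
  - utter_ask_cuisine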

Or if you’re storing this entity as a slot, having the slot{} step in your story will affect the next action predicted. More info here
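
For example, if cuisine is defined as a featurized slot in the domain:

slots:
  cuisine:
    type: text

then the story would include the slot{} step right after the intent (action name is again a placeholder):

## search with cuisine slot (placeholder action)
* restaurant_search{"cuisine": "italiano"}
  - slot{"cuisine": "italiano"}
  - action_search_restaurants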

Thank you very much for the help! :smile:

I will check both options and see which fits my use case better.