Technical questions on Rasa's Sara demo bot

  1. How does the “feedback” intent work? It’s in the domain.yml file but there’s no NLU training data for it. Does it need to be in there for “utter_ask_feedback” to work?

  2. I can’t figure out a user utterance that will trigger action_default_fallback. Everything I put in results in either “out of scope” or “not sure” or “enter data” being returned.

  3. Also, many random user utterances, e.g. “The color is blue” trigger the enter_data intent. I think they should trigger “out_of_scope”. Why does it think enter_data?

  4. Why does ActionStoreEntityExtractor need to be there? Doesn’t the NLU pipeline take care of figuring out which extractor to use?

Thanks! John

Here are a few answers to your questions.

  1. The utter_feedback is a response action defined in the domain.yml file. It can be used as a response even if the preceding intent was not feedback. For example, it can be part of a flow where somebody signs up for a newsletter, like here. The name of an intent does not need to correspond to an utter_<name> action. You can use them like Lego bricks to construct stories describing how conversations have gone in the past.
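     For illustration, a hypothetical story fragment (made up for this reply, not copied from Sara's actual training data, and the intent name is assumed) where a feedback response follows a different intent could look like:

     ```yaml
     # stories.yml (hypothetical example, not from the Sara bot itself)
     stories:
     - story: newsletter signup with feedback prompt
       steps:
       - intent: signup_newsletter   # assumed intent name
       - action: utter_feedback      # response name need not match the intent
     ```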
  2. The action_default_fallback is triggered by a policy in this case; check the config.yml file. The idea is that if the model doubts between two intents, the fallback policy kicks in. I’m not 100% sure that’s what’s happening here because I’m not an active developer on that project, but in general these fallback policies can trigger an action as well.
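     As a sketch of what such a policy entry can look like in config.yml (a generic example with made-up threshold values, not verified against Sara's actual configuration), Rasa's FallbackPolicy takes thresholds like these:

     ```yaml
     # config.yml (generic sketch, not Sara's actual settings)
     policies:
     - name: FallbackPolicy
       nlu_threshold: 0.3          # fall back if the top intent is below this confidence
       ambiguity_threshold: 0.1    # fall back if the top two intents are this close
       fallback_action_name: action_default_fallback
     ```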
  3. Detecting out_of_scope is incredibly hard, even with advanced ML techniques. What is probably happening here is that nothing like “the color is blue” has ever been uttered, so the model has trouble finding the right intent for it. If you’re interested in learning what kind of confusion is happening, you can either use the rasa shell nlu command to see the estimated confidence levels or the live-nlu feature from rasalit.
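     If it helps, here is a plain-Python sketch of the kind of ambiguity you’d look for in the intent ranking that rasa shell nlu prints (the ranking and margin below are made up for illustration, not Sara’s real output):

     ```python
     # Hypothetical intent ranking, shaped like the "intent_ranking" field
     # in an NLU parse result. The numbers are invented for illustration.
     ranking = [
         {"name": "enter_data", "confidence": 0.41},
         {"name": "out_of_scope", "confidence": 0.38},
         {"name": "greet", "confidence": 0.05},
     ]

     def is_ambiguous(ranking, margin=0.1):
         """True when the top two intents are closer than `margin`."""
         if len(ranking) < 2:
             return False
         return ranking[0]["confidence"] - ranking[1]["confidence"] < margin

     print(is_ambiguous(ranking))  # → True: enter_data barely beats out_of_scope
     ```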
  4. Again, I’m not 100% sure because I’m not a core dev on the project, but ActionStoreEntityExtractor seems to set a slot based on what the user is interested in detecting. If the user is interested in detecting “places”, for example, they can use pre-existing spaCy tooling; if they want to detect “distance”, Duckling is better suited. This action handles that logic: it grabs an entity of type “entity” from the conversation and sets a slot indicating which backend the user can use to detect it. In this sense the action does not detect the entity itself (that is still handled by DIET); it just sets a slot based on an entity.
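     Without pulling in rasa_sdk, the slot-setting idea can be sketched in plain Python like this (the entity-to-extractor mapping and slot name are my guess at the intent of the action, not its actual code):

     ```python
     # Plain-Python sketch of the idea behind ActionStoreEntityExtractor:
     # map the entity type the user asked about to the backend that can
     # extract it, then "store" that choice in a slot. The mapping below
     # is illustrative, not copied from the real action.
     EXTRACTOR_FOR_ENTITY = {
         "place": "SpacyEntityExtractor",      # spaCy tooling handles places
         "distance": "DucklingHTTPExtractor",  # Duckling handles measures
     }

     def store_entity_extractor(entity_type, slots):
         """Set a (hypothetical) 'entity_extractor' slot; detection stays with DIET."""
         slots["entity_extractor"] = EXTRACTOR_FOR_ENTITY.get(entity_type)
         return slots

     slots = store_entity_extractor("distance", {})
     print(slots["entity_extractor"])  # → DucklingHTTPExtractor
     ```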

Feel free to zoom in on a few of my responses. I’m aware that I’m using a fair bit of Rasa jargon in my reply, so if there are details that are still unclear, feel free to ask away.