Two-Stage Fallback does not work when entities are extracted

Hi everyone,

at the moment I am trying to implement the TwoStageFallback for a chatbot. To test it, I set the threshold quite high so that I can see how it works, and usually it does: if I write a greeting, the bot asks me whether I wanted to greet it, and so on. But if I write a phrase from which entities are extracted, the fallback classifier does not ask me to affirm the intent. The bot just utters the response “utter_fallback” defined in the domain. Even after reading the source code, I do not understand why the step asking to affirm the intent is skipped.

```yaml
language: de

pipeline:
- name: my_tokenizer_R2.WhitespaceTokenizer
- name: RegexFeaturizer
- name: LexicalSyntacticFeaturizer
- name: CountVectorsFeaturizer
  analyzer: char_wb
  min_ngram: 1
  max_ngram: 4
- name: DIETClassifier
  epochs: 60
  constrain_similarities: True
  model_confidence: linear_norm
- name: my_synonyms.MySynonymMapper
- name: FallbackClassifier
  threshold: 0.9  # just for testing the fallback mechanism
  ambiguity_threshold: 0.2

policies:
- name: MemoizationPolicy
- name: TEDPolicy
  max_history: 24
  epochs: 20
  constrain_similarities: True
  model_confidence: linear_norm
- name: RulePolicy
  core_fallback_threshold: 0.1
  core_fallback_action_name: action_default_fallback
  enable_fallback_prediction: True
  restrict_rules: True
  check_for_contradictions: True
```

I also defined a rule for the Two-Stage Fallback. I did not create an out-of-scope intent, but as far as I understand Rasa, that should not affect the bot’s behavior in this case.
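For reference, my rule follows roughly the standard pattern from the Rasa docs (the rule name itself is arbitrary):

```yaml
rules:
- rule: Activate the Two-Stage Fallback
  steps:
  # triggered whenever the FallbackClassifier predicts nlu_fallback
  - intent: nlu_fallback
  - action: action_two_stage_fallback
```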

It might also be important to mention that I use a custom version of ActionDefaultAskAffirmation, but I only translated the displayed text into German. So far it works in simple cases like greetings, as explained above.

I hope someone can help.

Thank you!

An update:

I tested quite a few things and in the end removed all fallback behavior from the domain and the pipeline. I also removed my own version of the EntitySynonymMapper, because Rasa did not fully pick up the custom one. But even without any fallback mechanism, the chatbot sometimes just does not choose an action, and the core fallback is triggered. That seems like a real bug to me. Even if the prediction confidence is low, there should be some answer, even if it is not the intended one.
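For completeness: as far as I understand, when `action_default_fallback` runs, it sends the `utter_default` response (if one is defined in the domain) and reverts the turn, so defining it at least gives the user a reply instead of silence. The German text here is just my placeholder:

```yaml
responses:
  utter_default:
  - text: "Entschuldigung, das habe ich leider nicht verstanden."
```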

Hi @harloc and thanks for your question.

> Even if the prediction is low, there should be some answer, even if it is not the intended one.

This actually goes against our recommendations. It is important to fall back when no good answer is predicted, because otherwise you risk decreasing the trust your users have in your assistant. So it’s not really a bug but a feature :smiley:
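If you still want the assistant to say something on low confidence instead of staying silent, a common pattern is a simple rule on the `nlu_fallback` intent that asks the user to rephrase (`utter_please_rephrase` is a response name you would define yourself in the domain):

```yaml
rules:
- rule: Ask the user to rephrase on low NLU confidence
  steps:
  - intent: nlu_fallback
  - action: utter_please_rephrase
```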