How should I prevent the bot from asking questions to confirm a particular intent?

Hi, I’m trying to understand how to prevent a bot from asking questions like: "Did you mean 'intent_abcd'?" I notice that this only happens sometimes; other times the bot just answers the question directly.

This is Two-Stage Fallback, which is not enabled by default.

Read more here.
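For context, Two-Stage Fallback in recent Rasa versions is activated by a rule that triggers on the `nlu_fallback` intent. A minimal sketch of what that typically looks like (the rule name is just an example):

```yaml
# rules.yml — activates Two-Stage Fallback when NLU confidence
# falls below the FallbackClassifier threshold
rules:
  - rule: Ask the user to rephrase whenever they send a message with low NLU confidence
    steps:
      - intent: nlu_fallback
      - action: action_two_stage_fallback
      - active_loop: action_two_stage_fallback
```

If a rule like this exists in your training data, the bot will ask "Did you mean …?" questions; without it, the `FallbackClassifier` alone won’t prompt for confirmation.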

Yep, I already have this set up, @ChrisRahme. Sharing my config.yml file here:

# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: en

pipeline:
# # No configuration for the NLU pipeline was provided. The following default pipeline was used to train your model.
# # If you'd like to customize it, uncomment and adjust the pipeline.
# # See https://rasa.com/docs/rasa/tuning-your-model for more information.
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: char_wb
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier
    epochs: 100
    constrain_similarities: true
  - name: EntitySynonymMapper
  - name: ResponseSelector
    epochs: 100
    constrain_similarities: true
  - name: FallbackClassifier
    threshold: 0.7
    ambiguity_threshold: 0.1
  # - name: "DucklingEntityExtractor"
  #   url: "http://localhost:8000"
  #   dimensions: ["time"]

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
# # No configuration for policies was provided. The following default policies were used to train your model.
# # If you'd like to customize them, uncomment and adjust the policies.
# # See https://rasa.com/docs/rasa/policies for more information.
  - name: MemoizationPolicy
  - name: RulePolicy
    # Note: RulePolicy uses these parameter names (the old FallbackPolicy
    # names nlu_threshold/core_threshold/fallback_action_name are invalid here)
    core_fallback_threshold: 0.5
    core_fallback_action_name: "action_default_fallback"
    enable_fallback_prediction: True
  - name: UnexpecTEDIntentPolicy
    max_history: 5
    epochs: 100
  - name: TEDPolicy
    max_history: 5
    epochs: 200
    constrain_similarities: true

Well, yes. You’re saying you don’t want it (“I’m trying […] to prevent a bot from […]”), so remove it.

Or did I misunderstand?

Sorry that I did not explain it correctly, @ChrisRahme. Basically, what I see is: when the user’s sentence is very close to a sentence in the training data, that intent is identified and the conversation proceeds. When the user’s sentence is not as close to the training examples, the bot prompts the user to confirm whether it has identified the right intent. I do not want the second case to happen; I’d rather it pick the intent that is close enough and proceed. Hope I made some sense here? If not, kindly let me know.

Yes, it’s clearer, thanks :slight_smile:

But anyway, this is what Two-Stage Fallback is for! It is made to ask the user about the intent. If you don’t want it, just remove it and use regular fallback.
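Concretely, that means removing the Two-Stage Fallback rule and instead pointing the `nlu_fallback` intent at a plain fallback response. A sketch, assuming a response named `utter_please_rephrase` (you’d define it yourself in your domain):

```yaml
# rules.yml — regular fallback: no confirmation question,
# the bot simply replies with a fallback message
rules:
  - rule: Fallback on low NLU confidence
    steps:
      - intent: nlu_fallback
      - action: utter_please_rephrase
```

And if you want the bot to more often just pick the closest intent and proceed, you can also lower the `FallbackClassifier` `threshold` in your pipeline (e.g. from 0.7 to something smaller), so fewer messages get classified as `nlu_fallback` in the first place.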

Oooh, right. Got it, thanks! My bad.
