Catch and respond to a user utterance that lies in none of the mentioned intents

Hello all.

I am facing an issue. I have implemented a fallback action, put some out-of-scope utterances in an intent named out_of_scope, and set core_fallback_threshold to 0.4 on the RulePolicy.

Here is a snippet of the config.yml file:

# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: en

pipeline:
# # No configuration for the NLU pipeline was provided. The following default pipeline was used to train your model.
# # If you'd like to customize it, uncomment and adjust the pipeline.
# # See https://rasa.com/docs/rasa/tuning-your-model for more information.
#   - name: WhitespaceTokenizer
#   - name: RegexFeaturizer
#   - name: LexicalSyntacticFeaturizer
#   - name: CountVectorsFeaturizer
#   - name: CountVectorsFeaturizer
#     analyzer: char_wb
#     min_ngram: 1
#     max_ngram: 4
#   - name: DIETClassifier
#     epochs: 100
#     constrain_similarities: true
#   - name: EntitySynonymMapper
#   - name: ResponseSelector
#     epochs: 100
#     constrain_similarities: true
#   - name: FallbackClassifier
#     threshold: 0.3
#     ambiguity_threshold: 0.1

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
# # No configuration for policies was provided. The following default policies were used to train your model.
# # If you'd like to customize them, uncomment and adjust the policies.
# # See https://rasa.com/docs/rasa/policies for more information.
  - name: MemoizationPolicy
  - name: RulePolicy
    core_fallback_threshold: 0.4
    core_fallback_action_name: "action_dia_irrelevant"
    enable_fallback_prediction: True
  # - name: UnexpecTEDIntentPolicy
  #   max_history: 5
  #   epochs: 100
  # - name: TEDPolicy
  #   max_history: 5
  #   epochs: 100
  #   constrain_similarities: true

The issue is that, for any query which does not match even the out_of_scope intent, the bot's reply is very ambiguous: it detects a seemingly random intent and replies for that one.

I would like to know if anything is missing in the policies, or if there is any other way I could handle this issue. Thanks in advance.

Uncomment your whole pipeline. What you actually want is the FallbackClassifier (but you cannot uncomment only that part).

Please read more about Fallbacks in the docs :slight_smile:
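
As a rough sketch (assuming Rasa 2.x/3.x): once the pipeline is uncommented, the FallbackClassifier triggers the nlu_fallback intent whenever intent confidence falls below its threshold, and a rule can respond to it. The utter_please_rephrase response name below is a placeholder you would define in your domain:

pipeline:
  # ... the rest of the default pipeline, uncommented as-is ...
  - name: FallbackClassifier
    threshold: 0.3
    ambiguity_threshold: 0.1

rules:
  - rule: Ask the user to rephrase when NLU confidence is low
    steps:
      - intent: nlu_fallback
      - action: utter_please_rephrase  # placeholder response, defined in domain.yml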

Hello @ChrisRahme, thanks for the quick response. I tried that and it did have a positive effect. However, the case I actually want to handle is a user utterance that does not lie in any of the intents we have defined but is still predicted as some random intent with confidence greater than the provided threshold. How can we make sure this does not happen?

Example:

- intent: greet
  examples: |
    - want an icecream
    - buy a car
    - order a cab
    - president of US?
    - can you get my bank statements for me?

and the user utterance is "see a medical psychologist", the intent detected is goodbye with a confidence of 1.0.

Now, I understand that the data provided is very sparse and short, but there are infinitely many sentences that would fall under out_of_scope.

Is there a better way to implement out-of-scope handling without adding a large amount of data to that one intent?

That just means you have to provide better training data. Be careful: better does not necessarily mean more. But in your case, 5 examples per intent is too few, and the examples under greet are all unrelated to greeting anyway!

- intent: want_to_buy
  examples: |
    - want an [icecream](food) // maybe even consider separating the intents for each category
    - buy a [car](vehicle)
    - where can i get [pizza](food)?
    - i need a new [phone](electronics)
    - any idea where i can get a good [fridge](appliances)?

- intent: order_taxi
  examples: |
    - order a cab
    - i need a taxi
    - please order a cab
    - any taxis nearby?
    - i need transport to [Beirut](city)

- intent: ask_president
  examples: |
    - president of [US](country)? // labelling entities is crucial in this intent
    - whos the president of [Lebanon](country)?
    - tell me who the president of [Switzerland](country) is
    - what about [UK](country) president
    - any idea whats the name of the president of [France](country)?

- intent: get_bank_statements
  examples: |
    - can you get my bank statements for me?
    - bank statement please
    - give me my statements
    - need acc statment // introducing typos and abbreviations can be good
    - i said i need by bank statements

Please read more in the docs.


A way to visualize how your bot is doing is to use TensorBoard with Rasa.
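
For example (a minimal sketch, assuming Rasa 2.x/3.x), adding tensorboard_log_directory to the DIETClassifier is enough to write training metrics that TensorBoard can read; the directory path here is arbitrary:

pipeline:
  - name: DIETClassifier
    epochs: 100
    constrain_similarities: true
    tensorboard_log_directory: ./tensorboard  # training curves get logged here

You can then point TensorBoard at that folder with tensorboard --logdir ./tensorboard.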

Also read about Evaluating an NLU Model with rasa test, which will output graphs like a confusion matrix and histogram.
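
For instance, assuming the default project layout with the NLU data in data/nlu.yml:

# evaluate the trained NLU model; reports, confusion matrix and histogram are written to results/
rasa test nlu --nlu data/nlu.yml

# or run a cross-validation over the whole dataset instead
rasa test nlu --nlu data/nlu.yml --cross-validation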

Building a good pipeline is also really important.


Here is a list of useful Rasa courses (approximately from least to most advanced):

  1. Rasa Masterclass (outdated)
  2. Conversational AI with Rasa
  3. Rasa for Beginners
  4. Certification Workshop (paid)
  5. Advanced Custom Actions, Forms, & Responses Workshop (paid)
  6. Understanding Rasa Deployments
  7. Advanced Deployment Workshop (paid)

Thank you again @ChrisRahme. I will re-format the training data as per the guidelines and your example.


It’s not enough just to do that, okay? These are just small examples.

You also need to implement stories and rules, etc.
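
For example (with placeholder response and action names, which you would define in your domain), a couple of rules mapping the intents above to responses could look like this:

rules:
  - rule: Order a taxi
    steps:
      - intent: order_taxi
      - action: utter_confirm_taxi  # placeholder response

  - rule: Send bank statements
    steps:
      - intent: get_bank_statements
      - action: action_send_statements  # placeholder custom action that fetches the statements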

Please watch the courses I’ve sent you - at least #2. And read the whole docs.

Sure @ChrisRahme. Thanks for the generous reply.
