I am facing an issue. I have implemented a Fallback Action with certain out of scope dialogues in the intent named out_of_scope and set the core_fallback_threshold to 0.4 using the RulePolicy.
Here is a snapshot of the config.yml file:
```yaml
# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: en

pipeline:
# # No configuration for the NLU pipeline was provided. The following default pipeline was used to train your model.
# # If you'd like to customize it, uncomment and adjust the pipeline.
# # See https://rasa.com/docs/rasa/tuning-your-model for more information.
#   - name: WhitespaceTokenizer
#   - name: RegexFeaturizer
#   - name: LexicalSyntacticFeaturizer
#   - name: CountVectorsFeaturizer
#   - name: CountVectorsFeaturizer
#     analyzer: char_wb
#     min_ngram: 1
#     max_ngram: 4
#   - name: DIETClassifier
#     epochs: 100
#     constrain_similarities: true
#   - name: EntitySynonymMapper
#   - name: ResponseSelector
#     epochs: 100
#     constrain_similarities: true
#   - name: FallbackClassifier
#     threshold: 0.3
#     ambiguity_threshold: 0.1

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
# # No configuration for policies was provided. The following default policies were used to train your model.
# # If you'd like to customize them, uncomment and adjust the policies.
# # See https://rasa.com/docs/rasa/policies for more information.
  - name: MemoizationPolicy
  - name: RulePolicy
    core_fallback_threshold: 0.4
    core_fallback_action_name: "action_dia_irrelevant"
    enable_fallback_prediction: True
#   - name: UnexpecTEDIntentPolicy
#     max_history: 5
#     epochs: 100
#   - name: TEDPolicy
#     max_history: 5
#     epochs: 100
#     constrain_similarities: true
```
The issue is that for any query which does not match even the out_of_scope intent, the bot's reply is very ambiguous: it detects a seemingly random intent and replies accordingly.
I would like to know if anything is missing in the policy, or if there is any other way I could handle this issue. Thanks in advance.
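For reference, the RulePolicy settings above only cover the *core* fallback (low action-prediction confidence). Low *NLU* confidence is handled separately by the FallbackClassifier, which is commented out in this config; it predicts a special nlu_fallback intent that a rule can route to an action. A minimal sketch, reusing the action_dia_irrelevant name from the config above (the threshold value is illustrative):

```yaml
# config.yml -- enable the NLU fallback in addition to the core fallback
pipeline:
  # ... tokenizer / featurizer / DIETClassifier components as in the default pipeline ...
  - name: FallbackClassifier
    threshold: 0.4            # predict nlu_fallback when no intent clears this confidence
    ambiguity_threshold: 0.1

# rules.yml -- route low-confidence messages to the fallback action
rules:
  - rule: Handle low-confidence NLU predictions
    steps:
      - intent: nlu_fallback
      - action: action_dia_irrelevant
```

Without the FallbackClassifier in the pipeline, the classifier is forced to pick *some* intent for every message, which matches the random-intent behaviour described above.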
Hello @ChrisRahme, thanks for the quick response. I tried your suggestion and it did have a positive effect. However, my actual problem is this: a user utterance which does not fit any of the intents we have defined is still predicted as some random intent, with confidence greater than the provided threshold. How can I prevent this from happening?
Example:
```yaml
- intent: greet
  examples: |
    - want an icecream
    - buy a car
    - order a cab
    - president of US?
    - can you get my bank statements for me?
```
When the user utterance is "see a medical psychologist", the detected intent is goodbye with a confidence of 1.0.
Now I understand that the data provided is very sparse and short, but there are infinitely many sentences that would fall under out_of_scope.
Is there a better way to implement out-of-scope handling without adding a large amount of data to that single intent?
That just means you would have to provide better training data. Be careful, though: better does not necessarily mean more. In your case, five examples is too little, and all of them are unrelated to each other anyway!
```yaml
- intent: want_to_buy
  examples: |
    - want an [icecream](food)  # maybe even consider separating the intents for each category
    - buy a [car](vehicle)
    - where can i get [pizza](food)?
    - i need a new [phone](electronics)
    - any idea where i can get a good [fridge](appliances)?

- intent: order_taxi
  examples: |
    - order a cab
    - i need a taxi
    - please order a cab
    - any taxis nearby?
    - i need transport to [Beirut](city)

- intent: ask_president
  examples: |
    - president of [US](country)?  # labelling entities is crucial in this intent
    - whos the president of [Lebanon](country)?
    - tell me who the president of [Switzerland](country) is
    - what about [UK](country) president
    - any idea whats the name of the president of [France](country)?

- intent: get_bank_statements
  examples: |
    - can you get my bank statements for me?
    - bank statement please
    - give me my statements
    - need acc statment  # introducing typos and abbreviations can be good
    - i said i need by bank statements
```
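With better-separated intents like these, the truly out-of-scope utterances no longer need an ever-growing out_of_scope intent: a small explicit out_of_scope intent can cover the common cases, and the FallbackClassifier's nlu_fallback intent catches the rest. A minimal rules.yml sketch, assuming a response named utter_out_of_scope is defined in the domain (that name is an assumption, not from the posts above):

```yaml
rules:
  - rule: Reply to explicitly out-of-scope requests
    steps:
      - intent: out_of_scope
      - action: utter_out_of_scope

  - rule: Catch anything the NLU is unsure about
    steps:
      - intent: nlu_fallback    # predicted by FallbackClassifier when confidence is below its threshold
      - action: utter_out_of_scope
```

This way the out_of_scope intent only needs examples of questions users actually ask, while everything else falls back by confidence rather than by enumeration.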