Adding a Fallback policy breaks unhappy-path stories inside a form

I am training a chatbot and followed the tutorial in the guide. I have a form in the chatbot, and I already have stories to handle unhappy paths, such as when the user asks a retrieval-action question (FAQ) in the middle of the form or asks why a certain slot is required. Without a fallback policy (either FallbackPolicy or TwoStageFallbackPolicy), everything works just fine. Once I add a fallback policy, my stories break: I get “Failed to extract…” from the form and the fallback action is fired directly.

Here’s my config file in case it’s needed (I reduced the number of epochs just to make training faster):

language: en
pipeline:
  - name: SpacyNLP
  - name: ConveRTTokenizer
  - name: ConveRTFeaturizer
  - name: RegexFeaturizer
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: "char_wb"
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier
    epochs: 10
  - name: EntitySynonymMapper
  - name: ResponseSelector
    epochs: 10
  - name: "SpacyEntityExtractor"
  # dimensions to extract
    dimensions: ["PERSON"]

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
  - name: MemoizationPolicy
  - name: TEDPolicy
    max_history: 10
    epochs: 10
  - name: FormPolicy
  - name: MappingPolicy
  - name: TwoStageFallbackPolicy
    nlu_threshold: 0.8
    core_threshold: 0.3

Hello @HusamSadawi,

In my experience, when “Failed to extract…” appears, the bot won’t necessarily fall back; it only means the form could not fill the slot (no entity was recognized) and will ask for it again.

Your bot might execute the fallback for one of these two reasons:

  1. The core prediction’s confidence is lower than the threshold -> the bot is not sure which action to execute next for your message.
  2. The NLU prediction’s confidence is lower than the threshold -> the bot doesn’t understand your message and is not sure which intent it is (see the config sketch below for which threshold covers which reason).
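
For reference, the two thresholds of the TwoStageFallbackPolicy in your config map to exactly these two reasons. A minimal sketch (the values are only illustrative, not a recommendation):

policies:
  - name: TwoStageFallbackPolicy
    nlu_threshold: 0.8   # reason 2: minimum intent confidence from the NLU pipeline
    core_threshold: 0.3  # reason 1: minimum action confidence from the core policies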

Now, when you disable the fallback, it SEEMS to work normally because you basically tell the bot to respond no matter how well it understands the user. Think of it like this:

  1. Without fallback:

    • User said something -> Intent A
    • The bot is not sure about it (only 20% sure that the user meant A)
    • The bot doesn’t care; even if it is only 1% sure about the user’s intent, it will just respond accordingly.
  2. With fallback:

    • User said something -> Intent A
    • The bot is not sure about it (only 20% sure that the user meant A)
    • Now it thinks “I don’t really understand what the user meant, better inform them about that”
    • The bot executes the fallback

So I suggest you run the bot in debug mode (for example with rasa shell --debug), without the Fallback Policy, and watch the log to see whether your core and NLU confidences are actually solid. Your NLU threshold (0.8) is not low at all, so the bot might fall back even if it’s 70% sure about the user’s intent.

Hi @fuih, thanks for your quick answer! I really appreciate it…

I have checked the chatbot, and it actually seems to be classifying intents rather correctly. So far I’ve only experimented with the NLU threshold, so it might be, as you pointed out, that the NLU is working fine but the bot isn’t sure what it should do with that intent.

Do you think increasing the epochs for the TED Policy would help my chatbot be more confident in its predictions?

Yes, I think you should try increasing the epochs for the TED Policy. If I’m not mistaken, the TED Policy is responsible for predicting which action to execute for a given intent (the core prediction). So even if your NLU predictions are solid, the bot can still fall back if the core confidences are not high enough.

If I remember correctly, the default number of epochs is 100, so 10 epochs may be too low depending on your data. Maybe try 50 and then 100 to see if it improves.
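
For example, the entry in your policies section could look like this (just a sketch; tune the numbers against your own data):

policies:
  - name: TEDPolicy
    max_history: 10
    epochs: 100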

Actually, you can check whether the problem is caused by core predictions by setting the core threshold to 0 while keeping your NLU threshold at 0.8. If your bot works correctly again, then you indeed need to improve the core part (the TED Policy).
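
For that test, the fallback entry would look roughly like this (only core_threshold changes; everything else stays as in your config):

policies:
  - name: TwoStageFallbackPolicy
    nlu_threshold: 0.8
    core_threshold: 0.0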