Rasa uses NLU even though there is a rule


I’ve encountered the following problem: I have a rule stipulating that every time a user asks for help, the bot should respond with the custom action action_provide_help:

- rule: provide help anytime the user asks for it
  steps:
  - intent: need_help
  - action: action_provide_help
  wait_for_user_input: false

When I train with rasa interactive, however, my bot sometimes makes different suggestions after the need_help intent (mainly in places where I haven’t used the need_help intent before). The confidence score for action_provide_help in these cases is only 0.05, even though I don’t have a single story where a need_help intent is not followed by action_provide_help.

My question is: Why does my bot use NLU in the first place, given that there is a rule?

I’m going to add the content of my config file, as I suspect that it is a configuration issue.

pipeline:
- name: WhitespaceTokenizer
- name: LexicalSyntacticFeaturizer
- name: CountVectorsFeaturizer
- name: CountVectorsFeaturizer
  analyzer: char_wb
  min_ngram: 1
  max_ngram: 4
- name: RegexEntityExtractor
  case_sensitive: False
  use_regexes: True
- name: DIETClassifier
  epochs: 50
  constrain_similarities: true
- name: EntitySynonymMapper
- name: ResponseSelector
  epochs: 50
  constrain_similarities: true
- name: FallbackClassifier
  threshold: 0.3
  ambiguity_threshold: 0.1

policies:
- name: AugmentedMemoizationPolicy
  max_history: 4
- name: RulePolicy
  core_fallback_threshold: 0.2
  core_fallback_action_name: "action_give_hint"
  enable_fallback_prediction: True
- name: UnexpecTEDIntentPolicy
  max_history: 5
  tolerance: 0.4
  epochs: 50
- name: TEDPolicy
  max_history: 10
  epochs: 50
  constrain_similarities: true

Thank you in advance!

Why does my bot use NLU in the first place, given that there is a rule?

To predict the intent and any entities.

Try dropping the wait_for_user_input: false line from your rule.
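For reference, that would leave the rule looking like this (the same rule as above, with the explicit `steps:` key that rules require, and without the `wait_for_user_input` line):

```yaml
rules:
- rule: provide help anytime the user asks for it
  steps:
  - intent: need_help
  - action: action_provide_help
```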

Thank you for your quick answer.

I had it like that (without the wait_for_user_input: false line) before, but then training failed because the rule contradicted my story data (where action_provide_help is followed by another action). So I cannot drop the line.
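To illustrate, my stories follow this pattern (utter_follow_up is just a stand-in here for whatever the bot actually does next):

```yaml
stories:
- story: help is requested mid-conversation
  steps:
  - intent: need_help
  - action: action_provide_help
  - action: utter_follow_up   # placeholder for the actual follow-up action
```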

Any other ideas how I can get my bot to always call action_provide_help after a need_help intent?

That probably explains the problem you initially reported. You need to address the conflict. If you’ve checked your bot into a repo, share the link to the repo.

I’m not sure I understand: there is no conflict any more, since after action_provide_help there is always another bot action. So all my stories follow the pattern of the rule: any time someone asks for help in a story, the bot answers with action_provide_help and then does something else.

I just don’t understand why this rule is not being followed in the interactive mode.

I suspect that there is a problem with my rules file, or rather the way it is being read: I’ve just discovered that when I use rasa shell, the behavior of my bot is completely different from anything I’ve trained it to do, and it does not respect the mentioned provide-help rule. It does respect some of the rules, though: the debug logs tell me that sometimes it recognizes that there is a rule for specific behavior, and sometimes it does not. I opened a separate thread about it, but after I edited it to add some information, it got closed because you guys suspected me of being a bot myself :smiley:

So to make a long story short, it seems to me that some rules are being read and followed and some are not…

it seems to me that some rules are being read and followed and some are not

When the expected rule is not followed, I would look closely at the debug log to see why this is happening.

This time in rasa shell, the assistant did find the “need help → provide help” rule, but the same thing happened with a different rule. The rule goes as follows:

- rule: activate get-aggregations
  steps:
  - intent: get_agg_from_dataset
  - action: action_reset_slots
  - action: get_agg_from_dataset_form
  - active_loop: get_agg_from_dataset_form

I typed something which made the assistant recognize the correct intent get_agg_from_dataset; it also seems to have found the rule, but then it didn’t follow through on it. The debug logs said:

rasa.core.policies.ted_policy - User intent lead to 'action_reset_slots'.
rasa.core.policies.ted_policy - User text lead to 'utter_before_phase_two_form'.
rasa.core.policies.ted_policy - TED predicted 'utter_before_phase_two_form' based on user text.
rasa.core.policies.rule_policy - There is a rule for the next action 'action_reset_slots'.
rasa.engine.graph - Node 'select_prediction' running 'DefaultPolicyPredictionEnsemble.combine_predictions_from_kwargs'.
rasa.core.policies.ensemble - Made e2e prediction using user text.
rasa.core.policies.ensemble - Added 'DefinePrevUserUtteredFeaturization(True)' event.
rasa.core.policies.ensemble - Predicted next action using TEDPolicy.
An end-to-end prediction was made which has triggered the 2nd execution of the default action 'action_extract_slots'.

Why has an end-to-end prediction been made here? I have one end-to-end example in my story data, but it doesn’t apply here at all…

Looks like you’re using end-to-end stories. This is experimental and should not be used in production.
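For reference, an end-to-end story is one whose steps use the raw user text (a `user:` step) instead of an intent label, e.g. (hypothetical example):

```yaml
stories:
- story: end-to-end example
  steps:
  - user: "I need some help here"   # raw-text step: this makes the story end-to-end
  - action: action_provide_help
```

Even a single story like this in the training data enables end-to-end prediction for the whole model, which is why TEDPolicy can predict from user text and win out over the rule. Removing that story (or replacing the `user:` step with an `intent:` step) and retraining should stop the end-to-end predictions.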