I’ve encountered the following problem: I have a rule stipulating that every time a user asks for help, the bot should respond with the custom action action_provide_help:
- rule: provide help anytime the user asks for it
  steps:
  - intent: need_help
  - action: action_provide_help
  wait_for_user_input: false
When I train with rasa interactive, however, my bot sometimes suggests different actions after the need_help intent (mainly in places where I haven't used the need_help intent before). The confidence score for action_provide_help in these cases is only 0.05, even though I don't have a single story where a need_help intent is not followed by action_provide_help.
My question is:
Why does my bot use NLU in the first place, given that there is a rule?
I'm going to add the content of my config file, as I suspect that this is a configuration issue.
I had it like that (without the wait_for_user_input: false line) before, but then training failed because the rule contradicted my story data (where another action is called after action_provide_help). So I cannot drop the line.
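To spell out the pattern, here is a minimal sketch of a story that is compatible with the rule above (the story name and the follow-up action utter_continue are hypothetical placeholders, not from my actual training data):

```yaml
stories:
- story: user asks for help mid-conversation   # hypothetical story
  steps:
  - intent: need_help
  - action: action_provide_help
  - action: utter_continue   # hypothetical follow-up action after helping
```

With wait_for_user_input: false, the rule only claims the single action_provide_help prediction and then hands control back to the policies instead of waiting for the user, so a story like this no longer counts as a contradiction.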
Any other ideas how I can get my bot to always call action_provide_help after a need_help intent?
That probably explains the problem you initially reported. You need to address the conflict. If you’ve checked your bot into a repo, share the link to the repo.
I'm not sure I understand - there is no conflict anymore; after action_provide_help there is always another bot action. So all my stories follow the rule's pattern: any time someone asks for help in a story, the bot answers with action_provide_help and then does something else.
I just don’t understand why this rule is not being followed in the interactive mode.
I suspect that there is a problem with my rules file, or rather with the way it is being read. I've just discovered that when I use rasa shell, my bot's behavior is completely different from anything I've trained it to do, and it does not respect the provide-help rule mentioned above. It does respect some of the rules, though: the debug logs tell me that it sometimes recognizes there is a rule for specific behavior, and sometimes does not. I opened a separate thread about this, but after I edited it to add some information, it got closed because I was suspected of being a bot myself.
So to make a long story short, it seems to me that some rules are being read and followed and some are not…
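One thing I can do to check whether the trainer even sees my rules and stories consistently is Rasa's built-in data validator (a sketch; it assumes a standard Rasa 3.x project layout with a data/ directory and domain.yml, and must be run from the project root):

```shell
# Report rule/story conflicts and other data issues before training
rasa data validate

# Optionally treat warnings as errors
rasa data validate --fail-on-warnings
```

If a rule is silently ignored because of a conflict or a malformed file, this should surface it.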
I typed something which made the assistant recognize the correct intent get_agg_from_dataset; it also seems to have found the rule, but then it didn't follow through on it. The debug logs said:
rasa.core.policies.ted_policy - User intent lead to 'action_reset_slots'.
rasa.core.policies.ted_policy - User text lead to 'utter_before_phase_two_form'.
rasa.core.policies.ted_policy - TED predicted 'utter_before_phase_two_form' based on user text.
…
rasa.core.policies.rule_policy - There is a rule for the next action 'action_reset_slots'.
rasa.engine.graph - Node 'select_prediction' running 'DefaultPolicyPredictionEnsemble.combine_predictions_from_kwargs'.
rasa.core.policies.ensemble - Made e2e prediction using user text.
rasa.core.policies.ensemble - Added 'DefinePrevUserUtteredFeaturization(True)' event.
rasa.core.policies.ensemble - Predicted next action using TEDPolicy.
An end-to-end prediction was made, which triggered the second execution of the default action 'action_extract_slots'.
Why has an end-to-end prediction been made here?
I have one end-to-end example in my story data, but it doesn't apply here at all…
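For reference, this is a sketch of what I understand a Rasa 3.x policies section with RulePolicy enabled should look like (the hyperparameter values are placeholders, not my actual config):

```yaml
# Sketch of a config.yml policies section (placeholder values)
policies:
- name: MemoizationPolicy
- name: RulePolicy
- name: TEDPolicy
  max_history: 5   # placeholder value
  epochs: 100      # placeholder value
```

My understanding is that rule predictions take precedence over TED as long as RulePolicy is listed here, and that TED only makes end-to-end predictions when the training data contains end-to-end stories, so removing my single e2e story example might stop the e2e prediction path entirely.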