I'm trying to get an Action on my actions server to run another Action by returning FollowupAction('name_action'), as described in Events. Unfortunately, it starts executing far more actions than I asked it to.
Basic example: I want 'action_utter_goodbye' to run 'action_utter_greet'. Yet this is what I'm getting when I try that.
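For reference, here is a minimal sketch of the pattern I'm using. To keep the snippet self-contained, the `Action` base class and `FollowupAction` below are simplified stand-ins; in a real actions server they come from `rasa_sdk` and `rasa_sdk.events`.

```python
# Simplified stand-ins for the SDK pieces; in a real project these are
# `from rasa_sdk import Action` and `from rasa_sdk.events import FollowupAction`.
class Action:
    def name(self):
        raise NotImplementedError


def FollowupAction(name):
    # The real event is a dict telling Core which action to run next.
    return {"event": "followup", "name": name}


class ActionUtterGoodbye(Action):
    def name(self):
        return "action_utter_goodbye"

    def run(self, dispatcher, tracker, domain):
        dispatcher.utter_message("Goodbye!")
        # Ask Core to run exactly one more action after this one.
        return [FollowupAction("action_utter_greet")]
```

My expectation is that returning that single event schedules one follow-up action, nothing more.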
How can this be solved? I'm also using AugmentedMemoizationPolicy and have the same problem with FollowupAction. And utter_template doesn't work as expected either.
We have a PR open to document the AugmentedMemoizationPolicy in more detail, but from reading it I believe this is the issue. The information about the policy below leads me to think that is why this is occurring.
The AugmentedMemoizationPolicy remembers examples from training stories for up to max_history turns.
It has a forgetting mechanism that forgets a certain number of steps in the conversation history and tries to find a match in your stories with the reduced history. If a match is found, it predicts the next action with confidence 1.0; otherwise it predicts None with confidence 0.0.
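That lookup can be sketched roughly as follows. This is a toy model of the mechanism, not the actual Rasa implementation: memorised story snippets map a history of turns to the next action, and on prediction the policy progressively forgets the oldest turns until a match is found.

```python
# Toy model of the AugmentedMemoizationPolicy lookup (illustrative only,
# not the real Rasa code). Keys are tuples of conversation turns.
MAX_HISTORY = 3
memorised = {
    ("greet", "action_utter_greet"): "action_listen",
    ("goodbye",): "action_utter_goodbye",
}


def predict(history):
    # Keep at most the last MAX_HISTORY turns, then drop the oldest
    # turns one by one until the truncated history matches a story.
    history = tuple(history[-MAX_HISTORY:])
    for start in range(len(history)):
        match = memorised.get(history[start:])
        if match is not None:
            return match, 1.0   # exact match -> confidence 1.0
    return None, 0.0            # no match at any truncation -> 0.0
```

So even a history the policy has never seen in full can still be matched, as long as its tail matches a training story.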
Note
If you need to recall turns from training dialogues where some slots might not be set at prediction time, add relevant stories without such slots to your training data, e.g. reminder stories.
Since slots that are set at some point in the past are preserved in all subsequent feature vectors until they are set to None, this policy can recall turns up to max_history from training stories at prediction time, even if additional slots were filled earlier in the current dialogue.
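Concretely, the Note above amounts to adding a slot-free variant of the story. A hypothetical example in the old Markdown story format (story name and intent are made up):

```
## greet reminder story (no slot set)
* greet_intent
  - action_utter_greet
```

With such a story present, the policy can still find a memoized match when the slot happens not to be set at prediction time.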
Hi Brian, thank you for checking this issue. For me it was the KerasPolicy that I had to remove to stop the system acting up with the FollowupAction return. I am still using the AugmentedMemoizationPolicy, and I get the follow-up action as I expect.
On your question about which NLU pipeline I'm using: none at the moment, just the RegexInterpreter passed to the Agent so that I can test intents directly with messages like "/greet_intent" and so on.
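What the RegexInterpreter does with those messages can be sketched like this. This is a simplified stand-in, not the real class: a leading "/" means the rest of the message is taken as the intent name itself, so no trained NLU model is needed.

```python
import re


def regex_interpret(text):
    # Simplified stand-in for Rasa's RegexInterpreter: messages starting
    # with "/" are treated as the literal intent name.
    match = re.match(r"^/([A-Za-z_]\w*)", text)
    if match is None:
        # Not a "/intent" message; a real pipeline would do NLU here.
        return {"text": text, "intent": {"name": None, "confidence": 0.0}}
    return {
        "text": text,
        "intent": {"name": match.group(1), "confidence": 1.0},
    }
```

This makes it easy to drive Core directly from stories-style intent messages while debugging policies.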
As mentioned, I think there might be an issue with the KerasPolicy and the FollowupAction.
Maybe. I was testing without a pipeline, just pure Rasa Core intents and actions. Either way, I removed the policies one by one in my tests, and only when the KerasPolicy was gone did the strange behaviour stop.