Stories are mixed up with each other

Hey there :slight_smile:

in my use case there is one situation where a user walks through a three-step dialogue and can select different paths via buttons. While the first path (first story) works perfectly, in the second path (second story) only the first and second bot actions are performed correctly. The third bot action, which follows an affirm intent, is never the planned one, but instead the one that follows the affirm intent in the other story. To stick with the example: instead of “utter_auto_push_set” I always get “utter_upsell_info”. As mentioned before, the first story works without any problems.

As my bot is basically an FAQ bot, I have set max_history to 1 in my MemoizationPolicy.

First story:

* no_response_to_ads
  - utter_what_type_of_ad
* ads_nondating
  - utter_no_response_to_nondating_ads
* affirm
  - utter_upsell_info

Second story:

* no_response_to_ads
  - utter_what_type_of_ad
* ads_dating
  - utter_no_response_to_dating_ads
* affirm
  - utter_auto_push_set

Maybe you have some advice for me on what is going on here.

Thanks in advance!

Are you using any other policy? What does your config file look like?

Theoretically, a max_history of 1 should work. Are you sure the intents are predicted correctly? Do you have any other stories that use the intent affirm?

Just to be sure, does it work if you set max_history to 2?

Hey Tanja,

Thank you for your answer. Yes, I have a few stories with affirm in them. And this is what my config looks like:

language: de
pipeline: supervised_embeddings
policies:
  - name: MemoizationPolicy
    max_history: 1
  - name: KerasPolicy
  - name: MappingPolicy
  - name: FallbackPolicy
    nlu_threshold: 0.4
    core_threshold: 0.4
    ambiguity_threshold: 0.1
    fallback_action_name: action_default_fallback

For a test I have set max_history in my MemoizationPolicy to 2 – and it works well this way. But now most of my one-turn stories (user asks – bot acts/utters) don’t work anymore and lead to a fallback.

I was once given the advice to use a mapping for my one-turners and to try to handle the others with a higher max_history value for the MemoizationPolicy. Do you think this might help solve the problem?

Edit: I’ve gained some further insights that might help. I always stumble upon these problems whenever there is an intent within a multi-turn story that is not exclusively part of that story but also appears within others (such as affirm or deny). In these cases, the bot either produces my fallback utterance or (seemingly) randomly picks one of the utter actions that immediately follow the specific intent in any of my relevant stories.

Hey Tanja,

It looks like I found a solution. For all my FAQ one-turners I have mapped the intents to the corresponding actions and at the same time set the MemoizationPolicy back to its default. This seems to work well for most of my cases so far. :slight_smile:
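
For reference, this is roughly what such a mapping looks like in the domain file – the MappingPolicy picks up the triggers key on an intent (the intent and action names below are just placeholders, not my real ones):

intents:
  - faq_opening_hours:
      triggers: utter_opening_hours
  - faq_pricing:
      triggers: utter_pricing

This way the one-turners are answered deterministically by the MappingPolicy, and the MemoizationPolicy only has to cover the multi-turn stories.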

Hey @Tanja,

Unfortunately I need to re-open this thread, as I am stumbling upon these problems again:

Whenever there is an intent in my story that also appears in another story (usually affirm or deny), the bot “randomly” picks, in response to this intent, an action that follows the affirm/deny intent in any of my stories. Sometimes the bot instead presents my fallback utterance after affirm/deny. There is a striking pattern: this does not happen if the story is tested right at the beginning of a new conversation. Only if another story was initiated in the conversation beforehand does the bot fail after affirm/deny in the new story.

I have mapped most of my intents to actions, but of course affirm/deny are not mapped to a specific action. I use the MemoizationPolicy with a max_history of 6:

language: de
pipeline: supervised_embeddings
policies:
  - name: MemoizationPolicy
    max_history: 6
  - name: KerasPolicy
  - name: MappingPolicy
  - name: FallbackPolicy
    nlu_threshold: 0.4
    core_threshold: 0.4
    ambiguity_threshold: 0.1
    fallback_action_name: action_default_fallback

Maybe you have an idea of how I can solve this problem. :slight_smile:

Thank you in advance!

Sebastian

I might have found a solution. By setting max_history to 2, everything seems to work now. I am still a little confused about max_history in general and I’m not sure if I understand the docs correctly.

Let’s say I have a story with three intent-action pairs like this:

## example story

* greet
  - utter_greet_back
* ask_for_help
  - utter_offer_help
* thank
  - utter_you_are_welcome

Let’s imagine the bot has presented utter_offer_help and the user has just typed in “Thank you”. The bot now tries to interpret that input to select the next (re)action. Let’s say it has high confidence in interpreting “Thank you” as the intent thank. As far as I understood, with a max_history of 2 the bot would now also take the last two “steps” of the actual dialogue and check whether it finds a resemblance of these two steps (+ the just-interpreted intent thank) in any story. The two steps in this case would be the action utter_offer_help and the preceding intent ask_for_help. If it does not find all three steps in this exact combination, it will utter_default; if it finds them, it will execute the action that corresponds to the detected story. Is this correct so far?
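
If my understanding is right, with a max_history of 2 the policy would try to match a slice like this against the training stories:

* ask_for_help
  - utter_offer_help
* thank

Since this slice appears in the example story above, the memoized next action – utter_you_are_welcome – would be predicted; if the slice did not appear in any story, no prediction would be made and the fallback would kick in.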

I am still a little confused and am trying to understand why a max_history of 6 led to failure in my case, while a max_history of 2 now seems to work.

Thanks in advance!

Your explanation sounds right. If a max_history of 6 did not work, it was most likely because your conversation did not fully match any story in the training data, as all 6 previous actions need to match. Can you verify that?
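
To illustrate with the stories from earlier in this thread: if the user first went through the greet story and then started the ads story, the recent history of the conversation might look like this:

* greet
  - utter_greet_back
* no_response_to_ads
  - utter_what_type_of_ad
* ads_dating
  - utter_no_response_to_dating_ads
* affirm

With a max_history of 6 the window reaches back into the greet story, and no single training story contains this combined sequence, so the MemoizationPolicy cannot predict an action. With a max_history of 2 the window only covers the end of the current story, which does match.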

Hey Tanja,

yes – so far everything works fine, and I can verify it. :slight_smile:

I have one last question regarding your statement that “all 6 previous actions need to match”: what exactly is meant by “action” here? Does max_history only concern the previous bot actions, or is it also sensitive to the intents that were part of the last – let’s call them – events of the conversation?

Have a nice day!

Sorry for the late reply. By “action” I was referring to all kinds of events that happened before, so both bot utterances/actions and user utterances.

Thank you. :slight_smile: