TEDPolicy starts in middle of story (unwanted), KerasPolicy does not

I have noticed that whenever I train my chatbot with TEDPolicy, the chatbot is likely to utter random responses extracted from the middle of one of my stories.

In some of my stories I have a question that requires the user to give a one-word ‘inform’ answer. If a user randomly decides to utter that ‘inform’ when not inside a story, the chatbot will still respond with the follow-up response from one of the stories, which seems unnatural.

Meanwhile, I do not have this issue with KerasPolicy, which triggers action_default_fallback if the user utters the inform at the start of the conversation without entering the story. How do I fix this behavior with TED, or is this a limitation of TEDPolicy?

Although I see the benefits this behavior can have, I do not have any stories that benefit from this scenario. I don’t want the chatbot responding “Got it! One pizza coming up” every time a user randomly says “Yes” to the chatbot.

TEDPolicy typically produces different confidences, so it may be possible to solve this by raising the core threshold of your fallback policy.
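A minimal sketch of that suggestion, assuming a Rasa 1.x `config.yml` (only the fallback part shown; 0.5 is an example value you would tune against the confidences TED actually produces):

```yaml
policies:
  - name: FallbackPolicy
    # Raise core_threshold until low-confidence mid-story
    # predictions trigger the fallback action instead.
    core_threshold: 0.5
    nlu_threshold: 0.4
    fallback_action_name: action_default_fallback
```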

Could you share your stories and point out where TED makes mistakes?

Here is one story where TEDPolicy makes a mistake: if the user just says ‘batman’, it will start in the middle of the story and respond with the following 3 responses.

I have also had cases where, if the user informs ‘batman’ without being in the story, it returns the first line, utter_correct_answer, followed by a default fallback response, which again does not make sense. If it helps, I am also using ‘inform batman’ in other quizzes, which are not much different from the screenshot below. I am using TEDPolicy with max_history 10 and batch size [64, 32], and AugmentedMemoizationPolicy with max_history 5.

My core threshold on the default fallback is 0.3.


What do you expect it to predict after inform_batman? Do you have a story for the correct behavior?

In the case where it jumps into the middle of a story? Nothing. The correct behavior would be to trigger the fallback policy, since no story begins with ‘inform_batman’. Like ‘affirm’, ‘inform_batman’ should only appear inside stories. Simply stating “Yes” or “Batman” at the start of a conversation should not jump into the middle of a quiz story. KerasPolicy does this sometimes too, but not as often as TEDPolicy.

It is not a learnable pattern to predict nothing, especially if your users actually do that. I’d come up with an action or utterance for it and create an appropriate story.
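A minimal sketch of that suggestion, assuming the Rasa 1.x Markdown story format and a hypothetical `utter_out_of_context` response:

```md
## inform outside of a quiz
* inform
  - utter_out_of_context
```

`utter_out_of_context` is an assumed response you would define in your domain (e.g. “I’m not sure what that refers to. Want to start a quiz?”). With a story like this in the training data, TED can learn that a bare inform at the start of a conversation maps to that response instead of jumping into the middle of a quiz story.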