We are currently having trouble with the flow of our conversation. I've seen that someone has already asked something similar, but I'm still in the dark about what to do with messages like these, which I've been receiving repeatedly:
DEBUG rasa_core.policies.memoization - Current tracker state [None, None, None, {}, {'entity_PER': 1.0, 'prev_action_listen': 1.0, 'slot_PER_0': 1.0, 'intent_name': 1.0}]
DEBUG rasa_core.policies.memoization - There is no memorised next action
DEBUG rasa_core.policies.ensemble - Predicted next action using policy_2_KerasPolicy
DEBUG rasa_core.policies.ensemble - Predicted next action 'action_check_per' with prob 1.00.
I'm a bit confused about the 'there is no memorised next action' message and about going back to checkpoints. There's already a predefined answer for each step of the conversation. Could the problem be with our stories? I've noticed the bot doesn't seem to follow the path in the flowchart the way it was supposed to.
Does anyone have an idea what we could do to avoid these log messages and improve our bot regarding these issues?
I think I have mentioned this somewhere before: I might create a tutorial about stories and how each policy works in Rasa Core very soon, since I notice this question a lot. Bear in mind that I also have only a naive understanding of how Rasa Core works in reality.
The first thing you have to realise is how each policy "predicts" what to do next in the conversation.
Memoization Policy - It literally memorises your training data, but there are two key parameters:
max_history - How far back in the stories you would like the bot to look in order to predict the next action. Keep it at 3 to start with: when you train your bot with the Memoization Policy, it compares the tracker against your stories over the last 3 conversation turns and checks whether it matches any of your training data. The moment it doesn't match, prediction falls back to the Keras Policy.
augmentation_factor - By default this is 50, and it creates new training stories by stitching your existing stories together into longer ones. It can be set to 0 if you want the bot to predict exactly the stories you have in your training data.
Keras Policy - It also uses max_history to build the features, along with slots, as input to a neural network (an LSTM) that acts as a classifier predicting the next action. An LSTM is an RNN that can use features in sequence to determine the next element of the sequence. So if your max_history is 3, you are generating 6 sequential features (the last 3 intents and the last 3 actions), plus slot features if any, for each action in your domain. A configuration sketch is shown right after this list.
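To make this concrete, here is a minimal training sketch. It assumes the rasa_core 0.x-style Python API and placeholder file names (domain.yml, data/stories.md, models/dialogue); exact imports and signatures vary between versions, so treat it as an illustration rather than copy-paste code.

```python
from rasa_core.agent import Agent
from rasa_core.featurizers import (
    BinarySingleStateFeaturizer,
    MaxHistoryTrackerFeaturizer,
)
from rasa_core.policies.keras_policy import KerasPolicy
from rasa_core.policies.memoization import MemoizationPolicy

agent = Agent(
    "domain.yml",  # placeholder domain file
    policies=[
        # Memoization looks back max_history turns for an exact match
        # with the training stories; no match -> no memorised action.
        MemoizationPolicy(max_history=3),
        # Keras/LSTM generalises from the same window of turns plus slots.
        KerasPolicy(
            MaxHistoryTrackerFeaturizer(
                BinarySingleStateFeaturizer(), max_history=3
            )
        ),
    ],
)

# augmentation_factor=0 switches off story stitching, so the model is
# trained only on the stories exactly as you wrote them.
training_data = agent.load_data("data/stories.md", augmentation_factor=0)
agent.train(training_data)
agent.persist("models/dialogue")
```

When the Memoization Policy finds no matching story (the "There is no memorised next action" line in your log), the ensemble falls back to the Keras Policy, which is exactly what your last two log lines show.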
I also notice you are talking about a flowchart. Keep in mind that Rasa Core stories aren't rules but examples used to train the model to generalise a pattern based on a sequence in the conversation. The model will learn, for instance, that Action X will most likely occur if the current intent was X, the previous intent was A and the previous action was B. That means that if Action X reoccurs in another conversation in a different context, there can be confusion about which action is the likely one, which results in bad predictions. The more confusion there is, the worse the prediction is going to be.
You should provide a lot more story examples if you want your model to generalise and predict an Action X given a sequence in the conversation, and also check how far back in the conversation the sequence is actually useful, so you can set max_history accordingly. A short example follows below.
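For illustration, here are two hypothetical stories (all the intent, entity and action names besides action_check_per and PER are made up) that reuse the same action in different contexts, which is exactly the situation that can confuse the model if you only have a handful of examples:

```
## story: greet, then look up a person
* greet
  - utter_greet
* inform{"PER": "Alice"}
  - action_check_per
  - utter_result

## story: look up a person straight away
* inform{"PER": "Bob"}
  - action_check_per
  - utter_goodbye
```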
Thank you, @souvikg10, your answer actually helped a lot!
I'm new to this, and although I've read the documentation, I still had doubts. Your explanation about the policies was great (very clear) and it helped me understand better. I would love to see a tutorial about them if you create one in the future!