I think I have mentioned this before; I might put together a tutorial about stories and how each policy works in Rasa Core soon, since I notice this question a lot. Bear in mind that my own understanding of how Rasa Core works internally is fairly rough.
The first thing to understand is how each policy "predicts" the next action in a conversation.
Memoization Policy - it literally memorises your training data. Two key parameters matter:
max_history - how far back in a story the bot looks when predicting the next action. Keeping it at 3 is usually a good default: the policy compares the current tracker against your stories over the last 3 conversation turns and checks for an exact match with your training data. The moment there is no match, prediction falls back to the Keras policy.
augmentation_factor - this defaults to 50, and it creates new training stories by stitching your existing stories together into longer ones. You can set it to 0 if you want the bot to predict stories exactly as you wrote them in your training data.
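As a rough sketch of where these parameters live, here is a policy configuration in the YAML style later Rasa versions use (names taken from the docs, but check against your installed version):

```yaml
policies:
  - name: MemoizationPolicy
    max_history: 3     # compare the tracker against the last 3 turns only
  - name: KerasPolicy
    max_history: 3
```

The augmentation factor is usually passed at training time instead, e.g. an `--augmentation 0` flag on the train command in later versions (again, verify the flag name for your version).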
Keras Policy - it also uses max_history, together with slots, to build the input features for a neural network (an LSTM) that classifies the next action. An LSTM is an RNN that can use a sequence of features to determine the next item in that sequence. So if your max_history is 3, you are generating 6 sequential features (the last 3 intents and the last 3 actions), plus slot features if any, which are scored against every action in your domain.
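To make the featurization concrete, here is a minimal sketch (not Rasa's actual code; the intent/action vocabularies and the exact vector layout are assumptions) of how a max_history window of a conversation can be turned into a sequence of one-hot features for an LSTM-style classifier:

```python
# Hypothetical vocabularies standing in for what the domain would define.
INTENTS = ["greet", "ask_weather", "goodbye"]
ACTIONS = ["utter_greet", "action_weather", "utter_goodbye"]

def one_hot(value, vocabulary):
    """Encode one intent or action as a one-hot vector over its vocabulary."""
    return [1.0 if v == value else 0.0 for v in vocabulary]

def featurize(turns, max_history=3, slots=None):
    """Build one feature vector per turn in the max_history window.

    `turns` is a list of (intent, action) pairs. Each step contributes an
    intent one-hot plus an action one-hot (so 3 intents + 3 actions = 6
    sequential features when max_history=3), and slot values, if any, are
    appended to every step. Short conversations are padded at the front.
    """
    window = turns[-max_history:]
    window = [(None, None)] * (max_history - len(window)) + window
    slots = slots or []
    return [one_hot(i, INTENTS) + one_hot(a, ACTIONS) + slots
            for i, a in window]

features = featurize(
    [("greet", "utter_greet"), ("ask_weather", "action_weather")],
    max_history=3,
    slots=[1.0],  # e.g. a "location slot is filled" feature
)
# -> 3 time steps, each 3 + 3 + 1 = 7 values wide
```

The point of the sketch is just that the classifier never sees raw text: it sees a fixed-length window of encoded intents, actions, and slots, which is why max_history and slot design matter so much for prediction quality.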
I also notice you are talking about a flowchart. Keep in mind that Rasa Core stories aren't rules; they are examples used to train a model to generalise patterns from conversation sequences. The model will learn, for instance, that action X will most likely occur if the current intent is X, the previous intent was A, and the previous action was B. If action X reappears in another conversation in a different context, the model can become confused about which action is the likely one, and that leads to bad predictions. The more confusion there is, the worse the predictions will be.
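To illustrate, here are two made-up stories in the Markdown story format (all names invented) where the same action follows the same intent, but what should happen next differs; the model has to rely on the earlier turns in the window to tell the two situations apart:

```
## weather after greeting
* greet
  - utter_greet
* ask_weather
  - action_check_weather
  - utter_goodbye

## weather straight away
* ask_weather
  - action_check_weather
  - utter_ask_location
```

If max_history is too short to cover the turns that distinguish these two stories, both look identical to the model and the prediction after `action_check_weather` becomes a coin flip.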
You should provide a lot more examples if you want your model to generalise and predict an action X given a sequence in the conversation, and also check how far back in the sequence the context is actually useful, since that is what max_history controls.
Hope this helps!