Does Rasa strictly follow stories? I trained a bot on roughly 10 stories, and when I converse with it, it seems to be fitting the live conversation into one of those stories. Does it work this way, or does it try to respond according to the query? I trained it with the TF embedding pipeline. Can someone also suggest a blog that covers improving the bot's performance?
The key is to understand the different policies. I will try my best here, though I may not be completely correct.
- Memoization Policy - This policy essentially copies all your training stories into memory. On each parse it builds a storyline from the tracker object and checks whether the conversation follows any of the stories in the training data. It does not handle stateless stories well, however, because the default augmentation randomly glues different stateless stories together to create new paths. So if you want a strictly rule-based bot, the Memoization Policy is your best bet; make sure you set --augmentation to 0 if, for a given intent, you always want a particular action ALL the time, e.g.:
- intent_thankyou - utter_thanks
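To make the memoization idea concrete, here is a toy sketch in Python of how such a lookup-based policy could work. The structure and names are illustrative assumptions, not Rasa's actual implementation: training stories are memorized as exact (history → next action) pairs, and anything outside those pairs returns nothing.

```python
# Toy sketch of a memoization-style policy (illustrative, NOT Rasa's code).
# Each story is a sequence of events: user intents and bot actions.
stories = [
    ["intent_greet", "utter_greet", "intent_thankyou", "utter_thanks"],
    ["intent_thankyou", "utter_thanks"],
]

def train(stories, max_history=2):
    """Memorize every (recent history -> bot action) pair seen in training."""
    lookup = {}
    for story in stories:
        for i, event in enumerate(story):
            if event.startswith("utter_"):  # only bot actions are predicted
                key = tuple(story[max(0, i - max_history):i])
                lookup[key] = event
    return lookup

lookup = train(stories)

def predict(lookup, tracker_events, max_history=2):
    """Exact match or nothing: a memoization policy cannot generalize."""
    key = tuple(tracker_events[-max_history:])
    return lookup.get(key)  # None when the conversation deviates

print(predict(lookup, ["intent_thankyou"]))     # utter_thanks
print(predict(lookup, ["intent_order_pizza"]))  # None -> falls through
```

This is why augmentation matters: gluing stories together creates histories that are not in your hand-written training data, so the exact-match lookup no longer behaves strictly.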
- Keras Policy - This policy uses a sequential LSTM model to predict the next action based on a certain number of features. Consider a state machine where you would like to predict the state of the conversation:
- State 1 - place_order
- State 2 - book_order
In order to go from State 1 to State 2, you need certain information from the user (these are your slots). How you retrieve these features during a conversation, and how long into the conversation they remain relevant for reaching State 2, is what the max_history flag controls.
If the Keras Policy is part of your training pipeline, it will fit a model on a set of features (current intent, previous action, slots, previous intent), framed as a multi-class classification problem: predict the next action from the list of actions in your domain file. An LSTM (an RNN) can retain some information across part of the conversation, since it is sequence-driven, and this helps in predicting the next action.
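As a rough sketch of what "a set of features" means here, each conversation turn can be encoded as one-hot vectors for the intent and previous action plus binary flags for filled slots; a window of the last max_history turns is then fed to the classifier. The intent, action, and slot names below are hypothetical, and the real Rasa featurizer is more involved:

```python
# Illustrative turn featurization for a next-action classifier
# (hypothetical names; NOT Rasa's actual featurizer).
INTENTS = ["greet", "place_order", "thankyou"]
ACTIONS = ["action_listen", "utter_greet", "utter_confirm_order", "utter_thanks"]
SLOTS = ["address", "payment_method"]

def featurize_turn(intent, prev_action, filled_slots):
    # One-hot intent + one-hot previous action + binary slot flags.
    vec = [1.0 if i == intent else 0.0 for i in INTENTS]
    vec += [1.0 if a == prev_action else 0.0 for a in ACTIONS]
    vec += [1.0 if s in filled_slots else 0.0 for s in SLOTS]
    return vec

# With max_history=2 the LSTM would see the two most recent turn
# vectors as a sequence and be trained to pick the next action out
# of ACTIONS (multi-class classification over the domain's actions).
turn = featurize_turn("place_order", "utter_greet", {"address"})
print(len(turn))  # 3 intents + 4 actions + 2 slots = 9 features
```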
- Embedding Policy - I have not used it, though there is a pretty good explanation in the documentation. I don't fully understand attention-based LSTMs myself, so I won't be able to explain it.
There is also the sklearn_policy if you think a linear classifier will work better with your data.
Overall, the key to understanding Rasa Core is to understand the different features it considers in order to predict the next action.
If you would like better performance, something I have long believed is to reduce the dependency on the ML policy. Keras, or any classifier in a state-transition model, is there to guide the user to the next best action in case the user has deviated from the defined path; the goal should be to guide the user back to the flow you would like them to continue.
I hope this helps. If you see somewhere that the explanation is incorrect, please reply.
How can I do that?
When you run the train command, pass the --augmentation flag. It is defined in the training script as:
parser.add_argument('--augmentation', type=int, default=50, help="how much data augmentation to use during training")
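For example, with the older rasa_core CLI, disabling augmentation could look like the command below. The module path and the file/directory names are assumptions based on a typical project layout; adjust them to yours:

```shell
# Train the dialogue model with augmentation disabled, so the
# Memoization Policy only memorizes stories exactly as written.
python -m rasa_core.train \
  -d domain.yml \
  -s data/stories.md \
  -o models/dialogue \
  --augmentation 0
```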