Decision tree policy for core

I was having difficulty training a policy for conversations with wildly different but deterministic flows; I think using a decision tree might help. Any idea how I can incorporate that?

I think you can use the MemoizationPolicy to create deterministic flows; however, it will not account for unlikely scenarios.

Tried that, but the MemoizationPolicy isn't able to learn existing flows, can't understand why.

Strange. Set augmentation to 0, otherwise it randomly glues stories together.

Thanks I will try that

Didn't help. I have about ~300 stories.

can you run with --debug to see what the policy predicts?

It just says:

    There is no memorised next action
    Predicted next action 'action_listen' with prob 0.00.
    Action 'action_listen' ended with events '[]'
    topic: None

keep in mind, memoization looks at the entire history of the conversation in the tracker

if you want to ignore that, you have to create headless stories

_intent_hello
- utter_hello

and set augmentation to 0
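To see why the full history matters, here is a minimal hand-rolled sketch of the memoization idea (illustrative only, not rasa_core's actual code, and all intent/action names are made up): the policy memorizes exact event sequences from the training stories, so a live conversation whose history does not exactly match a memorized sequence gets no prediction at all.

```python
# Minimal sketch of exact-history memoization (illustrative, not rasa_core's code).
# Each training story is a list of turns; the policy maps the full history
# seen so far to the next event.

def memorize(stories):
    lookup = {}
    for story in stories:
        for i in range(1, len(story)):
            history = tuple(story[:i])   # everything up to this point...
            lookup[history] = story[i]   # ...predicts the next event
    return lookup

stories = [["intent_hello", "utter_hello", "intent_bye", "utter_bye"]]
lookup = memorize(stories)

# Exact prefix match -> prediction:
print(lookup.get(("intent_hello", "utter_hello", "intent_bye")))  # utter_bye

# Same last intent, but an unseen earlier history -> no memorised next action:
print(lookup.get(("intent_other", "utter_other", "intent_bye")))  # None
```

A headless story like the one above effectively adds short histories to the lookup table, so a match can succeed even when the earlier part of the conversation differs.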

I like this explanation:

class MemoizationPolicy(Policy):
    """The policy that remembers exact examples of
        `max_history` turns from training stories.
        Since `slots` that are set some time in the past are
        preserved in all future feature vectors until they are set
        to None, this policy implicitly remembers and most importantly
        recalls examples in the context of the current dialogue
        longer than `max_history`.
        This policy is not supposed to be the only policy in an ensemble,
        it is optimized for precision and not recall.
        It should get 100% precision because it emits probabilities of 1.0
        along its predictions, which makes every mistake fatal as
        no other policy can overrule it.
        If it is needed to recall turns from training dialogues where
        some slots might not be set during prediction time, and there are
        training stories for this, use AugmentedMemoizationPolicy.
    """

I am somewhat confused: what effect does max_history have on memoization?

In order to predict the next action, this policy compares the tracker against the training stories using a window of max_history turns, let's say 1.

* hello
- utter_hello
* how_are_you
- utter_fine
* what_can_you_do
- help_with_mails

suppose this is your conversation with the user so far

When you use memoization with max_history 1, in order to predict what should happen next, it will consider only the single previous turn and compare it with your training data. So your training data must contain

* what_can_you_do
- help_with_mails
* how_to_send_mail
- send_by_button

Only then will send_by_button be predicted as the next action for that intent.
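The truncation above can be sketched in a few lines of plain Python (illustrative only, not rasa_core's implementation): with max_history=1 the lookup key is just the last turn, so whatever happened earlier in the conversation is ignored.

```python
# Sketch of memoization with a max_history window (illustrative).

def memorize(stories, max_history):
    lookup = {}
    for story in stories:
        for i in range(1, len(story)):
            # key = only the last max_history turns before this point
            key = tuple(story[max(0, i - max_history):i])
            lookup[key] = story[i]
    return lookup

training = [[
    "what_can_you_do", "help_with_mails",
    "how_to_send_mail", "send_by_button",
]]
lookup = memorize(training, max_history=1)

# The long conversation so far doesn't matter; only the last turn is the key:
print(lookup.get(("how_to_send_mail",)))  # send_by_button
```

Raising max_history makes the key longer, so a match requires that a longer stretch of the conversation appears verbatim in a training story.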

Tbh, I haven't used Memoization just by itself, but I use it to make sure that some really certain flows don't break (the happy path) and use Keras for the unlikely scenarios.

But I will still answer your first question:

You can override the model architecture to implement your own policies for decision trees, such as CART, and pass your policy to the agent:

from your_policy import DecisionTree
from rasa_core.agent import Agent

agent = Agent("domain.yml", policies=[DecisionTree(your_custom_features)])
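For the tree itself you would normally reach for a library (e.g. scikit-learn's DecisionTreeClassifier for CART). As a self-contained illustration of the technique, here is a tiny ID3-style decision tree over discrete dialogue features; every name in it (features, intents, actions) is hypothetical, and it is a sketch of the idea, not a drop-in rasa_core policy.

```python
import math
from collections import Counter

# Toy ID3-style decision tree over discrete dialogue features (illustrative).
# Each row is a dict of features; each label is the next action to take.

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def build_tree(rows, labels, features):
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority action

    def gain(f):  # information gain of splitting on feature f
        remainder = 0.0
        for value in {r[f] for r in rows}:
            subset = [l for r, l in zip(rows, labels) if r[f] == value]
            remainder += len(subset) / len(labels) * entropy(subset)
        return entropy(labels) - remainder

    best = max(features, key=gain)
    node = {"feature": best, "children": {}}
    for value in {r[best] for r in rows}:
        sub_rows = [r for r in rows if r[best] == value]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == value]
        node["children"][value] = build_tree(
            sub_rows, sub_labels, [f for f in features if f != best])
    return node

def predict(tree, row):
    while isinstance(tree, dict):  # descend until we hit a leaf (an action name)
        tree = tree["children"][row[tree["feature"]]]
    return tree

# Hypothetical training data: (last intent, whether a slot is filled) -> next action
rows = [
    {"intent": "hello", "slot_set": False},
    {"intent": "how_to_send_mail", "slot_set": False},
    {"intent": "how_to_send_mail", "slot_set": True},
]
labels = ["utter_hello", "utter_ask_recipient", "send_by_button"]
tree = build_tree(rows, labels, ["intent", "slot_set"])

print(predict(tree, {"intent": "how_to_send_mail", "slot_set": True}))  # send_by_button
```

A custom policy would wrap something like this: featurize the tracker into such a row, then return the predicted action.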

What if I train the memoization policy on the longer, predictable flows and separately train the Keras policy on the shorter/stochastic flows, and use them together? Could that work?

Well, technically an ensemble is already using them together.

For each prediction, the ensemble checks which policy gives the better prediction.

You can pass both policies to the agent; I am not sure how you would route your parse to two different models and check which one is right, though.
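The ensemble behaviour the docstring hints at can be sketched like this (illustrative only, not rasa_core's PolicyEnsemble): each policy returns an action plus a confidence, and the ensemble takes the most confident one, so memoization's 1.0 wins whenever it has an exact match, and the learned fallback decides otherwise.

```python
# Sketch of a max-confidence policy ensemble (illustrative, not rasa_core's code).

def memo_policy(history, lookup):
    # Exact match -> full confidence, otherwise abstain.
    action = lookup.get(tuple(history))
    return (action, 1.0) if action is not None else (None, 0.0)

def fallback_policy(history):
    # Stand-in for a learned policy (e.g. Keras): always has *some* guess.
    return ("action_default", 0.3)

def ensemble(history, lookup):
    predictions = [memo_policy(history, lookup), fallback_policy(history)]
    return max(predictions, key=lambda p: p[1])[0]

lookup = {("intent_hello",): "utter_hello"}

print(ensemble(["intent_hello"], lookup))   # utter_hello (memoization wins)
print(ensemble(["intent_unseen"], lookup))  # action_default (fallback wins)
```

This is also why the docstring calls every memoization mistake fatal: a confidence of 1.0 can never be outvoted by the other policies.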