Dealing with uncertain actions

Hi,

I have a question about using Rasa Core with the HTTP API when predicted actions are uncertain.

Say the user utterance is ambiguous, so that we end up with a probability of 0.55 for action A and 0.45 for action B. In that case, we would like to ask the user “did you mean A or B or something else?” and use the user feedback to determine further action.

I could see three possibilities to achieve this:

  1. Modify the Rasa Core response to include not only the most likely action but the N most likely actions and their probabilities. Then we have some logic in our channel component that checks the probabilities and instead of returning the answer from action A (the most confident one), it passes an utterance to the user as shown above. When the user responds with B, action B is executed.

This approach has several drawbacks. Rasa Core will assume that action A was executed, since it has a higher probability. Therefore, action A is already in the tracker and it is not trivial to remove the last action from the tracker. Also, if action A triggered a side-effect (like an API call), it could be too late to instead execute action B.

  2. We have a special action, say ActionUncertain. If no single action is predicted with sufficient confidence, we predict this action. It then returns the utterance above and, if the user says B, we somehow hard-match this to action B (how?) – see the sketch after this list.

If I understand the inner workings of Rasa Core correctly, with this approach the ActionUncertain would be tracked in the Tracker and thus be part of the features created for the machine learning model to predict the next step. Consequently, we would need stories where this ActionUncertain appears – otherwise, the model cannot make sense of this feature. But introducing this action into existing stories seems cumbersome. Furthermore, since we are labelling real customer interactions with our service agents, this kind of action never occurs in that data.

  3. Use the fallback action. However, AFAIK, this only covers the case where we didn’t understand the user at all, not the case of asking whether they meant A or B.
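
To make option 2 a bit more concrete, here is a rough sketch of what such an ActionUncertain could look like as a custom action using the Python SDK (rasa_core_sdk). The action name, the button payloads and the way the answer is mapped back to action A or B are assumptions on my side, not something Rasa Core provides out of the box:

```python
from rasa_core_sdk import Action


class ActionUncertain(Action):
    """Asks the user which of the two most likely options they meant."""

    def name(self):
        # would have to be listed under `actions:` in the domain
        return "action_uncertain"

    def run(self, dispatcher, tracker, domain):
        # hypothetical payloads/intents that the stories would then
        # have to map to action A and action B respectively
        buttons = [
            {"title": "A", "payload": "/choose_a"},
            {"title": "B", "payload": "/choose_b"},
        ]
        dispatcher.utter_button_message(
            "Did you mean A or B, or something else?", buttons)
        return []
```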

I hope that there is a simple way for us to implement this use case. Any help would be welcome.

PS: Are there more Rasa Action SDKs on the roadmap, say for JS (node)?


I think an ambiguous user utterance should be handled at the NLU level. So we’d handle it with the Fallback policy – we’re actually looking into something like this at the moment and should hopefully be merging it in the near future.
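
For reference, adding the FallbackPolicy to the policy ensemble when training looks roughly like this (a sketch; the thresholds and file paths are just placeholders):

```python
from rasa_core.agent import Agent
from rasa_core.policies.memoization import MemoizationPolicy
from rasa_core.policies.keras_policy import KerasPolicy
from rasa_core.policies.fallback import FallbackPolicy

# falls back when the NLU confidence is below nlu_threshold, or when no
# other policy predicts an action with at least core_threshold confidence
fallback = FallbackPolicy(fallback_action_name="action_default_fallback",
                          nlu_threshold=0.4,
                          core_threshold=0.3)

agent = Agent("domain.yml",
              policies=[MemoizationPolicy(), KerasPolicy(), fallback])

training_data = agent.load_data("data/stories.md")
agent.train(training_data)
agent.persist("models/dialogue")
```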

As for more Action SDKs: not in the immediate future, but it’s something we might consider.

Have you tried this?

Thank you for the replies. For our case, however, we often find that the intent recognition is pretty confident but the predicted action is still ambiguous. Do you know of a solution for that?

we’re actually looking into something like this at the moment

Could you point me to a PR or issue so that I can have a look at this?

You can create a disambiguation policy in the policy ensemble alongside Memoization, Keras and Fallback.

There you can compare the difference in confidence between the top two actions and reply with a particular action; I guess the code would follow the same pattern as the fallback policy.
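
I haven’t tried it myself, but patterned on the FallbackPolicy it could look roughly like the sketch below. One caveat: a single policy only sees the tracker (and therefore the NLU confidences), so this version compares the top two intents; comparing the top two action scores would mean customizing the policy ensemble instead. The threshold, the action name and the omitted persist()/load() methods are assumptions:

```python
from rasa_core.policies.policy import Policy


class DisambiguationPolicy(Policy):
    """Predicts a clarification action when the top two intents are close."""

    def __init__(self, max_gap=0.15,
                 disambiguation_action_name="action_uncertain"):
        super(DisambiguationPolicy, self).__init__()
        self.max_gap = max_gap
        self.disambiguation_action_name = disambiguation_action_name

    def train(self, training_trackers, domain, **kwargs):
        # rule based, nothing to learn from the stories
        pass

    def predict_action_probabilities(self, tracker, domain):
        result = [0.0] * domain.num_actions

        # like the FallbackPolicy, only consider firing right after a user
        # message, otherwise it would also fire after the clarification ran
        if tracker.latest_action_name != "action_listen":
            return result

        parse_data = tracker.latest_message.parse_data or {}
        ranking = parse_data.get("intent_ranking", [])
        if len(ranking) >= 2:
            gap = ranking[0]["confidence"] - ranking[1]["confidence"]
            if gap < self.max_gap:
                idx = domain.index_for_action(self.disambiguation_action_name)
                result[idx] = 1.0  # outvote the other policies
        return result
```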

@souvikg10 Thanks for the tip.

If I understand your suggestion correctly, it would correspond to my option 2. I.e. the predicted FallbackAction would be written to the tracker and then featurized for the next prediction, at which point we reach a state that has never been seen during training, and the policy ensemble can’t make any sensible prediction.

Here your predicted disambiguation action can be written to the tracker, but that is something you can potentially omit. I am wondering whether fallback actions are written to your tracker and then featurized, because otherwise it will be difficult to write stories for that. I think it goes back to the previous state.

@benjamin-work if the predicted action is ambiguous, I would assume there’s probably a problem in your stories. As for the fallback policy disambiguation for NLU, we don’t have any implementation yet; we’re still brainstorming how best to handle this.

Do you know the best way to achieve this when using Rasa Core via the HTTP API? Would you need to implement a custom featurizer?

This was my concern as well.

No matter how good your stories are, I would assume there is always the possibility of encountering an ambiguous case. Do you have any better suggestion for how to deal with those when the NLU is not the problem?

I looked into the fallback policy to see whether its outcome is saved to the tracker. It seems like it is.

That raises the question of whether a fallback action is featurized for the next prediction, since the MaxHistoryTrackerFeaturizer will take the most recent history.

In that case, wouldn’t the Memoization policy stop working the moment a fallback action is triggered due to some mistake? Isn’t it possible to ignore the fallback in the tracker?

@akelad

Well, the FallbackAction gets added to the tracker, but the default one also immediately returns a UserUtteranceReverted() event, which reverts the tracker to the state just before the user entered the utterance that caused the fallback.
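
For illustration, a custom clarification action that behaves the same way could look roughly like this (a sketch, assuming the rasa_core_sdk action server; the action name and message are placeholders):

```python
from rasa_core_sdk import Action
from rasa_core_sdk.events import UserUtteranceReverted


class ActionAskRephrase(Action):
    """Fallback-style action: ask the user to rephrase and revert the turn."""

    def name(self):
        return "action_ask_rephrase"  # hypothetical, would go into the domain

    def run(self, dispatcher, tracker, domain):
        dispatcher.utter_message("Sorry, I didn't get that – did you mean A, "
                                 "B, or something else?")
        # revert the tracker to the state just before the ambiguous user
        # utterance, so it never gets featurized for the next prediction
        return [UserUtteranceReverted()]
```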

@benjamin-work well, it gets less and less likely the more stories you add. So my best suggestion is to add more stories. I don’t think the user should be allowed to choose which action should be predicted, as they don’t know the correct behaviour. The only thing that should be allowed is maybe correcting the intent of their message.


That makes sense. Thanks

Thanks for the answers.

So to recap: when it is uncertain what to do next, we should use the FallbackAction, which asks the user to reformulate their last message. There is no need to do anything special (when using the Rasa Core HTTP API) because the tracker is reverted to the state before the ambiguous utterance.
