Hi all, I’m new to Rasa and I had a question about stories. I just ran `rasa init --no-prompt` and got the Moodbot project. I noticed one of the stories was ‘say goodbye’, which utters goodbye when the user sends a ‘goodbye’ intent. I deleted this story, trained with `rasa train`, started the bot, and said something from the ‘goodbye’ intent. The bot responded with the same thing as before I deleted that story. The only other place the bot can utter goodbye is at the END of a different story.
My question is, are stories followed in order? Why can the bot still utter_goodbye at the very beginning of our conversation even though that isn’t a standalone story and it only appears at the END of sad path 2? Don’t I have to follow sad path 2 to get the bot to utter goodbye? And if the bot can jump around and reply with ‘goodbye’ because it sees it in another story, what is the point of creating a standalone story just for goodbye?
I’d appreciate any help, thanks!
Hi @juncrendi, thank you for your question!
By default, Rasa Core (the component that predicts the next action and learns from stories) is configured with the TED policy, which is a machine learning algorithm. Moodbot is very simple, and TED might be able to generalize correctly from just those few examples, even though it has never seen this exact dialogue. Once you introduce more intents and actions and make your bot more complex, you’ll need more stories to train on so that this kind of generalization still works.
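For reference, this is roughly what the policy section of a Rasa 1.x `config.yml` looks like (the exact contents of your generated file may differ slightly):

```yaml
# config.yml (policy section) -- a typical Rasa 1.x setup
policies:
  - name: MemoizationPolicy   # exactly replays training stories it has memorized
  - name: TEDPolicy           # machine-learns to generalize beyond the stories
    max_history: 5
    epochs: 100
```

So both behaviours are in play at once: the memoization policy handles conversations that match a story exactly, and TED handles everything else.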
Hello @j.mosig, thanks for your response; however, I am not sure I fully understand.
What you are saying is that it is able to jump around because there are not enough intents and stories? I was confused by how it was able to jump to the last intent of the story without following the first parts of the story. Is a story not a rigid structure that Rasa has to follow? If it isn’t a rigid structure, what is the point of having stories?
It is pretty much impossible to write down every possible story that your assistant might encounter in an interaction with a real user, even for relatively simple bots. Therefore, Rasa by default (though this is configurable) does two things with stories. First, if the conversation it is having with a user exactly corresponds to one of the training stories, then it does exactly what the story says. But second, if the user does something unexpected (which will be the case most of the time, even if you have thousands of stories), then it “extrapolates” using machine learning, so in a sense it guesses what it should do in this unprecedented situation. The more stories you have, the better this extrapolation works.
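To make that concrete, here is roughly what the two Moodbot stories in question look like in the Rasa 1.x Markdown story format (paraphrased, so the exact names in your `data/stories.md` may differ):

```md
## say goodbye                 <!-- the standalone story that was deleted -->
* goodbye
  - utter_goodbye

## sad path 2                  <!-- goodbye still appears at the end here -->
* greet
  - utter_greet
* mood_unhappy
  - utter_cheer_up
  - utter_did_that_help
* deny
  - utter_goodbye
```

Even with the standalone story gone, the tail of sad path 2 still teaches the model that a `goodbye` intent is followed by `utter_goodbye`, and TED can generalize that pattern to other points in a conversation.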
By the way, the distinction between “rigid structure” and “machine learning” will become clearer in Rasa 2.0, which is in alpha right now.
@j.mosig thank you so much for the response, the extrapolation explanation made it more clear for me.
@j.mosig Is there a way in rasa to check the confidence of what the utterance will be? In the same way I run ‘rasa shell nlu’ to get the confidence of the intent, is there a way to get the confidence of which utterance will be produced based on the input?
What is the major change coming in Rasa 2.0 for this?
@juncrendi Do you mean the confidence in the action prediction? That should come along with predictions from the policy (e.g. TED). You can easily create a fallback in case confidence is low, see https://rasa.com/docs/rasa/core/fallback-actions/#fallback-policy
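For example, a fallback setup in `config.yml` looks roughly like this (the threshold values here are just illustrative; tune them for your bot):

```yaml
policies:
  - name: FallbackPolicy
    nlu_threshold: 0.4                # minimum intent-classification confidence
    core_threshold: 0.4               # minimum action-prediction confidence
    fallback_action_name: "action_default_fallback"
```

If either confidence falls below its threshold, the fallback action runs instead of the low-confidence prediction.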
@sibbsnb We’re going to replace and unify all the non-machine-learning policies, such as memoization, fallback, and mapping, into a single new rule system. So you can write rules for fixed behaviour and stories for learned behaviour. Both rules and stories are going to look quite similar, except that rules have some extra notation that lets you specify under what circumstances the rule should trigger. Check out our first alpha release to learn more: GitHub - RasaHQ/rasa at 2.0.0a1. Or the blog post: What’s Ahead in Rasa Open Source 2.0. Lots of other new things will be there too, and we’re keen to hear from you if you have feedback.
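As an illustration, in the 2.0 alpha the fixed “goodbye” behaviour from this thread could be written as a rule rather than a story (alpha syntax, so it may still change before release):

```yaml
# rules.yml (Rasa 2.0 alpha syntax -- subject to change)
rules:
- rule: Say goodbye anytime the user says goodbye
  steps:
  - intent: goodbye
  - action: utter_goodbye
```

Unlike a story, a rule like this is applied deterministically whenever its condition matches, rather than being generalized from by the model.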
@j.mosig, thanks for that page, it has a lot of good info. I am looking for a way to see what the confidence of the action prediction is. In the fallback policy, we have to explicitly state the `core_threshold`. I am looking for a way to extract the exact confidence value of the next action. I am using the Agent API to extract the intent classification prediction, but I couldn’t find a way to extract the confidence of the next action prediction.
awesome, saw the announcement yesterday
@juncrendi You can access the last event in the tracker in a custom action with `tracker.events[-1]` and then look for the `intent_ranking` data in there. But Rasa X also shows the confidence scores, I believe, so the easiest option would be to talk to your bot and review conversations in Rasa X. See Review Conversations.
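If you want to dig the ranking out yourself, the idea is just to walk the tracker’s event list backwards to the most recent user event. Here is a minimal self-contained sketch using plain dicts shaped like serialized tracker events (in a real custom action you would iterate over `tracker.events` from `rasa_sdk` instead; the sample event data below is made up):

```python
# Hedged sketch: pull intent_ranking out of the most recent user event.
# The dicts below only mimic the shape of serialized Rasa tracker events.

def latest_intent_ranking(events):
    """Return the intent_ranking of the most recent 'user' event, or []."""
    for event in reversed(events):
        if event.get("event") == "user":
            return event.get("parse_data", {}).get("intent_ranking", [])
    return []

# Example event stream, shaped like a serialized tracker
events = [
    {"event": "action", "name": "action_listen"},
    {
        "event": "user",
        "text": "bye",
        "parse_data": {
            "intent_ranking": [
                {"name": "goodbye", "confidence": 0.93},
                {"name": "greet", "confidence": 0.04},
            ]
        },
    },
]

top = latest_intent_ranking(events)[0]
print(f"{top['name']} ({top['confidence']:.2f})")  # -> goodbye (0.93)
```

Note this gives you NLU (intent) confidences; for the confidence of the next *action* prediction, running the bot with `rasa shell --debug` also prints the policy’s prediction details to the log, which can be handy for spot-checking.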