Stories and conversations - is my mental model right?

Hello all, first post. Hopefully appropriate, please let me know if not.

My goal: confirm my mental model of a conversation aligns correctly with rasa.

I’ve read through the docs, installed core + NLU & have the quickstart running locally. Per instructions I ran the bot from the command line:

python -m rasa_core.run -d models/dialogue -u models/current/nlu

If I enter text that maps directly to the stories (e.g. starting with “hi”), the bot responds as expected. If I don’t - for example, starting with “banana” - the bot hangs. That wasn’t what I expected. I’m guessing that’s because the example doesn’t have any “fallback” behaviour. First off, is that right?

Secondly, how does rasa deal with state as a conversation progresses? Let’s say I’m part way through a story - and then enter an utterance that maps to some part of another story.

  • Does rasa core switch stories?
  • Is the state for the original story maintained? If so, how does it “come back”? Perhaps by entering an utterance that maps to the original story?

I hope that makes sense. My reference point is building bots in other toolkits (e.g. Microsoft bot sdk + luis) where the conversation is essentially managed as a finite state model. I’m trying to understand how the rasa model aligns/differs from that.


PS: this is my second visit to rasa, having looked several months ago. Installation was much easier this time, and the docs have improved a lot - great job.

Hi, can I ask what you expected after you entered "banana"?

For the second part: rasa_core stores the conversation history in a DialogueStateTracker object. Prediction of the next action, and the "coming back" behavior, depend on the policies you choose for your bot. By default these are MemoizationPolicy, which simply looks for exactly the same order of events as in the training stories within a certain max_history, and KerasPolicy, which predicts the next action with an LSTM-based model trained on your training stories.
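To make the memoization idea concrete, here's a toy sketch (all names are hypothetical; this is not rasa_core's actual implementation): picture a lookup table keyed by the last max_history events, built from the training stories. If the current history matches a memorized window, the next action is returned; otherwise there's no prediction.

```python
# Toy illustration of a memoization-style dialogue policy.
# All names here are hypothetical -- this is not rasa_core's code.

MAX_HISTORY = 2

# Training "stories": sequences of alternating user intents and bot actions.
training_stories = [
    ["greet", "utter_greet", "ask_weather", "utter_weather"],
]

def build_lookup(stories, max_history):
    """Map each window of up to `max_history` events to the action that follows it."""
    lookup = {}
    for story in stories:
        for i in range(1, len(story)):
            key = tuple(story[max(0, i - max_history):i])
            lookup[key] = story[i]
    return lookup

def predict_next(lookup, history, max_history):
    """Return the memorized next action, or None if this history was never seen."""
    key = tuple(history[-max_history:])
    return lookup.get(key)

lookup = build_lookup(training_stories, MAX_HISTORY)
print(predict_next(lookup, ["greet"], MAX_HISTORY))   # utter_greet
print(predict_next(lookup, ["banana"], MAX_HISTORY))  # None -- no memorized match
```

The "banana" case falls out naturally: an unseen history has no entry in the table, so a pure memoization approach has nothing to say, and a second (generalizing) policy like KerasPolicy has to pick up the slack.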


Good question. My initial expectation was that the input would be rejected in some way (“sorry I don’t understand”). When that didn’t happen, I rationalised it as “ahh - I haven’t set up a default response”. So that seemed reasonable. However, I didn’t (don’t) expect it to hang - which is what seems to happen. If I enter bananas there’s no response but the console remains active. If I then type hi there’s still no response. The only way to recover - that I’ve found so far - is to kill the process (Ctrl-C) and start again.

Thanks, that’s really helpful. I’ll go take a look.


The hanging happens because the bot predicts action_listen. You're totally right that this is wrong behavior. We have an issue for it, and we're planning to work on it soon.
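Until that's addressed, the "sorry, I don't understand" behavior you originally expected is usually implemented as confidence-based fallback logic. A toy sketch of the idea (hypothetical names and threshold, not rasa_core's actual API): if the NLU confidence is below a threshold, or the intent is unknown, respond with a default utterance instead of silently listening.

```python
# Toy sketch of confidence-based fallback.
# Names and threshold are hypothetical, not rasa_core's actual API.

FALLBACK_THRESHOLD = 0.3

def choose_action(intent, confidence, known_intents):
    """Fall back to a default utterance when NLU is unsure or the intent is unknown."""
    if confidence < FALLBACK_THRESHOLD or intent not in known_intents:
        return "utter_default"  # e.g. "Sorry, I didn't understand that."
    return "respond_to_" + intent

known = {"greet", "goodbye"}
print(choose_action("greet", 0.9, known))    # respond_to_greet
print(choose_action("banana", 0.12, known))  # utter_default
```

With something like this in place, "banana" would get an explicit "I don't understand" reply rather than a hang.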