Pause story and continue where you left off

Hi, I was wondering if it’s possible to “pause” a story to jump to another story and then go back to where you left off in the first one (asking the pending question again).

This sounds vague, so let me give an example:

Right now, I have a bot that wants to guide the user, so it asks the user questions. However, the user can also ask questions. Whenever the user asks a question, the bot should answer it, but then go back to where it left off (ask its own question again) and continue with the guiding questions.

Example conversation:

  • Bot: Hi, I would like to know your nationality for further guidance. Are you a Dutch student or an international student?
  • User: I am Dutch
  • Bot: Cool! And when are you planning to start your studies?
  • User: This year.
  • etc.

During this conversation, the user can ask a question. Example:

  • Bot: Hi, I would like to know your nationality for further guidance. Are you a Dutch student or an international student?
  • User: How do I enroll for the study?
  • Bot: You can enroll by …
  • Bot: Are you a Dutch student or an international student?
  • User: I am Dutch
  • Bot: Cool! And when are you planning to start your studies?
  • User: This year.
  • etc.

Does anybody know if this is possible?


Use MappingPolicy for this.
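Roughly, that means enabling MappingPolicy in your policy configuration and mapping the FAQ intent to its answer via triggers in the domain. A minimal sketch, using the enroll FAQ from your example (the intent and utterance names are placeholders):

# config.yml
policies:
  - name: "MappingPolicy"
  - name: "MemoizationPolicy"
  - name: "KerasPolicy"

# domain.yml
intents:
  - enroll:
      triggers: utter_enroll

With this in place, whenever the NLU model classifies the enroll intent, MappingPolicy predicts utter_enroll directly, without your stories having to cover it.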

Thanks for your reply, I will take a look at this tomorrow!

It does answer an FAQ when asked, but then it also continues the story… For the example story, it now looks like this:

  • Bot: Hi, I would like to know your nationality for further guidance. Are you a Dutch student or an international student?
  • User: How do I enroll for the study?
  • Bot: You can enroll by …
  • Bot: Cool! And when are you planning to start your studies?
  • User: This year.
  • etc.

Thus, the bot treats the FAQ as if it were also an answer to the question of whether the person is international or Dutch…

EDIT: I added the FAQs to the story (just one intent, which is the question, and one utterance, which is the answer), and now it no longer treats the FAQ as an answer to the bot’s question, but it seems to stop the story. Example:

  • Bot: Hi, I would like to know your nationality for further guidance. Are you a Dutch student or an international student?
  • User: How do I enroll for the study?
  • Bot: You can enroll by …
  • User: I am Dutch
  • Bot: I am sorry, I do not understand you [utter_default]

Don’t do that. MappingPolicy ALWAYS maps the ‘how do I enroll’ intent to the answer ‘you can enroll by…’. It will just confuse the chatbot if you train on stories that contain your mapped intents.

So write your stories as you think they would happen without the small talk, and MappingPolicy will handle the small talk for you.

Ah, thanks. I only tried it to see what would happen, because when I don’t have them in my stories, it still doesn’t work as expected.

It’s the same thing as before, though, and even weirder things happen… The bot always answers the FAQ, but then it either:

  • thinks the FAQ was also an answer to its own question, so it continues the story one step too far
  • waits for a response to its question, but when I answer, says it does not understand me
  • waits for a response to its question and continues the story according to that answer, which is exactly what I want!

And these three things happen at random during the conversation. How is it possible that it sometimes works and sometimes doesn’t?

I am using checkpoints, by the way. Can that mess things up?

Okay, so first of all, as a general note: Did you retrain the model without the ‘FAQ’ stories?

Second: can you post a screenshot of your log file where this case is handled?

Same as above.

Same as above; this should be the intended behaviour.

It’s never random :wink: You’ll have to double-check your stories, I’d guess.

It shouldn’t. As long as MappingPolicy handles the question, the conversation state gets reverted to before the answer. It should behave as if the ‘small talk’ never occurred.

So, to sum up: please post screenshots of your bot handling these wrong cases!

I did retrain the model. I do after every small edit.

Please note that the GUI still needs to be fixed: it puts two utterances in one message. In the first picture you can see this: after the link, it says “Another thing to look at in yourself is…”, which is a new utterance, so the bot assumed “how do I enroll” was an answer to its question.


And this is to show how the flow should behave without asking an FAQ:

And thanks for wanting to help me :slight_smile:

Edit:

It even seems to behave differently every time I retrain… (again, the GUI is not correct yet, but you can see two utterances in one message here)


Hi Nikki, thanks for the screenshots, that clears things up regarding the actual flow.

However, I might not have been clear enough about pulling up the log output. Could you open a terminal (command prompt) and use the command:

python -m rasa_core.run -d MAPWHEREYOURCOREMODELIS -u MAPWHEREYOURNLUMODELIS --debug

Obviously, replace the capitalized placeholders with the paths to the folders containing your Core and NLU models.

Alternatively, you can search for a file called rasa_core.log to open up the log itself. This is more tedious, however.

With this command you can troubleshoot what is going wrong. You can send screenshots of that, or upload the log file if you need help.

Ah, sorry. Here is the log file:

output.txt (25.9 KB)

It says “Feature ‘entity_student_nationality’ (value: ‘1.0’) could not be found in feature map. Make sure you added all intents and entities to the domain”. All my slots are entities, or is that not what it means? I have an entity called ‘student_nationality’, but what is the ‘feature map’?

Alright, when I look at this, 2 things stand out:

  1. After using MappingPolicy (which it does correctly), it switches over to KerasPolicy. It should go to MemoizationPolicy. What is your max_history setting? It should be about 10 for your purposes, I think. You can also try AugmentedMemoizationPolicy, as it stitches together more different use cases. You can set the augmentation factor in your training config.
  2. The feature map is what Rasa uses to set slots in accordance with the stories it trains on. Have you set the entity ‘student_nationality’ to unfeaturized? That keeps it out of the feature map. If you want your users to be able to enter any country that matches a real country, consider using ‘text’ as the slot type; see the sketch below.
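For reference, slot types are set in the domain file. A rough sketch of the two options I mean, using the slot name from your error message:

slots:
  student_nationality:
    type: text          # featurized only by whether it has a value

# or, if the slot should not influence action prediction at all:
slots:
  student_nationality:
    type: unfeaturized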

I was not aware of any max_history setting, so it was at its default, I guess. I have now set it to 10, if I did it correctly. I added this to the config file:

  - name: "MemoizationPolicy"
    max_history: 10

However, this did not change anything.

I had set ‘student_nationality’ to categorical (either ‘dutch’ or ‘international’), as I only care about whether they are Dutch or not. I could of course make it a boolean; not sure if that matters for Rasa? But setting that slot to unfeaturized did not change anything in the conversation either…

Setting it to categorical will bring some problems of its own, though.

Doing this means the user can only say ‘dutch’ or ‘international’ to the bot. Anything else will send it reeling. You’d need to add all nationalities other than Dutch as synonyms for ‘international’ to make that work.
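In the Markdown NLU training data format, that would look roughly like this (the nationalities listed are just examples, and you would still need training examples where these values are annotated as the entity):

## synonym:international
- german
- belgian
- french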

You may also need to change MemoizationPolicy into AugmentedMemoizationPolicy and use an augmentation factor, so the bot trains on those stitched-together stories as well.

Did you make sure to delete all stories containing your mapped intents?

I changed all my categorical slots to booleans, but that didn’t change anything.

I get one warning message during training which I hadn’t seen before:

..../AppData\Local\Programs\Python\Python36\lib\runpy.py:125: RuntimeWarning: 'rasa_core.train' found in sys.modules after import of package 'rasa_core', but prior to execution of 'rasa_core.train'; this may result in unpredictable behaviour

It sounds important, but I’m not sure what I need to change to fix it?

Setting the augmentation_factor to 0 or 10 made no difference. I did it like this in the config file:

  - name: "AugmentedMemoizationPolicy"
    augmentation_factor: 0

I have deleted all stories containing my mapped intents and have now created just one simple story:

## simple story
* mood_deny
    - utter_greet
* greet
    - utter_ask_student_nationality
* mood_affirm
    - utter_greet
* goodbye
    - utter_goodbye

When I ask a question at some random point, it answers it. But when I then answer the bot’s question, it says it doesn’t understand me…

Just goes to show developing bots is harder than it looks!

Don’t get discouraged.

The warning is a known bug. It doesn’t affect your training in any way, as far as any of us know. Just wait for a new Rasa version; that will correct it. In the meantime, don’t worry!

Now this is new behaviour! Great! This means we have new log files to look at to see what’s going on! Can you post those?

Haha, it’s hard not to get discouraged, but it’s a school project so I can’t stop now!

When the bot says it does not understand the user, it’s the ‘utter_default’. Here is the log file:

output2.txt (3.5 KB)

I am also still confused about AugmentedMemoizationPolicy with an augmentation factor versus MemoizationPolicy with a max_history. From what I could find, max_history looks back x steps (x intents? utterances? stories?) to decide what comes next. And augmentation pastes stories together, if I’m correct. (rasa.com seems to be down at the moment, so I could not check.) How do I know what those values should be, and which of the policies should I use?

Huh. The output you provided is just you typing ‘how do i enroll?’ and the bot answering, correct?

So the ‘story’ looks a little like this?

* enroll
    - utter_enroll
    - utter_default

That’s kinda weird. I’d think the trained Core model just doesn’t know what to do, because from the model’s point of view “utter_enroll” suddenly appeared without warning, since you weren’t in a story yet. MappingPolicy reverts the last user statement, so the conversation goes back to the previous state. In your example, it would go back to a state that doesn’t exist, since it cannot revert further back than the start of the conversation. I think you can offset that by adding a short story that’s just:

* enroll
    - utter_enroll

to your story data. This makes it so MappingPolicy will predict utter_enroll, and the machine learning model kicks in afterwards to start predicting the right way again.
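Written out as an entry in a story file, that would be something like this (the story name is arbitrary, but Markdown stories do need a ‘##’ title):

## faq at conversation start
* enroll
    - utter_enroll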

Now for your other questions:

  1. max_history just looks at the history of the conversation: it looks back x intents (and the slots etc. that get set along with them). It’s basically the bot’s memory. You need this if you have long stories. As an example:
  * out_of_scope
      - utter_default
  * out_of_scope
      - utter_default
  * out_of_scope
      - utter_help_message

The bot will only predict this story if max_history is 3 or more. Without that, it will never remember that the user has already asked two out_of_scope questions; it will just loop, because it only remembers the last thing that happened.

  2. Augmentation indeed randomly stitches stories together. This way, your users can navigate all kinds of generated stories that you couldn’t think of yourself.

As a general rule of thumb:

max_history should cover your longest story (so if it has 10 turns, it has to be 10), so you can be sure the bot will remember the whole story.

For the augmentation factor: honestly, I don’t know. It depends entirely on your stories. You should experiment.
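Putting the pieces together, your policy configuration could end up looking roughly like this (a sketch, not gospel; the max_history of 10 is just an assumption based on what we discussed above):

policies:
  - name: "MappingPolicy"
  - name: "AugmentedMemoizationPolicy"
    max_history: 10
  - name: "KerasPolicy"
    max_history: 10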

Maybe that was a bad example then. Here is another one where it happens mid-story:

output3.txt (12.7 KB)

Keep in mind this was the story (the simple one I posted above).

In the example output above, I ask the question after ‘utter_ask_student_nationality’, after which the bot answers it and waits for a response. I then say “yes”, which belongs to ‘mood_affirm’, and then it doesn’t understand me.

Thanks, that helped!

Hmm, Keras kicks in here as well. Let’s ask one of the developers.

@akelad After MappingPolicy, Keras kicks in. Is this correct behaviour?

Just an observation: I wanted to know how Rasa could tell the difference between the intents “I want to join a soccer team” and “I want to join a dream team”, as the sentences are really alike. I thought I might see something in the log (which I didn’t), but I did see that it switches to KerasPolicy after MemoizationPolicy, no matter which question is asked first.

output4.txt (21.6 KB)

Not sure if this helps with figuring out the problem, but I just wanted to let you know.