Hi, I was wondering if it’s possible to “pause” a story, switch to another story, and then go back to where you left off in the original story (asking the question it left off at again).
This sounds vague, so let me give an example:
Right now, I have a bot that guides the user by asking questions. However, the user can also ask questions. Whenever the user asks a question, the bot should respond, but then go back to where it left off (ask its question again) and continue asking guiding questions.
Example conversation:
Bot: Hi, I would like to know your nationality for further guidance. Are you a dutch student or an international student?
User: I am dutch
Bot: Cool! And when are you planning on starting the study?
User: This year.
etc.
During this conversation, the user can ask a question. Example:
Bot: Hi, I would like to know your nationality for further guidance. Are you a dutch student or an international student?
User: How do I enroll for the study?
Bot: You can enroll by …
Bot: Are you a dutch student or an international student?
User: I am dutch
Bot: Cool! And when are you planning on starting the study?
It does answer an FAQ when asked, but it also continues the story… For the example story it now looks like:
Bot: Hi, I would like to know your nationality for further guidance. Are you a dutch student or an international student?
User: How do I enroll for the study?
Bot: You can enroll by …
Bot: Cool! And when are you planning on starting the study?
User: This year.
etc.
Thus, the bot pretends the question was also an answer to the question whether the person is international or dutch…
EDIT:
I added the FAQs to the story line (just one intent, which is the question, and one utter, which is the answer). Now it no longer pretends the question is an answer to the bot’s question, but it seems to stop the story. Example:
Bot: Hi, I would like to know your nationality for further guidance. Are you a dutch student or an international student?
User: How do I enroll for the study?
Bot: You can enroll by …
User: I am dutch
Bot: I am sorry, I do not understand you [utter_default]
Don’t do that. MappingPolicy ALWAYS maps that ‘how do I enroll’ intent to the answer ‘you can enroll by…’. Training stories that contain your mapped intents will just confuse the chatbot.
So write your stories as you think they would happen, without the small talk, and MappingPolicy will handle the small talk for you.
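In Rasa 1.x this setup looks roughly like the following sketch (the intent and response names `enroll` and `utter_enroll` are placeholders for whatever your FAQ is actually called):

```yaml
# domain.yml — map the FAQ intent directly to its answer
intents:
  - enroll:
      triggers: utter_enroll

# config.yml — MappingPolicy must be listed among the policies
policies:
  - name: MappingPolicy
  - name: MemoizationPolicy
  - name: KerasPolicy
```

With the `triggers` mapping in place, the FAQ intent is answered by MappingPolicy directly and should not appear in any story.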
Okay, so first of all, as a general note: Did you retrain the model without the ‘FAQ’ stories?
Second: can you post a screenshot of your log file where this case is handled?
same as above
same as above, this should be the intended behaviour.
It’s never random. You’ll have to double-check your stories, I’d guess.
It shouldn’t. As long as MappingPolicy handles the question, the conversation state gets reverted to before the answer. It should behave as if the ‘small talk’ never occurred.
So, in summary: please post screenshots of your bot handling these wrong cases!
I did retrain the model. I do after every small edit.
Please note that the GUI still needs to be fixed: it puts two utters in one message. In the first picture you can see this as after the link it says “Another thing to look at in yourself is…”, which is a new utter, thus assuming “how do i enroll” was an answer to the question.
It says “Feature ‘entity_student_nationality’ (value: ‘1.0’) could not be found in feature map. Make sure you added all intents and entities to the domain”. All my slots are an entity, or is that not what it means? I have an entity called ‘student_nationality’, but what is the ‘feature map’?
After using MappingPolicy (which it does correctly), it switches over to KerasPolicy when it should go to MemoizationPolicy. What is your max_history setting? It should be about 10 for your purposes, I think. You can also try AugmentedMemoizationPolicy, as it stitches together more different use cases. You can set the augmentation factor in your training config.
The feature map is what Rasa uses to set slots in accordance with the stories it trains on. Have you set the entity ‘student_nationality’ to unfeaturized? That prevents it from being set in the feature map. If you want your users to be able to enter any real country name, consider using ‘text’ as the slot type.
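For reference, the slot types mentioned here would look roughly like this in `domain.yml` (a sketch, reusing the `student_nationality` name from this thread):

```yaml
slots:
  student_nationality:
    type: categorical
    values:
      - dutch
      - international
  # alternatively, to accept any free-form value
  # without influencing the dialogue predictions:
  #   type: unfeaturized
  # or, to featurize only whether the slot is set:
  #   type: text
```

A `text` slot only tells the dialogue model *whether* the slot has a value, not *which* value, while a `categorical` slot feeds the specific value into the predictions.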
I was not aware of any max_history setting, so it was left at the default, I guess. I just set it to 10, if I did it correctly. I added this to the config file:
- name: "MemoizationPolicy"
max_history: 10
However, this did not change anything.
I had set ‘student_nationality’ to categorical (either dutch or international), as I only care about whether they are Dutch or not. I could of course make it a boolean; I’m not sure whether that matters for Rasa. But setting that slot to unfeaturized did not change anything in the conversation either…
Though setting it to categorical will bring some problems of its own.
Doing this means that the user can either say ‘dutch’ or ‘international’ to the bot. Anything else will send it reeling. You’d need to add all other nationalities other than dutch as synonyms for ‘international’ to make that work.
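In Rasa 1.x NLU training data (markdown format), mapping other nationalities onto ‘international’ would look something like this sketch (the listed values are just examples, not a complete list):

```md
## synonym:international
- german
- belgian
- french
- spanish
```

Any extracted entity value listed here gets normalized to ‘international’ before the slot is set, so the categorical slot only ever sees its two known values.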
You may also need to change MemoizationPolicy into AugmentedMemoizationPolicy and use an augmentation factor, to make the bot train for those stories as well.
Did you make sure to delete all stories containing your mapped intents?
I changed all my categorical slots to booleans, but that didn’t change anything.
I get one warning message during training which I hadn’t seen before:
..../AppData\Local\Programs\Python\Python36\lib\runpy.py:125: RuntimeWarning: 'rasa_core.train' found in sys.modules after import of package 'rasa_core', but prior to execution of 'rasa_core.train'; this may result in unpredictable behaviour
It sounds important, but I’m not sure what to change to fix it.
Setting the augmentation_factor to 0 or 10 made no difference.
I did it like this in the config file:
Just goes to show developing bots is harder than it looks!
Don’t get discouraged.
The warning is a known bug. It doesn’t affect your training in any way, as far as any of us know. Just wait for a new Rasa version; that’ll correct it. In the meantime, don’t worry!
Now this is new behaviour! Great! This means we have new log files to look at to see what’s going on! Can you post those?
I am also still confused about AugmentedMemoizationPolicy with an augmentation factor versus MemoizationPolicy with a max_history. From what I could find, max_history lets the policy look back x steps (x intents? utters? stories?) to decide what comes next, and augmentation pastes stories together, if I’m correct. (rasa.com seems to be down at the moment, so I could not check.) How do I know what those values should be, and which of the policies to use?
Huh. The output you provided is just you typing ‘how do I enroll?’ and the bot answering correctly, right?
so the ‘story’ looks a little like this?:
enroll
utter_enroll
utter_default
That’s kinda weird. I’d think the trained core model just doesn’t know what to do because, to the model, “utter_enroll” suddenly appeared without warning, since you weren’t in a story yet. MappingPolicy reverts the last user statement and goes back to that conversation state. In your example it goes back to a state that doesn’t exist, since it cannot revert back further than the start of the conversation. I think you can offset that by adding a short story that’s just:
enroll
utter_enroll
In your story data. This makes it so MappingPolicy will predict utter_enroll, and the machine learning model kicks in afterwards to start predicting the right way again.
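In Rasa 1.x markdown story format, that short story would look something like this sketch (the story title is arbitrary):

```md
## enroll faq at conversation start
* enroll
  - utter_enroll
```

This gives the model at least one training example of the conversation opening with the FAQ, so the reverted state after MappingPolicy handles it is one the model has actually seen.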
Now for your other questions:
max_history just looks at the history of the conversation: it looks back x turns (the intents, and the slots etc. that get set within them). It’s basically the bot’s memory. You need this if you have long stories. As an example:
out_of_scope
utter_default
out_of_scope
utter_default
out_of_scope
utter_help_message
The bot will only predict this story if max_history is 3 or more. Otherwise it will never be able to remember that the user has already sent out_of_scope twice; it’ll just loop, because it only remembers the last thing that happened.
Augmentation indeed randomly stitches together stories. This way, your user can navigate all kinds of generated stories you yourself can’t think of.
As a general rule of thumb:
Max_history should be your longest story (so if it has 10 turns it has to be 10) so you can be sure it will remember the whole story.
For the augmentation factor: I don’t know. It depends fully on your stories; you should experiment.
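Put together, a config reflecting these rules of thumb might look like this sketch (the max_history of 10 assumes your longest story has 10 turns; in Rasa 1.x the augmentation factor is typically passed at training time, e.g. via the `--augmentation` flag of the train command, rather than in this file):

```yaml
# config.yml — policy sketch
policies:
  - name: MappingPolicy
  - name: AugmentedMemoizationPolicy
    max_history: 10
  - name: KerasPolicy
```

MappingPolicy is listed first so FAQ intents are handled before the memoization and machine-learning policies get a say.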
In the example output above I ask the question after ‘utter_ask_student_nationality’, after which it answers and waits for a response. I then say “yes”, belonging to ‘mood_affirm’ and then it doesn’t understand me.
Just an observation: I wanted to know how Rasa could figure out the difference between the intents “I want to join a soccer team” and “I want to join a dream team”, as the sentences are really alike. I thought I might see something in the log (which I didn’t), but I did see that it switches to KerasPolicy after MemoizationPolicy, no matter which question is asked first.