I've found that my intent model seems to memorize the examples provided in nlu.md. In other words, if I enter a question that is only a slight modification of an example from nlu.md, the bot fails to recognize it. Any tips on how to create training examples, or tweak the model, so that it does not simply memorize the provided examples and can generalize to all similar questions?
Rasa is actually designed to recognize new inputs that differ from the provided examples and to map them to the corresponding intent.
Here is an example that I have tried myself:
Examples from the nlu.yml file:
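(The original post showed a screenshot here. As an illustration, a minimal `nlu.yml` could look like the sketch below; the `greet` and `goodbye` intents and their examples are placeholders of mine, not the original poster's data.)

```yaml
version: "2.0"
nlu:
  - intent: greet          # hypothetical intent, for illustration only
    examples: |
      - hello
      - hi there
      - good morning
  - intent: goodbye        # hypothetical intent, for illustration only
    examples: |
      - bye
      - see you later
      - have a nice day
```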
Conversation to test the bot:
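(The screenshot of the test conversation is also missing. As a sketch, an exchange in `rasa shell` with intents like the ones above might look as follows; the bot's replies depend entirely on your domain and responses, so treat this purely as an illustration.)

```text
Your input ->  hey, good morning to you!
Hey! How are you?
Your input ->  ok gotta go now
Bye!
```

Note that neither user message is an exact copy of a training example, yet each is mapped to the right intent.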
As you can see, it identifies the action I want to perform perfectly, even when I provide texts different from the examples.
If you can’t get Rasa to behave like that, it may be because:
- You are not providing enough examples.
- The examples are not adequate to train that intent.
- The examples are not correctly written.
Another option is to modify the pipeline settings in the configuration file (config.yml). Specifically, you could increase the value of the epochs field, which controls how many times the machine-learning model sees each training example during training:
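(The config.yml screenshot from the original post is missing. For instance, with a DIET-based pipeline the epochs value can be raised like this; the surrounding components are an illustrative Rasa 2.x sketch, not the poster's actual configuration.)

```yaml
pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer
  - name: DIETClassifier
    epochs: 200   # default is 100; higher values mean more passes over the data
```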
That said, this would not be my first recommendation: if I am not mistaken, the default value is 100, and that should already be enough for Rasa to detect inputs slightly different from the provided examples and route them to the corresponding intent.
I hope it helps.
In conclusion, I think that in your case, by adding more examples and adjusting the configuration as described above, then running rasa train, you should be able to talk to the chatbot comfortably without having to use the exact examples you trained it on.
Important: remember to run rasa train and restart the Rasa server each time you modify the data, so that the model picks up the changes.
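(The retrain-and-restart cycle boils down to two commands; this assumes a standard Rasa project layout.)

```shell
rasa train   # retrain the model with the updated data
rasa run     # restart the server so it serves the newly trained model
```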
I know this thread is old, but I thought someone might still find these tips useful. I wrote a blog post discussing basic principles for improving your intent detection model. You can find it here: You might be training your chatbot wrong | Everything Chatbots Blog