In this story, the user wants to know their current achievements for a course. As the chatbot cannot tell from the previous conversation which course they mean, it asks the user. Here is the problem: the user then informs the chatbot, e.g. “Introduction to data science”. At this point the chatbot should continue with the checkpoint “get_achievements” and query an API. However, the inform never works: the chatbot tries to classify the intent of this statement and then, for example, answers an FAQ instead of returning the achievements for that course.
So is there a possibility to inform the chatbot about, e.g., the course without evaluating that answer, and then continue with the story?
My intent inform looks like this:
- intent: inform
  examples: |
    - [AI Introduction](course-title)
    - [Introduction to data science](course-title)
    - [Advanced data science](course-title)
    - [Fundamental Questions on AI](course-title)
Feel free to ask questions; it was hard to explain the actual vs. expected behaviour.
@threxx Hi, can you please delete all previously trained models and train again? Please also share the output of rasa --version. Further, please share the next stories for the checkpoint, since a checkpoint is used at the end of one story and at the start of another. Ref: Stories
Thanks for the quick reply. I did delete the models and train again; however, it is still the same problem.
rasa --version
Rasa Version : 2.2.6
Rasa SDK Version : 2.2.0
Rasa X Version : None
Python Version : 3.7.2
Operating System : Darwin-20.6.0-x86_64-i386-64bit
Python Path : /Users/theresa/.pyenv/versions/3.7.2/bin/python3.7
The next story after the checkpoint is:
- story: get achievements and no certificate
  steps:
  - checkpoint: get_achievements
  - action: action_get_achievements
  - slot_was_set:
    - current_course_achieved: false
  - action: utter_anything_else
  - checkpoint: more_information
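The story leading into that checkpoint is roughly this (simplified; the intent and utterance names here are illustrative, not copied verbatim from my files):

- story: ask for achievements
  steps:
  - intent: get_achievements
  - action: utter_ask_course
  - intent: inform
    entities:
    - course-title: "Introduction to data science"
  - checkpoint: get_achievements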
So it works when I use a course title that is already an example of inform (e.g. “Introduction to data science”). However, when I use another (valid) course title that was not part of the inform examples (e.g. “Working with AI Experts”), I get a totally different answer and storyline. Is there a way to have an inform intent with many different course titles and still get the correct storyline?
nlu:
- intent: check_balance
  examples: |
    - how much do I have on my [savings](account) account
    - how much money is in my [checking]{"entity": "account"} account
    - What's the balance on my [credit card account]{"entity": "account", "value": "credit"}
So there are about 120+ courses. Users can work on the courses and thereby make progress. They can ask the chatbot about their progress. An example conversation would be:
User: What's my progress?
Chatbot: What course do you mean?
User: Introduction to AI
Chatbot: [makes an API call with the course name to get the progress]
Chatbot: Your progress is 80%
However, this should not only work with the 10+ examples given to the chatbot as training data but also with all of the other 110+ courses. The thing is that these courses do not follow a generic naming scheme; they are just many different course titles. The course title does not really need to be evaluated by the chatbot, as it is only needed for the API request before continuing with the story. So I either need something like “skip this and continue with the storyline”, or an option to define an intent where none of the examples are similar to each other (since the course titles are unique). In the intent that you listed, the examples are rather similar to each other, but as I said, course titles are just names of courses (which can be anything from IT to Math to Meds).
Does this scenario make sense?
Edit: I know this is a very specific use case, but I thought maybe somebody has had a similar problem and there is an easy solution.
@threxx If you are making an API call, can you check the lookup table concept (NLU Training Data), where you can list the course titles? Did you provide the training examples and check that you were able to get the result?
Note: I assume that in your API call the data is fetched based on course titles? Is that right?
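For example, a lookup table in the NLU training data looks roughly like this (the course names are only placeholders):

nlu:
- lookup: course-title
  examples: |
    - Introduction to AI
    - Introduction to data science
    - Advanced data science

If I remember correctly, you also need an extractor that uses lookup tables (e.g. RegexEntityExtractor) in the pipeline for this to have an effect.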
I implemented a lookup table for the courses. It still does not work the way I want it to. The problem is that lookup tables need a “known set of possible values”, which I do not have at training time.
When the user asks for the progress, the chatbot asks which course they mean and provides a list of the user’s courses. The user then selects one of these courses. The chatbot then queries the progress for that specific course and returns it to the user.
However, this only works if the course title was in the training set / lookup table. As soon as I choose a different title that was not part of the training set, the chatbot continues with / starts a new story. For example, there are several courses about AI (e.g. “Introduction to AI” or “Advanced AI in Med”). Let’s say the first one was part of the training set. If the user chooses that one, the progress is queried and shown. However, when the user chooses the other course, which was not part of the lookup table, the chatbot answers with an explanation of AI (which is part of the FAQs).
So I have two approaches in mind which could solve my problem, but I do not know if Rasa has an implementation for either:
Fill the lookup table with data at the time of the request (with the user’s course list)
Skip intent recognition for course titles and just continue with the next action (query the progress)
I did not come up with any other ideas. However, I think lookup tables are not what will help me here, as there are more than 100 courses which I cannot add to the training set, and they are updated daily.
@threxx Right! One simple question: do you require the course title for fetching the data (progress)? How do you currently manage, or plan to manage, this without mentioning the titles in the training examples? Or do you expect the user to input the course title, which then goes directly into the API call and renders the result? Please elaborate with some bot/usage examples if possible. Thanks. Nik
@threxx On the other hand, a lookup table requires the full list of all 100+ courses; that is how lookup tables work.
@nik202 Yes, exactly. Other use cases with API calls are fetching all courses of a user or enrolling / unenrolling a user in / from a course. Other than that, there are use cases without API calls, which are mostly FAQs: for example, questions about the platform, general questions about AI, and questions about the pricing of courses.
Regarding your edit: so if the course list changes daily, I need to add all (new) courses to the lookup table and train a new model, right? That does not seem very practical.
Right, that is a new requirement you just mentioned, or did I miss it in a previous post?
You can write a small Python script for that, which fetches the new courses from the API and adds them to the lookup table, but yes, I guess you then need to train the model again.
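Something along these lines (the endpoint URL, the response format, and the file path are just examples; adapt them to your setup):

import requests
import yaml

# Hypothetical endpoint that returns a JSON list of course titles,
# e.g. ["Introduction to AI", "Advanced data science", ...]
COURSES_API_URL = "https://example.com/api/courses"
LOOKUP_FILE = "data/lookup_course_titles.yml"

def update_course_lookup():
    titles = requests.get(COURSES_API_URL).json()
    lookup_data = {
        "version": "2.0",
        "nlu": [
            {
                "lookup": "course-title",
                # Rasa expects the examples as one string of "- <value>" lines
                "examples": "".join(f"- {title}\n" for title in titles),
            }
        ],
    }
    with open(LOOKUP_FILE, "w") as f:
        yaml.safe_dump(lookup_data, f, allow_unicode=True, sort_keys=False)

if __name__ == "__main__":
    update_course_lookup()
    # afterwards you still need to retrain: rasa train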
PS: You can also use the concept of slots and SlotSet, send the slot value to an API call, and render the response.
When you watch this video, you can save the slot value or entity values → send them to an API call → get the response → display it in the chatbot. I guess this works with only a few training examples.
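The custom action itself would look something like this (the progress endpoint and the slot name are only examples based on this thread, not a fixed API):

from typing import Any, Dict, List, Text

import requests
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

class ActionGetProgress(Action):
    """Read the course title from a slot and query a (hypothetical) progress API."""

    def name(self) -> Text:
        return "action_get_progress"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        course = tracker.get_slot("course-title")
        # hypothetical endpoint; replace with your real API
        response = requests.get(
            "https://example.com/api/progress", params={"course": course}
        )
        progress = response.json().get("progress")
        dispatcher.utter_message(text=f"Your progress in {course} is {progress}%")
        return []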
That way, whatever the user writes will be treated as a course and set in the slot. On the downside, this means that even if the user says “I don’t want to answer”, that sentence will be stored in the slot without complaint.
Therefore you could add a condition to set the slot only if a certain intent is (or isn’t) detected. On top of that, you can add custom validation if you feel the need for it.
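In Rasa 2.x a form with a from_text mapping would look roughly like this (the form and slot names are taken from this thread; the not_intent is just an example):

forms:
  course_form:
    course-title:
    - type: from_text
      not_intent: deny

With this, everything the user types while the form is active ends up in the course-title slot, unless it is classified as the deny intent; a FormValidationAction can then reject values you don’t want to accept.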
Check the slot configuration you have for course-title
I had a similar issue in the past where it would only accept my training examples as valid input. You may want to make the slot type: text or even type: any, set influence_conversation: false, and set auto_complete: true.
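In the domain that would be something like this (sketch only; double-check the exact option names for your Rasa version):

slots:
  course-title:
    type: text                      # or: any
    influence_conversation: false
    # plus the auto-complete / auto-fill option mentioned above,
    # whose exact key name depends on the Rasa version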
Train it up and see if that helps.
It may not make a difference, but try it just to see.
Do you have this posted to GitHub where I could take a look? It may be how you’re collecting the value in your custom action. You could do this simply with the inform intent too.
action_all_slots only prints the values of all slots to the console. However, this action is never called. The console shows the following (among other things):
2021-10-26 12:07:43 DEBUG rasa.core.policies.rule_policy - Current tracker state:
[state 1] user intent: get_achievements | previous action name: action_listen
[state 2] user intent: get_achievements | previous action name: course_form | active loop: {'name': 'course_form'}
[state 3] user intent: nlu_fallback | previous action name: action_listen | active loop: {'name': 'course_form'}
2021-10-26 12:07:43 DEBUG rasa.core.policies.rule_policy - There is no applicable rule.
2021-10-26 12:07:43 DEBUG rasa.core.policies.ensemble - Execution of 'course_form' was rejected. Setting its confidence to 0.0 in all predictions.
2021-10-26 12:07:43 DEBUG rasa.core.policies.ensemble - Made prediction using user intent.
2021-10-26 12:07:43 DEBUG rasa.core.policies.ensemble - Added `DefinePrevUserUtteredFeaturization(False)` event.
2021-10-26 12:07:43 DEBUG rasa.core.policies.ensemble - Predicted next action using policy_2_RulePolicy.
2021-10-26 12:07:43 DEBUG rasa.core.processor - Predicted next action 'action_default_fallback' with confidence 0.30.
2021-10-26 12:07:43 DEBUG rasa.core.processor - Policy prediction ended with events '[<rasa.shared.core.events.DefinePrevUserUtteredFeaturization object at 0x149198668>]'.
2021-10-26 12:07:43 DEBUG rasa.core.processor - Action 'action_default_fallback' ended with events '[<rasa.shared.core.events.UserUtteranceReverted object at 0x14917b278>]'.
In the config I have added the RulePolicy. Am I doing something wrong?
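The policies part of my config.yml is roughly this (shortened; the thresholds are the defaults as far as I can tell):

policies:
- name: MemoizationPolicy
- name: TEDPolicy
  max_history: 5
  epochs: 100
- name: RulePolicy
  core_fallback_threshold: 0.3
  core_fallback_action_name: action_default_fallback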
@jonathanpwheat Unfortunately, the project is in a private repo. But I can move the parts that matter here to a public GitHub repo for a better demonstration.