Testing conversations involving direct slot updates to the tracker

I’m currently in the process of moving some parts of the conversation over to direct tracker updates. Basically, I update the necessary slots on the tracker directly using the appropriate endpoint, while sending just the bare intent to the bot.

Example:
The bot sees /sample_intent while slot_1 and slot_2 (which were originally extracted as entities and mapped to slots) are directly updated on the tracker.

This has not had a negative impact on my conversation flow; testing the bot manually works perfectly fine.

The issue is that the test stories I wrote earlier (before direct tracker updates were in the picture) are now failing quite often, even though the exact same scenarios work when I test them manually.

Basically, my question is:

How do I modify my test stories / write new ones so that they account for slots being set directly on the tracker?

I figure it has something to do with entities, because removing the listed entities from my training stories seems to have helped bot performance. But of course, there aren’t any entities listed in the test stories, so I need another solution.
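For reference, my test stories currently look roughly like this (the intent is from my example above; the follow-up action is a placeholder). During rasa test nothing sets slot_1 or slot_2, since in the live bot they are set by my external code via the HTTP API, so the predicted next action diverges from the story:

```yaml
stories:
- story: sample flow that now fails
  steps:
  - user: |
      /sample_intent
    intent: sample_intent
  - action: utter_next_step
```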

Thanks in advance.

I’m curious why you are updating the tracker directly and which API endpoint you are using.

Since the slots are not being set via the conversation, I don’t see how you will be able to use the normal testing process.

Hi, @stephens. There are some parts of the conversation in my project that don’t require the bot to actually parse the message to extract entities. The input in these cases always follows the same structure, so it’s easier to use Python to extract these pieces of information and populate the slots directly, rather than hoping that entities will be extracted correctly.

The endpoint I’m using is:

/conversations/<sender_id>/tracker/events?include_events=NONE
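Roughly, the whole flow looks like this; a minimal sketch, assuming a default local Rasa server and the REST channel (the sender ID, slot names, and values are placeholders):

```python
import requests

RASA_URL = "http://localhost:5005"  # assumes a default local Rasa server
SENDER_ID = "some_user"             # placeholder conversation ID

# 1. Force-set the slots directly on the tracker via the HTTP API.
slot_events = [
    {"event": "slot", "name": "slot_1", "value": "value_a"},
    {"event": "slot", "name": "slot_2", "value": "value_b"},
]
requests.post(
    f"{RASA_URL}/conversations/{SENDER_ID}/tracker/events",
    params={"include_events": "NONE"},
    json=slot_events,
)

# 2. Send only the bare intent through the REST channel.
requests.post(
    f"{RASA_URL}/webhooks/rest/webhook",
    json={"sender": SENDER_ID, "message": "/sample_intent"},
)
```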

Since the slots are not being set via the conversation, I don’t see how you will be able to use the normal testing process.

Exactly, that’s where I’m struggling. Of course, I require some sort of test suite, since it’s not possible to manually test every single case every time I retrain the bot…

Basically, if there were a way to force-set slots in the middle of a test conversation, that would solve my problem…

You should use a more standard approach. You can accomplish the Python extraction using custom slot mappings and a slot validation action.
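Roughly like this in Rasa 3; a sketch, assuming the slot has a `type: custom` mapping in the domain and `action_validate_slot_mappings` is listed under `actions:` (the parsing rule is a placeholder):

```python
from typing import Any, Dict, Text

from rasa_sdk import Tracker, ValidationAction
from rasa_sdk.executor import CollectingDispatcher
from rasa_sdk.types import DomainDict


class ValidatePredefinedSlots(ValidationAction):
    # No name() override: ValidationAction runs as the default
    # action_validate_slot_mappings after each user turn.

    async def extract_slot_1(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: DomainDict,
    ) -> Dict[Text, Any]:
        # Deterministic parse of the raw message text; no entity
        # extractor involved. Placeholder rule: take whatever follows
        # the first colon.
        text = tracker.latest_message.get("text") or ""
        value = text.split(":", 1)[1].strip() if ":" in text else None
        return {"slot_1": value}
```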

Not too sure about this… Correct me if I’m wrong, but wouldn’t this still entail using an extractor to probabilistically extract a piece of information that should definitely be there? And hypothetically, if a valid entity fails to be extracted, what then? Given my requirements, it won’t be possible to reconfirm with the user…

I need an implementation that works 100% of the time, irrespective of the values in the user message, so I went with updating the tracker directly. And though this approach solves all my problems, testing becomes an issue.

It’s up to you. You have complete control over this. You can use an entity extractor or not. You could parse the entire message in a custom action. You could also provide metadata with these values and set the slot values in your custom action from the metadata.
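For the metadata variant, a minimal sketch (the action name, metadata keys, and slot names are placeholders); the channel client sends the values as message metadata, and the custom action copies them into slots:

```python
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher


class ActionSetSlotsFromMetadata(Action):
    def name(self) -> Text:
        return "action_set_slots_from_metadata"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Metadata sent by the channel alongside the user message.
        # No probabilistic extraction involved, so this is deterministic.
        metadata = tracker.latest_message.get("metadata") or {}
        return [
            SlotSet("slot_1", metadata.get("slot_1")),
            SlotSet("slot_2", metadata.get("slot_2")),
        ]
```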

@stephens, this seems to be a solution that works in Rasa 3. Any suggestions for Rasa 2.x?

My requirement: a mechanism to optionally bypass the entire Rasa NLU pipeline. Just pass the bot an intent directly (e.g. /my_intent) along with any relevant pieces of information to be set as slots directly.

Custom slot mappings and slot validation are also available in 2.x.
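In 2.x the same idea lives inside a form: a FormValidationAction with an extract_<slot> method acts as a custom slot mapping. A minimal sketch (the form name, slot name, and parsing rule are placeholders):

```python
from typing import Any, Dict, Text

from rasa_sdk import FormValidationAction, Tracker
from rasa_sdk.executor import CollectingDispatcher
from rasa_sdk.types import DomainDict


class ValidateSampleForm(FormValidationAction):
    def name(self) -> Text:
        return "validate_sample_form"

    async def extract_slot_1(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: DomainDict,
    ) -> Dict[Text, Any]:
        # Runs while sample_form is active and parses the raw text
        # deterministically. Placeholder rule: text after the first colon.
        text = tracker.latest_message.get("text") or ""
        value = text.split(":", 1)[1].strip() if ":" in text else None
        return {"slot_1": value}
```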

Oh, I see. But I still don’t get how I can completely bypass the NLU pipeline (entity extraction and all) for certain kinds of inputs using custom mappings and validation…