Supervised Response Selector [Experimental]

We have been working on an exciting new feature which includes two new components: retrieval actions and the supervised response selector. Check out the introduction to the feature, give it a go, and share your feedback with us on the overall feature idea, possible use cases you envision, ease of integration, and any other suggestions you may have.

Hi @dakshvar22, thanks for the feature; it made it easy to handle small talk and interruptions without writing a story for each small talk exchange. Below are a few things I would like to share :slight_smile:

I am using the latest Rasa (1.3.7) and Rasa X (0.21.3) versions. This is how my nlu.md file looks:

[screenshot: nlu.md with the small talk intents]
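
Roughly, the file defines retrieval intents in the intent/sub-intent Markdown format, something like this (illustrative intent and example names, not my exact data):

```md
## intent:chitchat/ask_name
- what is your name?
- who are you?

## intent:chitchat/ask_weather
- how is the weather today?
- is it sunny outside?
```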

and it responds properly for the small talk:

[screenshot: the bot replying correctly to small talk]

But the problem occurs when I test it in Rasa X: I don't see any response from the bot.

I don't know whether Rasa X supports Retrieval Actions as of now, but when I closed Rasa X and checked my nlu.md file, this is what I found:

[screenshot: nlu.md after Rasa X rewrote it]
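
The rewritten file looked roughly like this (a sketch based on what I describe below; the original examples are not recoverable from the screenshot):

```md
## intent:chitchat
- what is your name?
- who are you?
- how is the weather today?
- is it sunny outside?
```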

All the small talk intents were deleted and merged under the single intent chitchat, probably by Rasa X, and this is what I got when I trained the bot:

[screenshot: training output]

The problem is that I lost all the small talk intents and had to rewrite them :sweat_smile:

@JiteshGaikwad Thanks for writing about the problem in detail. Yes, Rasa X / interactive learning does not support Retrieval Actions as of now, since the feature is experimental and we want to gather substantial feedback before fully integrating it with the complete stack.

@dakshvar22, it would have been better if the Rasa docs mentioned that Rasa X doesn't support Retrieval Actions as of now; then I wouldn't have lost the data :slight_smile:

Thanks.

@JiteshGaikwad Completely agree with your suggestion. It will be added to the docs in the next patch release. Thanks

@JiteshGaikwad Regarding the first part of the problem, where you mentioned that the reply from the bot is not displayed: I tested with the same versions of Rasa and Rasa X and it works on my side. Here is what I did:

  1. Created a Rasa project using rasa init
  2. Added some retrieval actions and retrieval intents (see the sketch after this list).
  3. Trained the bot using rasa train
  4. Started Rasa X with rasa x
  5. Tested an example conversation; the retrieval action gets triggered and the correct bot message is displayed.
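
For step 2, the wiring looks roughly like this (a sketch following the feature introduction; chitchat is an illustrative retrieval intent, and by convention the retrieval action is named respond_ plus the retrieval intent name):

```yaml
# domain.yml (sketch)
intents:
  - chitchat
actions:
  - respond_chitchat   # naming convention: respond_<retrieval intent>
```

plus a story that maps the retrieval intent to its retrieval action (* chitchat followed by - respond_chitchat), and ResponseSelector added to the NLU pipeline in config.yml.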

Did you do anything different?

Yes, I had tried the same steps but didn't get the responses. One thing I noticed, though: when I switch tabs within Rasa X, I get the response. Please see the image below:

[screenshot: response appearing after switching tabs]

But this wasn't interactive learning mode; I was talking about Interactive Learning mode.

@dakshvar22, if you see the screenshot here, it's in interactive learning.

@JiteshGaikwad Not sure if I understand you correctly. Were the first screenshots that you uploaded also from the Talk to your bot (Interactive Learning) screen, or were they from that screen after you did some steps in between (like annotating new data, retraining the model from Rasa X, etc.)?

Attaching some screenshots for your reference:

  • When I test it without interactive learning mode: [screenshot]

  • After I click on some other tabs and come back to the chat tab, I get the response: [screenshot]

As you can see, the first time I didn't get the response unless and until I switched the tabs.

Hey @dakshvar22, sorry, I might have confused you. In the first screen I had clicked on "switch to strict conversation mode".

I found out that my Retrieval Actions intents get merged under the single intent chitchat only when I click on "switch to strict conversation mode".

Anyway, thanks for reaching out. I am not using Rasa X for Retrieval Actions for now, until the feature is fully integrated with the stack.

:slight_smile:

@JiteshGaikwad Thanks for finding the exact step. Noted your feedback.

Hi @dakshvar22. Any updates on the longer-term future of the feature, i.e., will it be taken away or radically changed in the near future? I have a bot I'd like to try this feature on to see if there is a performance upgrade over the current setup, but I am hesitant to do so given the experimental nature of the feature and the amount of code changes I would have to make. Does it look like it will go away in the next version? ;)

Also, a couple of quick questions about the modeling part.

  1. Can the embedding layer of the ResponseSelector be shared with the EmbeddingIntentClassifier?
  2. What exactly is the target variable in the training process? i.e., does the response text have any relevance, or does it serve only as a target label?

@thusithaC Thanks for your questions. We are currently collecting feedback on the training data format for retrieval actions because we are not completely sure of it at this moment. Once we have that, we'll make a decision on the next steps for the feature. We are definitely very excited about it and have already collected some constructive feedback. It is tough to comment on the extent of the change it may undergo. Nevertheless, I would recommend trying it out, maybe on a smaller dataset of yours, and commenting on what you like/dislike.

For your specific questions -

  1. No, not as of now. You can use a shared GloVe featurizer through SpacyFeaturizer, but the learnt embedding layers will be different (see the pipeline sketch after this list).
  2. The target variable is the similarity between the user utterance and a candidate bot utterance. The response text and the user utterance text are featurized by an embedding layer, and the similarity between positive pairs is maximized while the similarity between negative pairs is minimized. Feel free to take a look at the diagram in the related blog post: Integrate response retrieval models in assistants built with Rasa
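
To make the featurizer sharing in (1) concrete, a pipeline along these lines shares the pretrained GloVe vectors via SpacyFeaturizer while each classifier still learns its own embedding layer (a sketch; exact component options are up to you):

```yaml
# config.yml (sketch)
language: en
pipeline:
  - name: SpacyNLP                    # loads the spaCy model with GloVe vectors
  - name: SpacyTokenizer
  - name: SpacyFeaturizer             # dense features shared by both components below
  - name: EmbeddingIntentClassifier   # learns its own embedding layer
  - name: ResponseSelector            # learns a separate embedding layer
```

And for (2), the training objective is roughly a margin loss of this shape (a sketch in the spirit of the StarSpace-style loss described in the blog post, not the exact implementation):

$$\mathcal{L} = \max\big(0,\ \mu_{+} - \mathrm{sim}(u, r^{+})\big) + \max\big(0,\ \mu_{-} + \mathrm{sim}(u, r^{-})\big)$$

where $u$ is the embedded user utterance, $r^{+}$ the embedded ground-truth response, $r^{-}$ a sampled negative response, and $\mu_{+}, \mu_{-}$ are margins.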

Thanks for the answer @dakshvar22. Understanding the logic was indeed helpful. I have one more question you might be able to help with. There is always some data cleaning we would like to perform before passing text on to an ML component; i.e., the response text we would like the user to see might have to be cleaned a bit to make it more suitable for training the ResponseSelector component. In the normal pipeline, we can do any cleaning we like by adding a custom component at the top of the pipeline. Is this also possible for the Supervised Response Selector's response text?

@thusithaC Yes, the response text is stored as the response attribute of the Message object inside the NLU pipeline. So the train and process methods of your custom component can act on it and perform any cleaning that's needed; a sketch is below.
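
For example, something along these lines (a minimal sketch against the Rasa 1.x component API; the ResponseTextCleaner name and the cleaning rule are illustrative):

```python
import re

from rasa.nlu.components import Component


class ResponseTextCleaner(Component):
    """Illustrative component that normalizes response text before the
    ResponseSelector sees it; place it early in the pipeline."""

    @staticmethod
    def _clean(text):
        # Collapse runs of whitespace (including non-ASCII spaces) and trim.
        return re.sub(r"\s+", " ", text).strip()

    def train(self, training_data, cfg, **kwargs):
        # Clean the "response" attribute on every training example that has one.
        for example in training_data.training_examples:
            response = example.get("response")
            if response:
                example.set("response", self._clean(response))

    def process(self, message, **kwargs):
        # Clean the attribute in place at parse time if it is present.
        response = message.get("response")
        if response:
            message.set("response", self._clean(response))
```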

Hi @dakshvar22, I'm trying out the ResponseSelector and need some help to get going. I am only interested in Rasa NLU (i.e., understanding the intent of the message and, where applicable, getting the suggested response; I will not be implementing a full bot that acts on the intents), so to date I have not needed stories.md or domain.yml, as I only train an NLU model.

I followed the tutorial and (1) added my FAQ intents to my nlu.md, (2) added the responses.md file to my data folder, and (3) updated my config file to include the ResponseSelector. However, when I try to split my data using the CLI I get an error: “ValueError: No response phrases found for xxx. Check training data files for a possible wrong intent name in NLU/NLG file”.
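
My files follow the tutorial's format, roughly like this (an illustrative faq/ask_hours intent, not my real data), with the intent name in nlu.md matching the entry in responses.md:

```md
<!-- data/nlu.md -->
## intent:faq/ask_hours
- when are you open?
- what are your opening hours?

<!-- data/responses.md -->
## ask hours
* faq/ask_hours
  - We are open 9am to 5pm, Monday to Friday.
```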

My first attempt at a fix was to add an FAQ story to stories.md and to add entries to the intents and actions lists in domain.yml. However, I still get the same error.

The most puzzling thing for me is that if I change nothing in my data or YAML files but rerun my rasa data split nlu command, the intent name in the error message (the xxx in the quotation marks above) changes every time, even though nothing has changed.

Any pointers to help me try out the ResponseSelector? I'm using Rasa 1.3.9.

The issue was not with my setup but with non-ASCII spaces in the responses.md file.
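
In case anyone hits the same thing, a quick throwaway check along these lines finds the offending characters (adjust the path to wherever your responses.md lives):

```python
# Flag whitespace characters outside the plain ASCII set in responses.md.
with open("data/responses.md", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        for ch in line:
            if ch.isspace() and ch not in " \t\r\n":
                print(f"line {lineno}: U+{ord(ch):04X} ({ch!r})")
```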

Thanks @dakshvar22. So far so good. There are a couple of new issues with the ResponseSelector I would like to get some insight into.

  1. Limiting the number of responses after the model/parse stage: I can see that not just the most likely response but the top N intents as well as responses are now part of the result of the NLU model/parse stage. I have a few ResponseSelector components, and after NLU parsing the response object is becoming huge. Is there a way to limit the number of responses in the parse object, maybe to the few with the highest likelihood? I don't mind sub-classing a component if I have to.
  2. Model sizes: in short, my trained model size has gone up from 40 MB to 300 MB. May I know what is causing such an explosion in size? Is there anything we can do to trim it?