Retrieval Actions and Closed Domain QA

Dear All,

I am very excited about the new experimental Retrieval Actions feature, and I would really like it to attain official status. I have a set of simple Q & A that RASA with this feature can handle. However, if the score is too low, can the component trigger a REST API call to a pre-trained BERT model (a closed-domain QA system), with the question as the request body, and send back the response?

Let me elaborate a little: we have a set of Q & A, but we also have an internal “news room” where the user is free to ask any question. So when a question is sent:

  1. It searches the intents to see whether the question matches any of the local RASA intents.

  2. If not, it will try the Response Selector to see whether the question matches any of the “ChitChat” intents.

  3. If the score is low again, it will go through the Action Server, which makes a REST API call with the question in the request body to a pre-trained BERT model trained on this internal “news-room” (see the sketch after this list).

  4. If nothing else matches, it will respond with a negation such as “Sorry, I don’t know how to answer this”.
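Roughly, I imagine step 3 looking something like the sketch below (just an illustration; the service URL and the answer/score fields are placeholders for whatever the BERT QA service would actually expose):

```python
import requests

# Hypothetical news-room QA service; the URL and the "answer"/"score" fields
# are placeholders for whatever the BERT model is actually served behind.
NEWSROOM_QA_URL = "http://newsroom-qa.internal:8000/answer"

def ask_newsroom_qa(question: str) -> str:
    """Send the question to the closed-domain BERT QA service and return its answer."""
    response = requests.post(NEWSROOM_QA_URL, json={"question": question}, timeout=5)
    response.raise_for_status()
    payload = response.json()
    # Step 4: if even the QA model is not confident, fall back to a negative answer.
    if payload.get("score", 0.0) < 0.5:
        return "Sorry, I don't know how to answer this"
    return payload["answer"]
```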

Is this kind of pipelining possible with RASA?

Many Thanks,

Hi, glad you like the new feature. Why don’t you add the internal news-room content to the training data? Otherwise, you can provide a custom action to the FallbackPolicy that will call your API if the Core or NLU confidence is low.
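For example, the FallbackPolicy is configured in config.yml with nlu_threshold, core_threshold and fallback_action_name, and you can point fallback_action_name at a custom action. A minimal sketch of such an action, assuming a placeholder QA endpoint and the hypothetical action name action_query_newsroom:

```python
# actions.py - sketch of a custom fallback action that forwards the user's
# question to an external closed-domain QA service. The action name must match
# the fallback_action_name configured for the FallbackPolicy; the URL is a placeholder.
from typing import Any, Dict, List, Text

import requests
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionQueryNewsroom(Action):
    def name(self) -> Text:
        return "action_query_newsroom"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        question = tracker.latest_message.get("text")
        try:
            result = requests.post(
                "http://newsroom-qa.internal:8000/answer",  # placeholder endpoint
                json={"question": question},
                timeout=5,
            ).json()
            answer = result.get("answer")
        except requests.RequestException:
            answer = None

        dispatcher.utter_message(answer or "Sorry, I don't know how to answer this")
        return []
```

The action simply takes the latest user message, forwards it to your service, and utters whatever comes back (or the negative answer if the call fails).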

Thank you, Vladimir,

The internal news-room is a set of articles (with titles) composed of several paragraphs. The idea is that the answer lies in a specific paragraph somewhere among those articles, which is like searching for a needle in a haystack. A closed-domain BERT model is good at this, so I am thinking of integrating it with RASA to give it a powerful edge.

Thank you for the tip on the FallbackPolicy. I will check it out for sure. Are there any examples of it?

Many Thanks,

Many Thanks Vladimir for your help

Hello, I tried the fallback actions and the concept works. I have set the FallbackPolicy thresholds for NLU and Core to 0.7. However, some questions that belong to the externally trained BERT model (internal news-room) are getting trapped by the Response Selector, which returns a response from one of the “ChitChat” intents instead of going through the FallbackPolicy.

Now, I can increase the fallback threshold to, say, 0.9, but my question is: how do I troubleshoot an input that gets a relatively high confidence for a specific intent even though it does not contain any word suggesting this wrongly returned intent?

Many Thanks

Sorry, I don’t understand what you mean.

Hello Vladimir,

Remember, I have a set of FAQs that I configured using RASA Retrieval Actions, and a pre-trained BERT model for handling another set of questions (the internal news room). I send a question (meant for the BERT model to answer), but it gets handled by the RASA Response Selector, which answers with a confidence of 70%, and the answer is completely irrelevant to the question being asked. I looked at the keywords of the question, and they do not match anything in the Response Selector’s set of FAQs either.

So my question was: how do I troubleshoot this intent classification for the Response Selector? In other words: why am I getting a 70% confidence for the intent ask_faq even though the question does not match (word for word) any question inside the Response Selector?

I hope I made it clearer. Best regards,

Hello, I have started rasa shell nlu, and here is the output. Responses were masked for confidentiality reasons.

How can I prevent the Response Selector from returning the default response, which is AAA, and only return responses that have a confidence higher than, say, 80%? Is there a parameter for that?
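In case it helps with the troubleshooting, the same confidences can also be pulled out programmatically. A minimal sketch, assuming the model is served with rasa run --enable-api on the default port; the ask_faq key and the example text are placeholders from my setup, and the exact shape of the response_selector block may differ between Rasa versions:

```python
import requests

# Ask the running Rasa server to parse one message and print the confidences,
# including what the Response Selector picked for the retrieval intent.
parse = requests.post(
    "http://localhost:5005/model/parse",
    json={"text": "question that should go to the news-room model"},
).json()

print(parse["intent"])  # top intent and its confidence
for candidate in parse.get("intent_ranking", []):
    print(candidate["name"], candidate["confidence"])

selector = parse.get("response_selector", {}).get("ask_faq", {})
print(selector.get("response", {}).get("confidence"))
```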

Thank you for the clarification. This is a typical problem with classifiers: they operate in the space of defined intents and try to pick one of them, quite often with high confidence, even when they are wrong.

One possible way to mitigate it is to create an explicit out_of_scope intent that contains the phrases that should be covered by your BERT model, and then use both this intent (to predict your custom action) and the FallbackPolicy.

Thanks Vladimir,

It is good practice to set up out_of_scope; however, my dataset for this BERT model is really big and constantly changing, so I cannot keep up.

What I ended up doing is modifying embedding_response_selector.py and adding this logic in the process function:

```python
if rs["ask_faq"]["response"]["confidence"] < 0.7 and intent["name"] == "ask_faq":
    message.set("intent", {"name": "ask_faq", "confidence": 0.0}, add_to_output=False)
else:
    ## Default behavior
    self._set_message_property(message, prediction_dict, selector_key)
```

The thing about the Response Selector is that when a question meant for the BERT model is asked, it always picks one of the questions it knows, even though the confidence of the chosen answer is really low (0.2). That answer still ends up being returned, because the parent intent ask_faq is detected with a high confidence (0.9), which is why it never goes through the fallback action.

Now, I have my workaround, but is there a possibility of adding an nlu_threshold parameter specifically for the Response Selector component in the future?

Many Thanks,

I see, very good point. Would you be up for creating a PR to support it in the FallbackPolicy?


Will do that for sure

Many Thanks for your help and thank you for your patience,


Has anything been implemented for setting the threshold for the Response Selector? I think it is a very good point and a necessity. I am facing a similar issue and have just started playing around with the Response Selector.

Not yet. Do you mind creating a GitHub issue, if one hasn’t been created yet?
