How to debug when ResponseSelector output is wrong?

In the case of an FAQ bot using the TwoStageFallbackPolicy: when the probability of the chosen intent falls below the threshold, the bot asks "Did you mean faq?" with yes/no options. The user will never know what this "faq" is! How can it be mapped to the actual intent so that the user understands and answers appropriately?

It looks like the NLU classifier is returning the intent as faq instead of faq/intent_name as it appears in the nlu.md file. When the ResponseSelector does not give the right answer, how can I find out whether the NLU classifier output the wrong intent, or whether the classifier found the correct intent but the ResponseSelector did not pick the correct answer from the responses.md file?
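One way to narrow this down is to run `rasa shell nlu` (or call the `/model/parse` HTTP endpoint) and inspect the parse output directly. Below is a rough sketch of what to check; the dictionary mimics the shape of Rasa 1.x parse data, and the exact keys are an assumption that may differ between versions:

```python
# Sketch: decide which component to investigate, given NLU parse data.
# The dict below mimics `rasa shell nlu` / `/model/parse` output in
# Rasa 1.x; the exact key names are an assumption.
parse_data = {
    "intent": {"name": "faq", "confidence": 0.93},
    "response_selector": {
        "faq": {
            "response": {"name": "faq/ask_hours", "confidence": 0.41},
            "ranking": [
                {"name": "faq/ask_hours", "confidence": 0.41},
                {"name": "faq/ask_price", "confidence": 0.39},
            ],
        }
    },
}

def diagnose(parse_data, expected_retrieval_intent):
    """Return which component most likely caused a wrong answer."""
    base_intent, _, _sub = expected_retrieval_intent.partition("/")
    if parse_data["intent"]["name"] != base_intent:
        # The message never reached the retrieval intent at all.
        return "nlu_classifier"
    selected = parse_data["response_selector"][base_intent]["response"]["name"]
    if selected != expected_retrieval_intent:
        # Routing was right, but the selector ranked the wrong response first.
        return "response_selector"
    return "ok"

print(diagnose(parse_data, "faq/ask_hours"))
```

If the top-level intent is not `faq`, the classifier is the problem; if it is `faq` but the selector's top-ranked response is wrong, the ResponseSelector is.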

In intent classification, the NLU data contains examples for each intent (class), which are used to build the classification model. For the ResponseSelector model, what data is used, and what are the classes and the observations/examples?
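For context, the ResponseSelector is trained from retrieval-intent examples in the NLU data plus the candidate responses: each sub-intent such as `faq/ask_hours` acts as a class, and the user examples listed under it are the observations. A minimal sketch of the Rasa 1.x Markdown format (the intent names and responses here are invented for illustration):

```markdown
<!-- nlu.md: observations (examples) for each retrieval sub-intent (class) -->
## intent: faq/ask_hours
- when are you open?
- what are your opening hours?

## intent: faq/ask_price
- how much does it cost?

<!-- responses.md: the candidate responses the selector chooses between -->
## ask hours
* faq/ask_hours
  - We are open 9am to 5pm, Monday to Friday.

## ask price
* faq/ask_price
  - Plans start at $10 per month.
```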

Appreciate if any one clarify these questions or point me to right place where these are explained. Tried to go through Rasa documentation and masterclass series, but could not figure these out. Thanks in advance

Hi @psvrao.

Here is an example of the TwoStageFallbackPolicy where it retrieves a nice prompt for ResponseSelector intents: https://github.com/RasaHQ/rasa-demo/blob/4fd1f9d47650708a6ef0763978d2138aa0ece651/actions/actions.py#L490 . Hope that helps you :slight_smile:
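The rough idea in that action is to look up the full retrieval intent in the ResponseSelector's parse data and turn it into a readable prompt instead of showing the bare intent name "faq". A simplified, hypothetical sketch (the key names follow Rasa 1.x parse output and are an assumption; a real implementation would live in a custom action reading `tracker.latest_message`):

```python
def friendly_prompt(parse_data):
    """Build a human-readable 'Did you mean ...?' prompt instead of showing 'faq'."""
    intent = parse_data["intent"]["name"]
    selector = parse_data.get("response_selector", {}).get(intent)
    if selector is None:
        # Not a retrieval intent; fall back to the plain intent name.
        return f"Did you mean '{intent}'?"
    # Use the sub-intent name, e.g. 'faq/ask_hours' -> 'ask hours'.
    full_name = selector["response"]["name"]
    sub_intent = full_name.split("/", 1)[-1].replace("_", " ")
    return f"Did you mean to ask about '{sub_intent}'?"

parse_data = {
    "intent": {"name": "faq"},
    "response_selector": {
        "faq": {"response": {"name": "faq/ask_hours"}},
    },
}
print(friendly_prompt(parse_data))  # Did you mean to ask about 'ask hours'?
```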

> For ResponseSelector model, what data is used, what are the classes and what are observations/examples?

The ResponseSelector works very similarly to intent classification. It calculates embeddings for user messages and the candidate responses and checks which are most similar. While the ResponseSelector does this for user messages and the different responses, the "regular" NLU classifier computes the similarity between the user message and the intent labels.
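To make "calculates embeddings and checks which are most similar" concrete, here is a toy sketch using hand-made vectors and cosine similarity. In the real model the embeddings are learned by a neural network; the numbers below are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented embeddings: in the trained model these come from the network.
message_embedding = [0.9, 0.1, 0.0]
candidates = {
    "faq/ask_hours": [0.8, 0.2, 0.1],  # similar direction -> high score
    "faq/ask_price": [0.0, 0.1, 0.9],  # different direction -> low score
}

# The selector picks the candidate whose embedding is most similar.
best = max(candidates, key=lambda name: cosine(message_embedding, candidates[name]))
print(best)  # faq/ask_hours
```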