Multiple questions related to testing the bot

Hello all,

This topic has been created for the following questions related to bot testing, for which I could not find relevant solutions or answers. I kindly request you to go through them, even though it might take a bit of your time.

  1. I am currently testing my Rasa NLU model using k-fold cross-validation, and interestingly I found that one of the five fold models gave nearly the same metrics as the model trained on the entire data. There is no option to save the models produced during cross-validation (5 models if k=5); please tell me if there is a way to do so.
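
(For completeness: the closest workaround I can think of is to script the splits myself so that each run's model gets persisted. The loop below is only a sketch of that idea, not a strict k-fold, since independent random splits can overlap; the training_data.md/test_data.md file names are what I assume `rasa data split nlu` produces by default.)

```
# Sketch of a workaround: repeat a random split and persist one model per run.
# Not a strict k-fold; splits are drawn independently, so folds may overlap.
for i in 1 2 3 4 5; do
  rasa data split nlu --nlu data/nlu.md --training-fraction 0.8 --out splits/run_$i
  rasa train nlu --nlu splits/run_$i/training_data.md --config config.yml \
      --out models --fixed-model-name nlu_run_$i
  rasa test nlu -u splits/run_$i/test_data.md --model models/nlu_run_$i.tar.gz \
      --out results/run_$i
done
```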

  2. The train/test split command generates four files, which are different from what is supposed to be in the actual test folder (the conversation_test.md file). I could not understand where I can really use the outputs of the train/test split. My sincere apologies if you find this question rather silly, but please tell me.
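
(To clarify what I mean: my current understanding of the intended flow, assuming the default train_test_split/ output directory, is something like the one below, and I am not sure this is how the outputs are meant to be used.)

```
# Assumed flow: train on the generated training split, then evaluate on the
# held-out split. Directory and file names are the defaults as far as I can tell.
rasa data split nlu --nlu data/nlu.md --training-fraction 0.8
rasa train nlu --nlu train_test_split/training_data.md --config config.yml
rasa test nlu -u train_test_split/test_data.md
```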

  3. The `rasa test nlu` command is not producing a separate metric for response selector models. I have seen the documentation, which refers to using the `--report` parameter to generate separate results for the response selector, but I am unable to accomplish this.

I have used the following command and it is not working; please tell me a way to do the same.

```
rasa test nlu -u data/nlu.md --config config.yml --cross-validation --report
```
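
(Looking at the `rasa test nlu --help` output, my guess is that the reports are simply written to the directory given by `--out` rather than being toggled by a `--report` flag, i.e. something like the command below, but I have not been able to confirm that this produces a separate response selector report.)

```
# Assumption: reports land in the --out directory (default "results")
# instead of being enabled by a separate --report flag.
rasa test nlu -u data/nlu.md --config config.yml --cross-validation --out results
```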
  4. I am trying to do hyperparameter tuning as per the instructions. In my situation, I have used a response selector and an entity synonym mapper too. The data splitting part is confusing, as I have a responses.md file. Please suggest a way to do the splitting in such a case.

  5. The usual way of writing the conversation_test.md file is clear; however, when I add a response selector into the pipeline, it is not clear how to create the conversation_test.md file:

```
## definition FAQs
* greet: Hello
  - utter_greet

* def/MachineLearning: what is machine learning
  - respond_def  ---> what to place here?

## business FAQs
* bus/service: what services do you offer
  - respond_bus  ---> what to place here?
```

The normal mapping can be found under either the action or utter keywords, whereas the response selector works differently and the mapping happens in a separate file without any utter/action reference. Please tell me the correct format for the same.
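
(For what it is worth, my current guess based on the retrieval actions docs is that nothing extra goes after the respond_ action: the test story just references respond_def / respond_bus, and the concrete response is picked by the selector from responses.md. Please correct me if this is wrong.)

```
## definition FAQs
* def/MachineLearning: what is machine learning
  - respond_def

## business FAQs
* bus/service: what services do you offer
  - respond_bus
```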

I am grateful to all the users and developers in this forum who are so humble in taking the time to answer all the questions. Rasa is a wonderful open-source platform for sure, and considering the vast number of features, the questions I have posted might take some time to see the limelight.

Thank you!


Were you able to get an answer for question 3? I see the response selector evaluation module as part of rasa/nlu/test.py; however, the terminal doesn't recognise the --report flag, although it is mentioned in the blog.

Hey @dakshvar22, will you be able to help with this?

```
$ rasa test nlu -u tests\validation_data\test.md --out results\res --successes --report
usage: rasa [-h] [--version] {init,run,shell,train,interactive,test,visualize,data,x}
rasa: error: unrecognized arguments: --report
```

Nope, I have not found any answer. Please post it here if you find one.