The intent classifier's response is wrong

The response of my intent classifier is wrong. I have some training data for the intent “ask_weather”, e.g.:

[image: training examples for the “ask_weather” intent]

After training, if I type in “what is the weather now”, it returns the right intent “ask_weather”. But if I casually type in “Open the door” or “Turn on the light”, the response is still “ask_weather”. How can I fix this problem?
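For reference, here is a minimal sketch of what that training data would look like in Rasa's YAML NLU format. The first example sentence comes from this post; the second is a made-up placeholder, since the actual examples are only visible in the image above.

```yaml
# nlu.yml - hypothetical reconstruction of the training data
version: "2.0"
nlu:
- intent: ask_weather
  examples: |
    - what is the weather now
    - how is the weather today   # placeholder; the real examples are in the image
```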

Hello!

Could you give more details?

  1. Do you have other intents?
  2. Can you show the pipeline in your config?

Hello, @artemsnegirev

I just set up a demo and there are only three intents, all similar to asking about the weather. The following picture is my config: I use SpacyNLP with the Chinese model, and my training data is also Chinese. I translated it to English to make posting the question easier.

You have too little training data.

It’s recommended to have at least 10 examples per intent, while you only have 2. The train-test split usually holds out 20% of your training data for testing, but since you have only 2 examples it effectively takes 50%, so you are left with a single example.

And quality matters more than quantity, of course: examples should be as dissimilar as possible (that does not mean having similar examples is bad, it just doesn’t help much).

Take a look at this example. You don’t have entities, so you don’t need as much data.
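As a rough illustration of “at least 10 dissimilar examples” (all sentences below are invented for illustration, not taken from the original data):

```yaml
# nlu.yml - aim for variety in phrasing and vocabulary, not near-duplicates
nlu:
- intent: ask_weather
  examples: |
    - what is the weather now
    - how's the weather today
    - will it rain tomorrow
    - is it going to be sunny this weekend
    - do I need an umbrella
    - what's the temperature outside
    - weather forecast for tomorrow please
    - is it cold in the evening
    - any chance of snow tonight
    - how windy is it right now
```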


I would also add to @ChrisRahme's answer that you don't really need the CountVectorsFeaturizer, as it may lead to overfitting; Spacy embeddings are enough for your purpose.
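A minimal config.yml sketch of such a pipeline, assuming the Chinese spaCy model zh_core_web_md is installed (the model name is an assumption; the component names are standard Rasa 2.x components):

```yaml
# config.yml - spaCy-only features, no CountVectorsFeaturizer
language: zh
pipeline:
- name: SpacyNLP
  model: "zh_core_web_md"  # assumed model; use whichever Chinese spaCy model you installed
- name: SpacyTokenizer
- name: SpacyFeaturizer
- name: DIETClassifier
  epochs: 100
```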

You could also try:

  1. using only BPE embeddings for Chinese: BytePairFeaturizer - Rasa NLU Examples (see the sketch after this list)
  2. exploring this repo: GitHub - howl-anderson/rasa_chinese: a Rasa component extension package built specifically for the Chinese language, providing many Chinese-specific components
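For option 1, a sketch of the BytePairFeaturizer configuration based on the Rasa NLU Examples docs; the vocabulary size and dimension below are illustrative values and must match a combination published by the BPEmb project:

```yaml
# config.yml - Chinese BytePair embeddings via rasa-nlu-examples
language: zh
pipeline:
- name: JiebaTokenizer  # Chinese word segmentation, shipped with Rasa
- name: rasa_nlu_examples.featurizers.dense.BytePairFeaturizer
  lang: zh
  vs: 10000   # subword vocabulary size (illustrative)
  dim: 100    # embedding dimension (illustrative)
- name: DIETClassifier
  epochs: 100
```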

@ChrisRahme Thanks for your very nice and detailed response. The key point is that if the input is some other casual phrase, the output is still ‘ask_weather’. Of course, it works well if the input really is about the weather.

It’s all about training data again.

Consider adding an out-of-scope intent if the inputs should not be understood by the bot.
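A sketch of what that could look like in Rasa 2.x training data format. The first two example phrases come from earlier in this thread; the remaining phrases and the response name utter_out_of_scope are placeholders:

```yaml
# NLU data - catch-all intent for inputs the bot should not try to handle
nlu:
- intent: out_of_scope
  examples: |
    - Open the door
    - Turn on the light
    - tell me a joke
    - order a pizza for me

# Rule - always answer out-of-scope messages with a fixed response
rules:
- rule: respond to out-of-scope messages
  steps:
  - intent: out_of_scope
  - action: utter_out_of_scope
```

utter_out_of_scope would also need a matching response defined in domain.yml.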

@ChrisRahme Yeah, I know this is due to the amount of training data. I have just set up a demo, so there are only a few intents. But if I add an out-of-scope intent here, the training examples for that intent would be innumerable, since users can say anything unexpected in a real application.

Actually you don’t need to add every possible out_of_scope example; DIET with good regularization will learn the common patterns as long as you use contextual encoders (T5, BERT, …). The training examples just need to be as diverse as you can make them, and 50 examples is a good starting point. You can take them from the rasa demo project. I usually translate these phrases into the target language (e.g. Russian).
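One way to pair DIET with a contextual encoder in Rasa is the LanguageModelFeaturizer. A sketch assuming the bert-base-chinese weights from Hugging Face; the tokenizer choice and epoch count are assumptions:

```yaml
# config.yml - DIET on top of contextual BERT features
language: zh
pipeline:
- name: JiebaTokenizer                # assumed tokenizer for Chinese text
- name: LanguageModelFeaturizer
  model_name: "bert"
  model_weights: "bert-base-chinese"  # pretrained Chinese BERT checkpoint
- name: DIETClassifier
  epochs: 100
```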

Great suggestions! Is there any method that can assign text falling outside all of my specified intents to the “out-of-scope” intent?

@artemsnegirev Any ideas for preventing overfitting, other than adding more data?

Sure! You can play with learning_rate, drop_rate, and regularization_constant as described in the DIET hyperparameters section. You can use cross-validation to check pipeline performance fairly. Also check out this repo to automate hyperparameter search.
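As a sketch, those knobs go on the DIETClassifier entry in config.yml; the values below are starting points to tune from, not recommendations:

```yaml
# config.yml - regularization-related DIET hyperparameters
pipeline:
- name: DIETClassifier
  epochs: 100
  learning_rate: 0.001            # step size; lower values train more conservatively
  drop_rate: 0.3                  # dropout rate; higher values regularize more
  regularization_constant: 0.002  # weight of the L2 regularization loss
```

Cross-validation can then be run from the command line with `rasa test nlu --cross-validation --folds 5`.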


I wrote a blog post discussing basic principles for improving your intent detection model. I know this thread is old, but I thought someone might still find these tips useful: You might be training your chatbot wrong | Everything Chatbots Blog