Rasa bot is not predicting the correct intents

My config file is as follows:

```yaml
language: "en"

pipeline:
- name: "nlp_spacy"
  model: "en_core_web_lg"
- name: "tokenizer_spacy"
- name: "intent_entity_featurizer_regex"
- name: "intent_featurizer_spacy"
- name: "ner_crf"
- name: "ner_synonyms"
- name: "intent_featurizer_count_vectors"
- name: "intent_classifier_tensorflow_embedding"
```

I have 13 different intents with 10 or more examples for each.

One of the intents is greet, with examples like “hi”, “hello”, “hey”, etc. But when a user types “who am i”, instead of showing the out-of-scope message the bot predicts the greet intent, and I have no idea why.

The examples inside the greet intent are only of the “hey”, “hi”, “hello”, “hi there” type, which have no relation to the “who am i” question.

Can you please help me understand why it is predicting the wrong intent, and how I can see how it actually predicts intents so that I can improve it?


Did you initialise the default fallback intent?

Yes, I have already initialised the fallback action with an NLU threshold of 0.8 and a core threshold of 0.3.
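For reference, a fallback configured with those thresholds would look roughly like this in the Rasa Core policy configuration (the action name shown is the default one; adjust it to whatever your version and domain define):

```yaml
policies:
- name: "FallbackPolicy"
  nlu_threshold: 0.8
  core_threshold: 0.3
  fallback_action_name: "action_default_fallback"
```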

When I type “who am i”, it shows the greet intent with a confidence of 0.82, which is just not understandable. Kindly help.
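As a side note, part of why this is surprising can be seen with a tiny, self-contained sketch (pure Python, using only the greet examples mentioned in this thread): “who am i” shares no tokens at all with the greet training data, so a confident greet prediction is pure extrapolation by the embedding classifier over a very small training set, and at 0.82 it clears the 0.8 fallback threshold.

```python
# Hypothetical sketch: compare the tokens of "who am i" against the
# greet examples from the thread. Zero overlap means the classifier
# has no direct lexical evidence for greet and is extrapolating in
# embedding space from a very small training set.
greet_examples = ["hey", "hi", "hello", "hi there"]
message = "who am i"

greet_vocab = {token for example in greet_examples for token in example.split()}
overlap = greet_vocab & set(message.split())

print(sorted(greet_vocab))  # ['hello', 'hey', 'hi', 'there']
print(overlap)              # set() -- no shared tokens
```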

How many stories do you have? Do the stories take into account different scenarios?

I have around 30 stories, of which 7 are independent stories like:

```
## greet
* greet
  - utter_greet
```

And 22 stories for sequential questions and answers.

Try with this pipeline:

```yaml
language: "en"

pipeline:
- name: "tokenizer_whitespace"
- name: "intent_entity_featurizer_regex"
- name: "ner_crf"
- name: "ner_synonyms"
- name: "intent_featurizer_count_vectors"
- name: "intent_classifier_tensorflow_embedding"
  intent_tokenization_flag: true
  intent_split_symbol: "+"
```

Can you take the ‘who am i’ related intent as a separate story and check the output?

I can do that of course, but that’s not the right thing to do. If I make a separate story, the bot will eventually put it under a different intent, but for how many such outliers showing the wrong intent would I be able to do that?

I wanted some core reason behind such scenarios so that I could remove this bug at the root itself, but it seems like that’s not going to happen here.

Anyway, thanks.

So the bot follows the stories; that is the reason it’s not giving the output, as it has not seen it separately. I do this for all my intents: I just define each one as a separate story (initially). You could also write a story which begins with the ‘who am i’ related intent.
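For instance (the intent and utterance names here are hypothetical; use whatever you have defined, and remember to add “who am i” as an NLU example under that intent), such a story could look like:

```
## who am i handled as out of scope
* out_of_scope
  - utter_out_of_scope
```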

Also, use interactive training to correct the bot; that way you don’t have to manually write each story.
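The exact command depends on your Rasa version; the paths and flags below are placeholders to adapt, not a guaranteed invocation:

```shell
# Rasa 1.x and later:
rasa interactive

# Older rasa_core releases (flags may differ by version):
python -m rasa_core.train interactive -d domain.yml -s data/stories.md \
    -o models/dialogue --nlu models/nlu/current
```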

Okay, I can try interactive learning, but before that I want to know how many examples I have to give during interactive learning. If I previously had 22 stories for sequential questions and answers, and I now make all stories independent, how many training examples do I have to give to make the bot learn all my sequential questions and answers?

You don’t need to count and give an exact number. Once you start the interactive session, chat with the bot as any human would: ask it any question and continue the story however you wish. All the combinations will be captured automatically in the process. Keep stopping in between so each conversation is saved as a separate story.

For example: the first time, start with the ‘who am I’ intent and ask 2 or 3 more follow-up questions. The next time, start with the ‘greet’ intent and end with ‘who am I’, and so on.
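In story form (all intent and utterance names here are illustrative only), those two interactive sessions might be saved as something like:

```
## session 1: starts with who am I
* out_of_scope
  - utter_out_of_scope
* ask_followup_question
  - utter_followup_answer

## session 2: starts with greet, ends with who am I
* greet
  - utter_greet
* out_of_scope
  - utter_out_of_scope
```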

@dakshvar22 I am also getting the same issue: the NLU is predicting wrong intents on random and non-existent words.