Interactive Learning always predicts wrong intent

Hello everybody,

For a couple of days now, I've been stuck on a serious problem with my Rasa bot. After training my dialogue model and my NLU model, I tried to improve the bot using "Interactive Learning" (https://rasa.com/docs/core/interactive_learning/).

But when I use it, the problem is always the same: the NLU classification is always wrong.

Let us take this story as an example:

* greet
  - utter_greet

My NLU training data (in French) contains examples for this simple intent:

  "common_examples":[
    {
      "text":"salut",
      "intent":"greet",
      "entities":[]
    },
    {
      "text":"bonjour",
      "intent":"greet",
      "entities":[]
    },
    {
      "text":"bonsoir",
      "intent":"greet",
      "entities":[]
    }
  ]

But the interactive learning tool never detects the correct intent!

Bot loaded. Type a message and press enter (use '/stop' to exit).
2018-10-12 11:51:08 DEBUG    rasa_core.tracker_store  - Creating a new tracker for id 'default'.
127.0.0.1 - - [2018-10-12 11:51:08] "GET /conversations/default/tracker?include_events=APPLIED HTTP/1.1" 200 406 0.001513
? Next user input:  salut
2018-10-12 11:51:22 DEBUG    rasa_core.tracker_store  - Recreating tracker for id 'default'
2018-10-12 11:51:22 DEBUG    rasa_core.processor  - Received user message 'salut' with intent '{u'confidence': 1.0, u'name': u'salut'}' and entities '[]'
2018-10-12 11:51:22 DEBUG    rasa_core.processor  - Logged UserUtterance - tracker now has 2 events
127.0.0.1 - - [2018-10-12 11:51:22] "POST /conversations/default/messages HTTP/1.1" 200 716 0.002946
2018-10-12 11:51:22 DEBUG    rasa_core.tracker_store  - Recreating tracker for id 'default'
127.0.0.1 - - [2018-10-12 11:51:22] "GET /conversations/default/tracker?include_events=AFTER_RESTART HTTP/1.1" 200 716 0.002047
? Is the NLU classification for 'salut' with intent 'salut' correct?  (Y/n)

Shouldn't the bot, during interactive training, instead ask: "Is the NLU classification for 'salut' with intent 'greet' correct? (Y/n)"?

Some additional information:

  • If I run the bot in the “standard way” (with rasa_core.run), the intent is correctly classified.
  • My NLU pipeline is configured for the French language (a rough sketch of such a config is shown below).
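
For reference, a minimal French nlu_config.yml along those lines looks roughly like this (just a sketch; the spacy_sklearn pipeline template is an assumption, any pipeline that supports "fr" would work the same way):

  language: "fr"
  pipeline: "spacy_sklearn"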

Commands used:

* python -m rasa_core.train -d xxx_domain.yml -s data/stories.md -o models/dialogue
* python -m rasa_nlu.train -c nlu_config.yml --data data/data.json -o models --fixed_model_name nlu --project xxx --verbose
* python -m rasa_core.train --online -o models/dialogue -d xxx_domain.yml -s data/stories.md 

Thank you in advance for your valuable help.

TBX

Your online training command is missing the NLU model:

python -m rasa_core.train --online -d xxx_domain.yml -s data/stories.md -o models/dialogue -u <your NLU model>

Your bot doesn't know where the NLU model is and hence falls back to the default regex interpreter.
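
Based on your NLU training command above (-o models --project xxx --fixed_model_name nlu), the trained model should end up under models/xxx/nlu (assuming rasa_nlu's usual <output>/<project>/<model_name> layout), so the full command would look something like:

python -m rasa_core.train --online -d xxx_domain.yml -s data/stories.md -o models/dialogue -u models/xxx/nlu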

Thank you so much @souvikg10 for your fast (and working!) answer.

So maybe it would be a good idea to add the "-u" option to the Interactive Learning documentation?

TBX

I suppose it is meant to be handled by endpoints.yml in

python -m rasa_core.train \
  --online -o models/dialogue \
  -d domain.yml -s stories.md \
  --endpoints endpoints.yml

though it is not quite clear what endpoints.yml should contain, and by default people keep their NLU models in the same project. Maybe the documentation needs to reflect that.
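
For what it's worth, a minimal endpoints.yml that points Rasa Core at a separately running NLU server could look roughly like this (only a sketch; the host and port, and running NLU as a server at all, are assumptions rather than something from this thread):

  nlu:
    url: "http://localhost:5000"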

@tbx yes, good point. Would you mind submitting a PR for that?

Yes, I can do it. But can you tell me the procedure, please?

Regards

Just fork the repo and make the adjustments, and then you can open a PR!