A question about the Quickstart 'greeting' tutorial

Hi, I am following this tutorial:

It mostly works. However, I have two specific questions:

  1. After I re-train my NLU model, do I need to re-train my Core dialogue model?

  2. The results of the following two runs differ:

     a. Running the command line: python -m rasa_core.run -d models/dialogue/ -u models/current/nlu/
     b. Running the following API code:

    from rasa_core.agent import Agent
    from rasa_core.interpreter import RasaNLUInterpreter

    interpreter = RasaNLUInterpreter('models/current/nlu')
    messages = ["Hi! you can chat in this window. Type 'stop' to end the conversation."]
    agent = Agent.load('models/dialogue', interpreter=interpreter)

    while True:
        print(messages[-1])
        a = input()
        messages.append(a)
        if a == 'stop':
            break
        responses = agent.handle_message(a)
        for r in responses:
            messages.append(r.get("text"))

My dialog looks like the following through the command-line run:

    Your input → hi
    Hey! How are you?
    Your input → sad
    Here is something to cheer you up: Image: https://i.imgur.com/nGF1K8f.jpg
    Did that help you?

However, with my API call running, the output from the bot missed the "Here is something to cheer you up: Image: …" part. Instead, it went directly to "Did that help you?"

What might have caused this difference between the two cases? Is there a way to debug it? Thanks.

Hey @lingvisa. You should retrain your Core model if the changes you made on the NLU side will have some effect on the dialogue (for example, if you added new intents or entities). Otherwise, you don't have to retrain the Core model, because it loads the trained NLU model, which should already have your changes saved.
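If you do need to retrain, here is a minimal sketch of the Core retraining step with the Python API, assuming the file layout from the quickstart (domain.yml, stories.md, and models/dialogue are assumptions about your project, and the exact train()/persist() signatures depend on your rasa_core version):

    # Minimal sketch of retraining the Core dialogue model.
    # domain.yml / stories.md / models/dialogue are assumed paths from the quickstart.
    from rasa_core.agent import Agent
    from rasa_core.policies.keras_policy import KerasPolicy
    from rasa_core.policies.memoization import MemoizationPolicy

    agent = Agent('domain.yml', policies=[MemoizationPolicy(), KerasPolicy()])
    training_data = agent.load_data('stories.md')
    agent.train(training_data)
    agent.persist('models/dialogue')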

Regarding your second question - I would say it happened because of the small amount of training data used to train the models. Since the training sample was tiny, it's likely that the bot makes mistakes or is unstable in the actions it predicts.
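To debug the difference between the two runs, one thing you can try is enabling debug logging before loading the agent, so the NLU parse and the actions the policy predicts are logged for each message. A minimal sketch, reusing the same paths as in your script (how much detail is logged at DEBUG level depends on your rasa_core version):

    # Sketch: enable debug logging to see the interpreter result and the
    # predicted actions for each message (standard Python logging).
    import logging

    from rasa_core.agent import Agent
    from rasa_core.interpreter import RasaNLUInterpreter

    logging.basicConfig(level=logging.DEBUG)

    interpreter = RasaNLUInterpreter('models/current/nlu')
    agent = Agent.load('models/dialogue', interpreter=interpreter)

    for response in agent.handle_message('sad'):
        print(response.get('text'))

Comparing that log output with what the command-line runner prints should show where the two runs diverge (for example, a different intent confidence or a different predicted action).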

Thanks, Juste.

Hi, Juste:

Regarding my second question, I did more testing with another tutorial, the "joke" bot from the starter pack:

https://github.com/RasaHQ/starter-pack-rasa-stack

I encountered the same issue. When I use the command-line tool to demo the dialog, it works as expected. For example:

    Question: Can you tell me a joke?
    Answer: In soviet Russia Chuck Norris still kicks your ass!

However, when I use the API code from my earlier message, it doesn't work correctly. The bot just repeats the question without telling the joke. I have tested many times with both the command-line tool and the API call in my code, and the results are consistent: the command-line bot works as expected, but the API code does not always work correctly (it does work in some other tests). The model is exactly the same in both cases.

Since I encountered the same issue in two tutorials, I suspect there may be an issue with using code to test the bot. Would you please check the code to see whether there is any potential issue that caused this discrepancy? Thanks a lot.