Running separate Core and NLU

Hi all,

In the older Rasa 0.14, Core and NLU were separate, and I was running them separately.

In the newer Rasa 1.2.2 / 1.3.0, can I still run Rasa NLU and Rasa Core separately? I can run NLU by starting rasa with the --enable-api switch and supplying an NLU-only model. However, when I start Core with a Core-only model, no NLU processing takes place, even though my endpoints.yml contains an nlu entry:

nlu:
  url: "http://localhost:5005"

My NLU start command:

rasa run --enable-api --model models/nlu-20190909-0955.tar.gz --endpoints configuration/endpoints.yml --debug

My Core start command:

rasa run --endpoints configuration/endpoints.yml -p 5010 --credentials configuration/credentials.yml --model models/core-20190909-084306.tar.gz --debug --enable-api

The following curl to Core does not trigger a call to NLU:

curl -XPOST http://localhost:5010/conversations/default/messages -d '{"text": "Hello!", "sender": "user" }'
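
(Side note: if I read the Rasa 1.x HTTP API right, POST /conversations/{id}/messages only logs the message on the tracker and does not run NLU or predict actions. A call that goes through the full NLU + prediction pipeline would use the REST input channel instead; a sketch, assuming the rest channel is enabled in configuration/credentials.yml:)

curl -XPOST http://localhost:5010/webhooks/rest/webhook -d '{"sender": "user", "message": "Hello!"}'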

NLU itself seems to be working fine, as the following command works:

curl http://localhost:5005/model/parse -d '{"text":"hello"}'
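
For reference, the parse endpoint returns the interpreted intent and entities, something like this (illustrative output, values will differ):

{"intent": {"name": "greet", "confidence": 0.95}, "entities": [], "text": "hello"}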

How can I run Rasa NLU and Rasa Core separately?

Thank you.

Why do you want to run them separately?

We would like to use our own NLU and use Rasa Core to drive the dialogue. As a POC, I’m trying to separate them. For the time being I’m working with Rasa NLU and Rasa Core. We are also curious whether we could use only Rasa Core in the future.

You can definitely use only Core or only NLU. However, if you use them both, you should use them as one combined model/server, not separately.
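
For the combined setup, a minimal sketch (the model file name here is just the timestamped default that rasa train produces):

rasa train
rasa run --enable-api --model models/20190909-084306.tar.gz --endpoints configuration/endpoints.yml --credentials configuration/credentials.yml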

So is this piece of configuration in endpoints.yml just a legacy leftover from Rasa 0.14? It seems to be ignored: I’m not getting any calls on the NLU server if I load only a Core model into rasa.

Hi, the following video suggests using 2 NLU servers for a single Core server: https://www.youtube.com/watch?v=jMGgT4lgI28. I am assuming, then, that they should be run separately. Am I missing something?

Hi @Cekir, if you want to have a bot that runs multiple NLU models, that is the suggested way to do it. If you only need one NLU model, we recommend running one single combined Rasa model instead of running the two servers separately.
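
If you do end up running them separately, the two model types are trained individually with the standard CLI (the output names are the timestamped defaults, matching the files you are already loading):

rasa train nlu
rasa train core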

Some of these pages may be helpful if you go the separate route: