Only NLU for RasaX

My application does not have a conversation flow; we are just doing one-time message analysis, so I want to use only the NLU option. However, I am not able to expose the model/parse endpoint in Rasa X. How can we capture/log those messages from users and use them for further training?

Hi @krish, the endpoint you are looking for is the /logs endpoint, which is the one that interacts with the Annotate new data page. That endpoint acts the same as the model/parse endpoint, but also creates a log entry in Annotate new data so that you can look at all of the messages and save/fix them as necessary.
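A minimal sketch of how such a request could be assembled. The path `/api/projects/default/logs` and the bearer-token header are assumptions about a typical Rasa X deployment; check the HTTP API reference for your Rasa X version before relying on them:

```python
import json
from urllib.parse import urljoin

def build_logs_request(base_url, token, text):
    """Build URL, headers, and JSON body for a POST to the Rasa X
    /logs endpoint. The exact path and auth scheme are assumptions;
    verify them against your Rasa X version's HTTP API docs."""
    url = urljoin(base_url, "/api/projects/default/logs")
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": text})
    return url, headers, body

url, headers, body = build_logs_request(
    "http://localhost:5002", "<api-token>", "book a table for two"
)
# Send it with any HTTP client, e.g.:
#   requests.post(url, headers=headers, data=body)
```

Unlike /model/parse, this should both return the parse result and leave the message in the Annotate new data inbox for review.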

I added --enable-api to the docker-compose.yml file,


but I’m not able to access http://mydomain/api/model/parse. It says not found. However, in the local mode of Rasa X I am able to access it: locally I run rasa x --enable-api --cors "*" and can reach it at http://localhost:5005/model/parse.
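For context, in a server deployment the flag would need to be appended to the command of the Rasa service in docker-compose. A minimal sketch, assuming the service is named rasa-production and its command is overridden (the service name, image, and existing flags in your file may differ):

```yaml
# docker-compose.override.yml (sketch; service name and flags are assumptions)
services:
  rasa-production:
    # keep whatever flags your compose file already passes and append --enable-api
    command: ["rasa", "run", "--enable-api", "--cors", "*", "--port", "5005"]
```

Even with the flag set, whether the API is reachable at http://mydomain/api/model/parse depends on how the deployment's reverse proxy routes requests, which may be why it returns not found.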

As I mentioned, you should use the /logs endpoint for this, not /model/parse. It will do the same thing. You could hit the parse endpoint via a redirect, but then you won’t get any of that information in Rasa X.

Hello @krish, you can check the HTTP API.

Hello @erohmensing - I had a follow-up question to this.

Let’s say that I also have an NLU-only model served on a Rasa Open Source server. If I connect a Rasa X instance to this deployed bot using the live monitoring feature of Rasa X, which API method/endpoint should I use to pass messages to the Rasa Open Source server?

If I use model/parse for the Rasa Open Source server to pass an unlabeled utterance, as indicated on this page, would the labeled utterance automatically show up in the connected Rasa X instance, for review?

Messages will only get to the connected model server if they go through a core model, since the Agent is responsible for sending the events through an event broker. /model/parse only returns parse data and does not create any events.
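To illustrate what "/model/parse only returns parse data" means in practice, here is a sketch of reading the intent out of a parse result. The response shape (text, intent, entities, intent_ranking) follows the Rasa Open Source HTTP API, but the concrete values below are illustrative, not from a real server:

```python
import json

# Illustrative response; in reality this would come from
# POST http://<rasa-server>:5005/model/parse with body {"text": "..."}
sample_response = json.loads("""
{
  "text": "book a table for two",
  "intent": {"name": "book_table", "confidence": 0.93},
  "entities": [{"entity": "number", "value": "two", "start": 17, "end": 20}],
  "intent_ranking": []
}
""")

def top_intent(parse_result, threshold=0.5):
    """Return the predicted intent name, or None if confidence is
    below the threshold. Note: calling /model/parse creates no events
    server-side, so nothing will appear in a connected Rasa X."""
    intent = parse_result.get("intent", {})
    if intent.get("confidence", 0.0) >= threshold:
        return intent.get("name")
    return None

print(top_intent(sample_response))  # book_table
```

Since no events are emitted, the caller sees this JSON and nothing else; that is why the utterance never reaches the Rasa X inbox.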

If you would like to use NLU only and get entries in your NLU inbox, I would still recommend the /logs endpoint on the Rasa X server (and host your model in rasa-production).

Is it possible, then, to call /model/parse from a deployment done with Rasa X? It’s not working for me, whereas locally it does work.

Can anybody help?

I have exactly the same problem. Have you managed to figure out how to solve it?