Integrate Rasa NLU with an external conversation manager

All,

What is the best approach to integrate Rasa NLU (the intent interpreter) with an external conversation manager written in Java? Do I need to build a Flask microservice between the two?

When you run Rasa as a service, you also host the NLU model over REST.

rasa run --enable-api

This exposes the NLU model as well. You’re likely interested in the parse endpoint, which will predict intents/entities. You may want to wrap this service in a Docker container though.

That said, what conversation manager do you want to use? Is there a reason why you’re not using the Rasa conversation manager? The rasa run command also exposes our conversation tracker over REST.


@koaning thank you for your response. The reason is that my company already has an existing home grown conversation manager that was written in Java.


@koaning thanks again. Could you please provide more detailed, step-by-step instructions for using the parse endpoint? I have the Rasa server up and running, please see the picture. How can I call the parse endpoint to predict intents/entities?

As shown in the docs, the endpoint lives at http://localhost:5005/model/parse, assuming you’re running it locally. You should be able to send a POST request with a JSON body containing a text field to get your predictions.
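As a sketch, a minimal Python client for that endpoint might look like this (the host/port assume a locally running `rasa run --enable-api`, and the `requests` library is assumed to be installed):

```python
def build_parse_payload(text: str) -> dict:
    # /model/parse expects a JSON body with a "text" field.
    return {"text": text}


def parse_message(text: str, host: str = "http://localhost:5005") -> dict:
    # Requires a running Rasa server started with `rasa run --enable-api`.
    import requests  # third-party: pip install requests

    resp = requests.post(host + "/model/parse", json=build_parse_payload(text))
    resp.raise_for_status()
    # The response JSON contains the predicted "intent", "intent_ranking",
    # "entities", and the original "text".
    return resp.json()
```

A Java client would do the equivalent: POST the same JSON body to the same URL and read the intent out of the response.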


What is a Conversation Manager?

The NLU part of the pipeline does intent prediction and entity detection.

The conversation manager/policy part of the pipeline takes this as input, together with the conversation so far, and uses it to predict the next action.

@KheireddineAzzez A Conversation Manager is the equivalent of the dialogue management component of Rasa Open Source. At my company, we have developed one in-house and we want to leverage it with the NLU model.

@koaning when wrapping this in a Docker container, is CMD ["start", "--enable-api"] the correct syntax to run rasa run --enable-api inside the container? Please see my complete Dockerfile below.

FROM rasa/rasa:2.5.0-full
USER root
WORKDIR /app
COPY . /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD ["start", "--enable-api"]
USER 1001

If I look at the Dockerfile definition, it seems the standard entrypoint is rasa, but the full command is rasa run --enable-api, not rasa start --enable-api.

Maybe this works:

FROM rasa/rasa:2.5.0-full
USER root
WORKDIR /app
COPY . /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD ["run", "--enable-api"]
USER 1001

@koaning Yessss! This works! Thank you so much! You’re amazing!


@koaning one more question, now that the service is up and running in the container, how do I call this service from outside the container? Our home-grown conversation manager is written in Java so I need to provide the endpoint for the conversation manager.

This depends a bit on how you’re running your container, but assuming security is configured to open the right port, you should be able to communicate with it over HTTP.

You should see a port number in the logs whenever this command is run:

rasa run --enable-api

There’s also a flag to change the port number. You should be able to find more info via:

rasa run --help
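For Docker specifically, the container’s port also needs to be published on the host; a sketch, where `my-rasa-nlu` is a placeholder for whatever tag you built the image with:

```shell
# Publish the container's default Rasa port (5005) on the host,
# so the Java conversation manager can reach it over HTTP.
docker run -p 5005:5005 my-rasa-nlu

# Quick smoke test from the host:
curl -X POST http://localhost:5005/model/parse -d '{"text": "hello"}'
```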

@koaning now I am having issues loading the model in the Docker container.

The Rasa model couldn’t be loaded; please see the screenshot below and the logs for details.

/domain GET get_domain

/ GET hello

/model PUT load_model

/model/parse POST parse

/conversations/<conversation_id:path>/predict POST predict

/conversations/<conversation_id:path>/tracker/events PUT replace_events

/conversations/<conversation_id:path>/story GET retrieve_story

/conversations/<conversation_id:path>/tracker GET retrieve_tracker

/status GET status

/model/predict POST tracker_predict

/model/train POST train

/conversations/<conversation_id:path>/trigger_intent POST trigger_intent

/model DELETE unload_model

/version GET version

2021-06-29 06:31:01 INFO root - Starting Rasa server on http://localhost:5005

2021-06-29 06:31:01 DEBUG rasa.core.utils - Using the default number of Sanic workers (1).

2021-06-29 06:31:01 INFO root - Enabling coroutine debugging. Loop id 92476912.

2021-06-29 06:31:01 INFO rasa.model - Loading model models/20210628-225258.tar.gz...

2021-06-29 06:31:01 DEBUG rasa.model - Extracted model to '/tmp/tmpmeay06bs'.

2021-06-29 06:31:03 DEBUG root - Could not load interpreter from 'models'.

2021-06-29 06:31:03 DEBUG rasa.core.tracker_store - Connected to InMemoryTrackerStore.

2021-06-29 06:31:03 DEBUG rasa.core.lock_store - Connected to lock store 'InMemoryLockStore'.

2021-06-29 06:31:03 DEBUG rasa.model - Extracted model to '/tmp/tmpdpwnqvqz'.

2021-06-29 06:31:04 ERROR rasa.core.agent - Could not load model due to cannot reshape array of size 170529788 into shape (684830,300).

/opt/venv/lib/python3.8/site-packages/rasa/shared/utils/io.py:97: UserWarning: The model at 'models' could not be loaded. Error: <class 'ValueError'>: cannot reshape array of size 170529788 into shape (684830,300)

/opt/venv/lib/python3.8/site-packages/rasa/shared/utils/io.py:97: UserWarning: Agent could not be loaded with the provided configuration. Load default agent without any model.

2021-06-29 06:31:04 DEBUG rasa.core.nlg.generator - Instantiated NLG to 'TemplatedNaturalLanguageGenerator'.

2021-06-29 06:31:04 INFO root - Rasa server is up and running.

2021-06-29 06:35:56 DEBUG rasa.core.processor - Received user message 'Hi' with intent '{'name': 'Hi', 'confidence': 1.0}' and entities '[]'

Those are warnings. Are you sure they’re errors? The model seems to be able to process user messages, or am I misinterpreting?

@koaning I think this is an ERROR.

ERROR rasa.core.agent - Could not load model due to cannot reshape array of size 170632188 into shape (684830,300).

The problem is, when I send a POST request to the endpoint in Postman, the endpoint echoes back the input text it receives instead of actually returning the intents. I noticed these logs from the container and thought this might be the issue.
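One way to check whether a real model is loaded (rather than the "default agent without any model" the warning above mentions) is the /status endpoint from the route table earlier in the thread. A sketch, assuming the server at localhost:5005 and the `model_file` field name from the Rasa 2.x HTTP API:

```python
def is_model_loaded(status: dict) -> bool:
    # "model_file" in the /status response should point at the loaded model
    # archive; it is empty/missing when only the default agent is running.
    return bool(status.get("model_file"))


def check_server(host: str = "http://localhost:5005") -> bool:
    # Requires a running Rasa server started with --enable-api.
    import requests  # third-party: pip install requests

    return is_model_loaded(requests.get(host + "/status").json())
```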

@koaning

Dockerfile

> FROM rasa/rasa:2.5.0-full
> MAINTAINER Ben Jenis
> COPY ./*.yml /app/
> COPY ./data /app/data/
> #COPY ./tests /app/tests/
> #COPY ./actions /app/actions/
> COPY ./requirements.txt /app/requirements.txt
> COPY ./models /app/models/
> WORKDIR /app
> USER root
> RUN pip install --upgrade pip
> RUN pip install -r requirements.txt
> RUN python -m spacy download en_core_web_lg
> #RUN python -m spacy link en_core_web_lg
> RUN ls ./models/
> EXPOSE 5005
> USER 1001
> CMD ["run", "-m", "models", "--enable-api"]
> 

config.yml

> 
> language: en
> 
> pipeline:
>   - name: SpacyNLP
>     model: "en_core_web_lg"
>   - name: SpacyTokenizer
>   - name: SpacyFeaturizer
>   - name: RegexFeaturizer
>   - name: LexicalSyntacticFeaturizer
>   - name: RegexEntityExtractor
>   - name: CRFEntityExtractor
>   - name: DIETClassifier
>     epochs: 100
>     entity_recognition: False
>     constrain_similarities: True
> 
> policies:
>   - name: MemoizationPolicy
>   - name: TEDPolicy
>     max_history: 5
>     epochs: 100
>     constrain_similarities: true
>   - name: RulePolicy

@koaning No, this is an ERROR

Can you confirm the Rasa version that you used during training? Is it version 2.5? It needs to match the Dockerfile.

@koaning yes, I did use Rasa 2.5.0 to train the model.
