[How To] migrate pre-trained Rasa model to Rasa X

Dear community,

As you can see below, I used mitie and jieba to build a Chinese bot. It works fine with Rasa. My question is how to migrate it to Rasa X. With the Docker installation of Rasa X, I uploaded my pre-trained model. However, there are a few things I don't know how to handle, and I couldn't find a solution in the docs or the forum. I need help here.

  1. Rasa X includes Rasa inside it. If I use the Rasa inside Rasa X to train my Chinese model, how do I install jieba and mitie with pip, as I did with standalone Rasa? I think the default installation doesn't include those packages.

  2. The other way to train my Chinese model is to use a Rasa server outside of Rasa X. In that case, how should I connect the separate Rasa server to the Rasa X server, if that's possible? Does Rasa X support this kind of connection?

  3. My app also includes the knowledge graph engine Grakn, which runs as another service. With Rasa X, should I install Grakn together with the Rasa X server or separately? Right now, my setup has Rasa and Grakn on the same EC2 instance, and the test run works just fine. After the migration, I don't know the best way to arrange these three services: Rasa, Rasa X, and Grakn.

  4. If it's possible to connect Rasa X with the existing Rasa and Grakn, how should I write endpoints.yml?

```yaml
# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: "zh"

pipeline:
- name: "MitieNLP"
  model: "data/total_word_feature_extractor_zh.dat"
- name: "JiebaTokenizer"
  dictionary_path: "data/jieba_userdict_zh.txt"
- name: "MitieEntityExtractor"
- name: "EntitySynonymMapper"
- name: "MitieFeaturizer"
- name: "SklearnIntentClassifier"

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
- name: "KerasPolicy"
  batch_size: 50
  epochs: 200
  max_training_samples: 300
- name: "MappingPolicy"
- name: "MemoizationPolicy"
  max_history: 5
- name: "FallbackPolicy"
  nlu_threshold: 0.3
  ambiguity_threshold: 0.1
  core_threshold: 0.3
  fallback_action_name: 'action_default_fallback'
- name: "FormPolicy"
```
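For context, the three FallbackPolicy thresholds in the config above combine roughly like this (a simplified sketch of the documented behaviour, not Rasa's actual implementation):

```python
def should_fallback(nlu_conf, second_conf, core_conf,
                    nlu_threshold=0.3, ambiguity_threshold=0.1,
                    core_threshold=0.3):
    """Simplified FallbackPolicy check: trigger the fallback action when
    the NLU prediction is weak, ambiguous, or Core is unsure."""
    if nlu_conf < nlu_threshold:
        return True                          # intent confidence too low
    if nlu_conf - second_conf < ambiguity_threshold:
        return True                          # top two intents too close
    if core_conf < core_threshold:
        return True                          # action prediction too unsure
    return False

print(should_fallback(0.9, 0.2, 0.8))    # -> False: everything confident
print(should_fallback(0.25, 0.1, 0.8))   # -> True: below nlu_threshold
```

Whichever check fires, the action named by fallback_action_name runs.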

My last question is related to uploading the trained model to Rasa X:

  1. After uploading the model with curl and activating it, NO stories, synonyms, or lookup tables show up. I'm not sure what information is included in the model archive. With plain Rasa, the project folder structure is there and Rasa easily finds what it needs. When I decompress a file like '20200213-102926.tar.gz', there are nlu/, core/, and fingerprint.yml, but I couldn't find nlu.md or stories.md. I can upload the NLU and story data through the UI if that's necessary, but the synonyms already defined in my NLU data do not show up on the synonyms page. Do I have to add them one by one through the "+" button?

Thanks for your time and help.


Hi @yiouyou

  1. You will need to use a custom build of the rasa container for this. There is some documentation on how to do this in relation to custom actions here. The concept is the same: you just replace the rasa-sdk reference with rasa, and replace the default rasa image with your custom one in the docker-compose.override.yml file.

  2. This should be resolved with the answer to the first point - you can override the default rasa image.

  3. The Grakn dependencies will be handled by the action server, which is the app service. You can read more about how to customise that here.

  4. This should be answered by the points above.

  5. NLU data doesn't get extracted from the uploaded model; it's handled separately. You can either upload the data through the UI, or have it synced from your GitHub repo (if you're using one) using Integrated Version Control.
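For the custom rasa build in point 1, a minimal Dockerfile sketch could look like the following (the base tag is only an example — pin it to the rasa version your Rasa X release expects; MITIE is installed from GitHub since it isn't reliably published on PyPI):

```dockerfile
# Custom rasa image with the Chinese pipeline dependencies added.
# Base tag is an example; match it to your Rasa X's rasa version.
FROM rasa/rasa:1.7.0

# Package installation needs root inside the rasa images.
USER root
RUN pip install --no-cache-dir jieba \
    && pip install --no-cache-dir git+https://github.com/mit-nlp/MITIE.git

# Drop back to the unprivileged user the base image runs as.
USER 1001
```

You would then reference this image in place of the default rasa image in docker-compose.override.yml.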

Hi @akelad, some of the links on the "Deploying your Rasa Assistant" page are broken:

https://rasa.com/docs/rasa/rasa-x/docs/installation-and-setup/openshift-kubernetes/

https://rasa.com/docs/rasa/rasa-x/docs/installation-and-setup/docker-compose-script/

https://rasa.com/docs/rasa/rasa-x/docs/installation-and-setup/docker-compose-manual/

Hi, I am also dealing with a Chinese bot. I have migrated a Rasa bot to Rasa X (Docker version).

I find that the Rasa inside Rasa X, i.e. the full rasa/rasa image (the -full tag), already includes jieba and mitie, so you don't need to install those packages. But if you build your custom action server from rasa/rasa-sdk, you do need to install them.

@yiouyou thanks for pointing that out, we’ll fix it ASAP

@KStephen1991 Thanks, that's good to know. But after installing the Docker version of Rasa X and uploading the built model, Rasa X doesn't seem to work with Chinese input. What do you mean by

I built a Chinese bot and ran it with rasa interactive. If you have migrated a Chinese bot to Rasa X successfully, would you mind sharing more about how to do it? Could you leave an email or a WeChat account where I can contact you?

Thanks,

Hi, @akelad and @KStephen1991

Following the masterclass videos on YouTube, here is what I've done to use Rasa X:

  1. successfully connected Rasa X with the git repository and uploaded the model

  2. all NLU and story data show up correctly in Rasa X

  3. to use the custom action service, I created the directories and added the files under /etc/rasa as shown in the masterclass

But, I can’t talk to my bot:

Here is the docker-compose.override.yml

As you can see, in the 'actions' folder, besides actions.py, I have grakn_kg.py, which has some functions to query the Grakn database. On the same instance as Rasa X, I have also installed and run the Grakn service.

This is why I was asking earlier what's wrong with my settings. I thought it might be due to the lack of mitie and jieba.

I've read the docs and watched the masterclass videos; however, I still don't know exactly how to run a Chinese bot with Rasa X.

By the way, my bot runs smoothly with the "rasa interactive" command line and has been tested on another instance, and I installed the Rasa X server without Rasa (pip3).

Thanks,

Hi, there may be a few possible causes:

  1. You use the Mitie pipeline below; besides uploading the model, did you remember to add total_word_feature_extractor_zh.dat?

```yaml
- name: "MitieNLP"
  model: "data/total_word_feature_extractor_zh.dat"
```

  2. When you built your action image, did you test it? Sometimes I find errors in my action container.

My WeChat is Stephen19910905; we can keep in touch and learn from each other.

Hi, @KStephen1991, here is the config.yml:

```yaml
# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: "zh"

pipeline:
- name: "MitieNLP"
  model: "data/total_word_feature_extractor_zh.dat"
- name: "JiebaTokenizer"
  dictionary_path: "data/jieba_userdict_zh.txt"
- name: "MitieEntityExtractor"
- name: "EntitySynonymMapper"
- name: "MitieFeaturizer"
- name: "SklearnIntentClassifier"

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
- name: "KerasPolicy"
  batch_size: 50
  epochs: 200
  max_training_samples: 300
- name: "MappingPolicy"
- name: "MemoizationPolicy"
  max_history: 5
- name: "FallbackPolicy"
  nlu_threshold: 0.3
  ambiguity_threshold: 0.1
  core_threshold: 0.3
  fallback_action_name: 'action_default_fallback'
- name: "FormPolicy"
```

What do you mean by "build your action image"? I've tested actions.py but haven't built a Docker image. Do I have to?

Thanks,

Thanks to @KStephen1991, I've created an action image. Here is what I did:

  1. Dockerfile:

```dockerfile
FROM rasa/rasa-sdk:latest
# Root is needed to pip-install packages in this image.
USER root
WORKDIR /app
RUN pip3 install grakn.client
# Destinations are absolute so they land under /app regardless of WORKDIR
# (a relative destination like "app/data/..." would end up in /app/app/...).
COPY ./actions /app/actions
COPY ./grakn_kg.py /app/grakn_kg.py
COPY ./data/nlu.md /app/data/nlu.md
COPY ./data/stories.md /app/data/stories.md
COPY ./data/jieba_userdict_zh.txt /app/data/jieba_userdict_zh.txt
COPY ./data/total_word_feature_extractor_zh.dat /app/data/total_word_feature_extractor_zh.dat
CMD ["start", "--actions", "actions"]
```

Notes: in the actions folder there is only the actions.py file; grakn_kg.py sits outside the actions folder, because otherwise Rasa can't find it when actions.py imports it.
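That import behaviour is plain Python module resolution: the action server runs with /app as its working directory, so a top-level "import grakn_kg" in actions.py only resolves if grakn_kg.py sits directly under /app. A self-contained sketch of the same situation (a temp directory stands in for /app):

```python
import importlib
import os
import sys
import tempfile

# Recreate the layout from the Dockerfile: grakn_kg.py at the package
# root, an actions/ package next to it.
app_dir = tempfile.mkdtemp()  # stands in for /app
os.makedirs(os.path.join(app_dir, "actions"))
open(os.path.join(app_dir, "actions", "__init__.py"), "w").close()
with open(os.path.join(app_dir, "grakn_kg.py"), "w") as f:
    f.write("PLACE = 'app root'\n")

# Running from /app puts it on sys.path, so the top-level import works.
sys.path.insert(0, app_dir)
helper = importlib.import_module("grakn_kg")
print(helper.PLACE)  # -> app root
```

If grakn_kg.py lived inside actions/ instead, actions.py would need a package-relative import (from . import grakn_kg).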

  2. docker-compose.override.yml as below:

```yaml
version: '3.4'
services:
  app:
    image: 'rasa-ticket:latest'
```

  3. then run

```shell
docker run rasa-ticket:latest
```

The output it shows seems right. However, after running

```shell
sudo docker-compose up -d
```

my bot is still not working.

My question NOW is how to test and verify the action image. @akelad @Tobias_Wochinger

What I can think of is to compare it against the "rasa run actions" command and "rasa shell". As you can see below (the right panel is rasa shell, the top left is the image, the bottom left is rasa run actions), when I first run "rasa run actions" the behavior is correct; however, when I run the image, there are errors (Couldn't connect to the server at http://localhost:5055/webhook):

The image's behavior is different from "rasa run actions".

Any clue to fix the problem?

Thanks for your attention!

@yiouyou are you trying to test this locally or with docker-compose? Can you send me the logs for the rasa-production container?

@akelad

Right now, the errors are shown as below:

```
root@iZZ:/etc/rasa# docker logs rasa_rasa-production_1
2020-02-25 16:48:56 ERROR    pika.adapters.utils.io_services_utils  - Socket failed to connect: <socket.socket fd=21, family=AddressFamily.AF_INET, type=2049, proto=6, laddr=('172.19.0.7', 36834)>; error=111 (Connection refused)
2020-02-25 16:48:56 ERROR    pika.adapters.utils.connection_workflow  - TCP Connection attempt failed: ConnectionRefusedError(111, 'Connection refused'); dest=(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('172.19.0.3', 5672))
2020-02-25 16:48:56 ERROR    pika.adapters.utils.connection_workflow  - AMQPConnector - reporting failure: AMQPConnectorSocketConnectError: ConnectionRefusedError(111, 'Connection refused')
2020-02-25 16:49:01 ERROR    pika.adapters.utils.io_services_utils  - Socket failed to connect: <socket.socket fd=24, family=AddressFamily.AF_INET, type=2049, proto=6, laddr=('172.19.0.7', 36860)>; error=111 (Connection refused)
2020-02-25 16:49:01 ERROR    pika.adapters.utils.connection_workflow  - TCP Connection attempt failed: ConnectionRefusedError(111, 'Connection refused'); dest=(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('172.19.0.3', 5672))
2020-02-25 16:49:01 ERROR    pika.adapters.utils.connection_workflow  - AMQPConnector - reporting failure: AMQPConnectorSocketConnectError: ConnectionRefusedError(111, 'Connection refused')
```

but are there logs after this? And can you answer my question about how this is deployed?

I've solved this problem. Some key points that might be useful:

  1. To install packages on top of the rasa/rasa or rasa/rasa-sdk Docker images, remember to add "USER root" before the installation steps.

  2. The connection between Rasa and Rasa X runs over a Docker bridge network; it's better to understand that first.

  3. If you need to run Grakn or other services, you can run them in Docker as well, then connect their containers to the 'rasa_default' Docker network that Rasa X generates.

  4. Within the 'rasa_default' network, all containers can reach each other by service name, such as 'app'.

  5. After setting up "Integrated Version Control" and repository.json, you still need to upload a trained model to start conversations in Rasa X.

  6. Be careful about the workdir of your app container: "actions.actions" means the actions.py inside the 'actions' folder.

  7. To debug Rasa X with a custom app, use "docker-compose logs" or "docker logs my-container" wisely. If you've successfully run your bot with Rasa but have no luck with Rasa X, don't worry: most of the time it's a simple connection error or one wrong folder location.

  8. Sometimes pip package inconsistencies are just inevitable, as in my case with rasa/tensorflow/grakn. As long as they're warnings and not errors, they're acceptable as long as things work. For beginners it's OK to stick with the version that works for you; there's no need to chase the latest version of everything all the time.
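To make point 3 concrete, an external service like Grakn can also be declared directly in docker-compose.override.yml, which puts it on the same network as the other Rasa X containers automatically. A sketch — the image tag and port are examples, adjust them to your Grakn version:

```yaml
version: '3.4'
services:
  app:
    image: 'rasa-ticket:latest'
  grakn:
    image: 'graknlabs/grakn:1.6.2'   # example tag, pin your own
    expose:
      - '48555'                      # Grakn's default gRPC client port
```

With this in place, code in the app container can reach the database at grakn:48555, and endpoints.yml can keep pointing the action endpoint at http://app:5055/webhook.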

At the end of this post, thanks a lot to the people who helped me in and out of the forum.

Hi,

I am going through exactly the same problem.

I have my Rasa server and action server running independently on different servers.

Now I will install Rasa X (and not the Rasa server) separately.

I have the following questions:

  1. How does communication between Rasa and Rasa X happen?

  2. Correct my understanding: the communication between Rasa and Rasa X is important because it lets us visualize the actual conversations coming into our bot (the Rasa server) and correct them in Rasa X. Is my understanding correct?

  3. Does this communication happen through a message broker like RabbitMQ? If I configure that, is that all I need for communication between the Rasa server and Rasa X?

  4. What if I don't care about the communication between Rasa and Rasa X, and instead manually upload a pre-trained model to Rasa X and then review the performance?

I don't want to manually upload a model to Rasa X every time I train it. Can this step be automated? I'm sure there must be an API to achieve this.

Could you please answer these questions, along with any additional points I might have missed?