How can I implement Rasa with Japanese language support?

Hi there,

I am a beginner with Rasa and have started a project that integrates Rasa NLU + Core with Rocket.Chat. Rasa, Rocket.Chat, and MongoDB are all created through Docker Compose and show no network problems.
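For context, the Compose setup looks roughly like this (a sketch only: image tags, service names, and ports are assumptions, not our exact file):

```yaml
# Hedged sketch of the docker-compose.yml wiring; adjust to your setup.
version: "3"
services:
  rasa:
    image: rasa/rasa:1.1.4-full
    ports:
      - "5005:5005"
  rocketchat:
    image: rocketchat/rocket.chat
    environment:
      - MONGO_URL=mongodb://mongo:27017/rocketchat
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo:4.0
```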

I created a Rasa Docker container as recommended by the Rocket.Chat documentation.

A) GitHub - RocketChat/rasa-kick-starter: Rocket.Chat connector kick starter for Rasa.AI

The bot communicated well with Rocket.Chat in English and displayed the same behavior as the example illustrated at the above URL. The Docker image used by A) is an older version (i.e. rasa/rasa:1.1.4-full), which I kept in order to avoid a compatibility issue with the Rasa version used by B) below.

Since the above Docker image does not include a Japanese tokenizer, I first started the container from image A), then used "docker exec" to log into the container as root and manually reinstall everything inside, as instructed by the Japanese module below.

B) GitHub - mahbubcseju/Rasa_Japanese

I also tried installing B) locally on Linux (not in the Docker container), and Rasa works well there: `rasa train` runs perfectly fine, and we can also converse with Rasa in Japanese without problems.

So I went back to the container, now reinstalled with the contents of B), and when I ran `rasa train` inside it, the following error appeared.

```
timestamp : W tensorflow/stream_executor/platform/default/dso_loader.cc:49] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
```

I could not figure out why, as we copied the exact same set of Python libraries into the container as on the local machine. (Since `rasa train` works locally, I expected the same result in the container too.)
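One way to pin down an environment mismatch like this is to diff the `pip freeze` output of the two machines. A minimal sketch (the package lists are illustrative, not our actual environments):

```python
def diff_freeze(local_lines, container_lines):
    """Return packages present in one environment but missing (or at a
    different version) in the other."""
    local, container = set(local_lines), set(container_lines)
    return sorted(local - container), sorted(container - local)

# Illustrative input; in practice feed it the lines of `pip freeze`
# captured on each machine.
only_local, only_container = diff_freeze(
    ["rasa==1.1.4", "tensorflow==1.13.1"],
    ["rasa==1.1.4", "tensorflow==2.3.1"],
)
print(only_local)       # ['tensorflow==1.13.1']
print(only_container)   # ['tensorflow==2.3.1']
```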

I am just trying to implement Rasa with Japanese language support. Is there a way to solve this error? Or, if there is a better way to implement this, could you show us how? (I could not find an official guide.)
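For reference, a Japanese-capable NLU pipeline in Rasa 1.x is normally declared in config.yml along these lines. The custom tokenizer class path below is hypothetical (use whatever component the Japanese module actually ships); CountVectorsFeaturizer and EmbeddingIntentClassifier are standard Rasa 1.x components:

```yaml
# Sketch of a config.yml for Japanese NLU in Rasa 1.x.
language: ja
pipeline:
  - name: "rasa_japanese.tokenizer.JapaneseTokenizer"  # hypothetical class path
  - name: "CountVectorsFeaturizer"
  - name: "EmbeddingIntentClassifier"
```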

I appreciate your support. Look forward to your feedback.

Hi @SctRasa! This error doesn't look related to using Japanese. In the line below, tensorflow appears to be looking for a CUDA driver for a GPU.

```
timestamp : W tensorflow/stream_executor/platform/default/dso_loader.cc:49] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
```

Hi @fkoerner, thank you for your reply. I understand that the error message is about the CUDA driver, but I could not resolve the Python dependency problem, so I wonder: is there a Docker image that supports Japanese Rasa for Rocket.Chat? Or is there any getting-started guide or tips for this implementation?

The module B) that I mention below is neither official nor actively maintained (it does not support newer Rasa versions).

B) GitHub - mahbubcseju/Rasa_Japanese

Of course, I would love to modify the Docker image of the Rocket.Chat Rasa module A) below directly, without using module B) at all, but there are some files I cannot retrieve that are needed to rebuild the Docker image of module A).

Currently, I launch module A) as a Docker process, manually log into the container using "docker exec -it --user=root {docker id} bash", and manually reinstall the Python packages inside, replacing them with those of module B), but then I hit the CUDA driver error.

A) GitHub - RocketChat/rasa-kick-starter: Rocket.Chat connector kick starter for Rasa.AI

Hi @SctRasa, could you share more details about what you mean by "the dependency problem in Python"? Also, the warning is related to CUDA, but rasa (tensorflow, really) should fall back on the CPU if the GPU cannot be used. So even though you have seen this warning, training should still work. Some questions for you:

  1. Aside from the warning you mentioned, are you seeing other error messages? Is `rasa train` producing a model that works as you expect?
  2. Could you share your tensorflow version on the docker container?
  3. Does the computer you’re running the docker container on have a GPU available and would you like to use it?
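As an aside, if you want to silence the CUDA probing entirely, TensorFlow can be forced onto the CPU before it is imported. A minimal sketch using two standard TensorFlow environment variables:

```python
import os

# Set these BEFORE importing tensorflow (or launching `rasa train`):
# hiding all GPUs stops TensorFlow from probing for CUDA libraries,
# and raising the C++ log level suppresses INFO/WARNING messages.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # no GPUs visible -> CPU only
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"   # hide INFO and WARNING logs
```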

Hi @fkoerner,

  1. `rasa train` causes the error message in the container but not locally. I cannot start Rasa as a server either. I just want to know whether there is a stable Docker image that supports a Japanese tokenizer for Rasa.
  2. rasa 1.1.4, tensorflow 1.13.1, tf-estimator-nightly==1.14.0.dev2019060501, python2.7; all requirements are installed through the file below.

The Docker image originally comes with a different set of Python modules (tensorflow==2.3.1), so I have to create the container from the image and then force-reinstall those modules, replacing them with the ones in requirement.txt.

  3. No, there is no GPU on the container.

```
timestamp : W tensorflow/stream_executor/platform/default/dso_loader.cc:49] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
```

The above error should not be fatal (i.e. it should not result in rasa train failing). Since you don’t have a GPU, tensorflow should fall back onto the CPU.
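To double-check that the CUDA runtime really is absent from the container (which would make the warning expected), here is a stdlib-only sketch of roughly what TensorFlow's loader is attempting:

```python
import ctypes.util

# find_library searches the same system paths the dynamic loader uses.
# On a CPU-only container it returns None for "cudart", which is exactly
# why TensorFlow prints the "Could not load dynamic library" warning.
cudart = ctypes.util.find_library("cudart")
print("CUDA runtime:", cudart or "not found (CPU-only; the warning is expected)")
```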

Sadly, I do not know of any Rasa Docker images that include a Japanese tokenizer. However, there are some options, and I will happily help you figure out your problem if you can give me more information. I need from you:

  1. The full output from rasa train, including all other error messages and logs
  2. A description of what exactly isn’t working about rasa train in the docker image. Does it crash? Does it hang? Does it not produce a model?