Rasa X: [Kubernetes] Custom Pipeline Element

Hi Rasa X Team, Hi everyone

I would like some advice on how to add a custom component to the Kubernetes stack (I’m new to Kubernetes).

What I understand so far is that configurations are defined in rasa-config-files-configmap.yaml but are deployed with rasa-x-deployment.yaml. Rasa then fetches these configurations from Rasa X thanks to the argument given to the Rasa container: --config-endpoint "http://{{ include "rasa-x.fullname" $ }}-rasa-x:{{ default 5002 $.Values.rasax.port }}/api/config?token=$(RASA_X_TOKEN)"

Is that correct?
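If I read the chart correctly, the bit I’m referring to sits in the rasa container spec of rasa-x-deployment.yaml, roughly like this (simplified sketch; only the --config-endpoint argument is copied verbatim from the template, the surrounding fields are illustrative):

# Simplified sketch of the rasa container in rasa-x-deployment.yaml
containers:
  - name: rasa
    args:
      - "--config-endpoint"
      - "http://{{ include "rasa-x.fullname" $ }}-rasa-x:{{ default 5002 $.Values.rasax.port }}/api/config?token=$(RASA_X_TOKEN)"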

I suppose that config.yml is not in rasa-config-files-configmap.yaml because it is something the user modifies through the Rasa X UI, isn’t it?

Do I need to create a custom image of the Rasa SDK with my custom component added to it, somewhat like in Rasa X: Custom Pipeline Element?

I’m using: Kubernetes (rasa-x-1.2.7)

Thanks !

TL;DR:

Build your own Rasa Docker image as described in Rasa X: Custom Pipeline Element - #2 by flythe.

In the values.yml used when installing the Helm chart, override the name and tag properties in the rasa section. Example for Google Container Registry:

rasa:
    name: "eu.gcr.io/PROJECT_NAME/REPO_NAME"
    tag: "TAG_OF_YOUR_IMAGE"

Long answer:

I’m going to answer my own question, starting with the big picture of what I was trying to do.

I wanted to use my own existing NLU server (based on a Flask server). Why?

  • Because my company uses some internal resources that cannot be ported (for now?)
  • Because we already have this working
  • Because we want to use the dialogue management part of Rasa

Being new to Rasa, Kubernetes and Helm made things really hard, but the one-step installation on the cloud was appealing, so I stuck with it.

My first attempt: Doing it properly

I tried to use rasa-x as a subchart and pass it the URL of my service through the extraEnvs property in the rasa section of the values.yml file. This dream was shattered by the following comment on the Helm GitHub:

Second attempt: Doing it dirty

I try my best to avoid forking things, but using a solution like helmfile or another tool on top of Helm is a no-go for me. The stack is already hard to understand for a newbie; adding another layer of software is too much.

So I decided to fork the repo and add my own service to the chart, modelled exactly on app-deployment.yaml and app-service.yaml (a sketch of the service template is further below). I can now pass the URL of my service by adding the following lines to the rasa-deployment.yml file, where the env vars are defined:

- name: "CUSTOM_PROCESS_HTTP_URL"
  value: "http://{{ include "custom-process.host" $ }}:{{ $.Values.customProcess.port }}"

And in _helpers.tpl:

{{- define "custom-process.host" -}}
  {{- include "rasa-x.fullname" . -}}-custom-process
{{- end -}}
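For completeness, the service template I added next to the deployment mirrors app-service.yaml. A minimal sketch could look like this (the selector and port values are illustrative and must line up with your custom-process deployment template and values.yml):

# custom-process-service.yaml -- minimal sketch; selector labels are illustrative
apiVersion: v1
kind: Service
metadata:
  name: {{ include "custom-process.host" . }}
spec:
  type: ClusterIP
  selector:
    app: {{ include "custom-process.host" . }}
  ports:
    - protocol: TCP
      port: {{ .Values.customProcess.port }}
      targetPort: {{ .Values.customProcess.port }}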

With this, the custom component can read the URL of my service from the environment and proxy requests to it. The "train" method is not implemented, because we train our NLU with our own tools.
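On the Rasa side, the custom component is then referenced in config.yml like any other pipeline element. Assuming the class is baked into the custom image as, say, custom_nlu.CustomNLUProxy (module and class names are hypothetical), the entry would look like:

# config.yml -- hypothetical pipeline entry; module and class names are placeholders
language: en
pipeline:
  - name: "custom_nlu.CustomNLUProxy"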

The last step was to build my own Rasa Docker image as described in Rasa X: Custom Pipeline Element - #2 by flythe.

To use this custom image, I overrode the name and tag properties in the rasa section of the values.yml used when installing the Helm chart. Example for Google Container Registry:

rasa:
    name: "eu.gcr.io/PROJECT_NAME/REPO_NAME"
    tag: "TAG_OF_YOUR_IMAGE".

Everything worked as expected =) (I should maybe add some kind of dependency so that Rasa waits for my service to be up; one idea is sketched below.)
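One untested idea for that dependency: an initContainer on the rasa deployment that blocks until the custom-process service answers. The image and command below are illustrative:

# Untested sketch: make the rasa pod wait for the custom-process service
# (would go into the pod spec of rasa-deployment.yml)
initContainers:
  - name: wait-for-custom-process
    image: busybox:1.31
    command:
      - sh
      - -c
      - |
        until nc -z {{ include "custom-process.host" . }} {{ .Values.customProcess.port }}; do
          echo "waiting for custom-process"; sleep 2
        done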

Conclusion:

Pros:

  • Documentation is good!
  • Using the Helm chart is easy
  • Most things work out of the box (I did not look into SSL and ingress)

Cons:

  • I was missing some documentation on the Kubernetes side about how to add a custom component. Having to extend the Docker image is not a good solution in my opinion.
  • Not being able to use rasa-x as a subchart because of missing functionality in Helm was saddening ^^" (but it seems something is in progress, Lua scripts?)

Idea:

  • Learning how to deploy the stack on GCP was a bit hard but manageable ^^" (it was also my first time on GCP; AWS is too expensive for testing/PoC purposes). Some tutorials on how to deploy the stack to the main Kubernetes providers would be interesting ^^