Running Rasa with TensorFlow on Kubernetes

Hi guys, I'm trying to get a small Rasa application for matching intents running on Kubernetes. I use a pretty simple Dockerfile to build my image, which I pull from a private registry to my pod in Kubernetes. The deployment in Kubernetes works: I can access the pod via 'kubectl exec'. But if I, for example, try to check the Rasa version with 'rasa --version', I get an Illegal Instruction error every time. I've already heard that this happens when AVX is not supported by the VM and that it's caused by TensorFlow. Is there perhaps a better way of deploying Rasa to Kubernetes? I just started with Rasa, Docker and Kubernetes, so I've run out of ideas.

Dockerfile

FROM python:3

RUN pip install --upgrade pip 
RUN pip install rasa

kubernetes_deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nlp-recruiting-app
  namespace: nlp-recruiting
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nlp-recruiting-app
    spec:
      containers:
      - name: nlp-recruiting-app
        image: myimageongitlab
        command: ["/bin/bash", "-ce", "tail -f /dev/null"]
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: gitlabregistry-secret

Can you post the detailed error? And can you try using FROM python:3.6?

Thanks for your answer.

The error is "Illegal instruction (core dumped)".

When using "FROM python:3.6", the error gets more detailed:

“2019-06-19 08:28:41.459576: F tensorflow/core/platform/cpu_feature_guard.cc:37] The TensorFlow library was compiled to use SSE4.1 instructions, but these aren’t available on your machine. Aborted (core dumped)”

I guess it's an issue between TensorFlow 1.13 and the Kubernetes node's CPU in this case.

Do your CPUs support AVX instructions (referring to https://github.com/tensorflow/tensorflow/issues/17411#issuecomment-450512554)?

It seems they don't support the AVX instructions. This is what "grep flags /proc/cpuinfo" returned:

“flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology eagerfpu pni cx16 x2apic hypervisor lahf_lm …”
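For reference, a quick way to check specifically for the instruction sets TensorFlow cares about (just a sketch; run it on the node or inside the pod, and empty output means the flag is not advertised by the virtual CPU):

# list only the SIMD-related flags the (virtual) CPU exposes; each appears once
grep -o -w -E 'avx|avx2|sse4_1|sse4_2' /proc/cpuinfo | sort -u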

Could it be an option to use "pip install rasa_nlu[spacy]" instead of "pip install rasa", or do I also need AVX support for that?

Hi,

To work around the AVX issue, I ended up running the following commands after following the recommended installation instructions:

pip uninstall tensorflow -y
conda create --name glpi-rasax python=3.6.8
conda activate glpi-rasax
conda install -c anaconda tensorflow==1.13.1
conda deactivate
export PYTHONPATH="${HOME}/anaconda3/envs/glpi-rasax/lib/python3.6/site-packages"
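To sanity-check that the conda-built TensorFlow is the one being picked up afterwards (a minimal check, assuming the paths above):

# should print 1.13.1 and a path inside the glpi-rasax conda env
python -c "import tensorflow as tf; print(tf.__version__, tf.__file__)"
rasa --version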

@akelad @Tobias_Wochinger Hey guys, any thoughts on this? I have a box with no AVX on it, running Docker. The box is Red Hat Linux, running Docker CE. I pulled your Rasa 1.1.14 Docker image, which installs TensorFlow 1.13.1. It works great on Windows and on Amazon Ubuntu 16.04, but the moment I try to run the container (built on Windows) on the Linux VM with no AVX, this comes up:

F tensorflow/core/platform/cpu_feature_guard.cc:37] The TensorFlow library was compiled to use SSE4.1 instructions, but these aren’t available on your machine. Aborted (core dumped)

So in the requirements documentation for "Running Rasa on Docker" it would be nice to mention that, for folks using the tensorflow embedding pipeline, the default Rasa image will not work on such CPUs.

In that same image, pip uninstall tensorflow also does not work! It errors out around the Python 3.6 site-packages.

Any ideas which Python wheel works with Python 3.6.8 and TensorFlow 1.13.1 on Debian 9, on a CPU with no AVX? (For non-conda users as well please, using the equivalent of a virtualenv.)

Thank you kindly for your support.


Well, the answer to my own woes was to go through the awesome process of using Bazel and building a TensorFlow 1.13.1 wheel from source. Note that the build image must be Ubuntu, as the Debian Docker image can't build this permutation. I get a couple of warnings, but it works (for non-prod use only). TensorFlow does provide an Ubuntu 16 image, so start with that, then set up a virtualenv in it, install Python 3.6 and run from there. You may need to install a couple of minor things. @akelad @Tobias_Wochinger I would recommend putting a note in the prerequisites documentation that the CPU must support AVX or SSE4.1 instructions to work with the provided Docker image. Without that, the Docker image needs to be customized: uninstall TensorFlow inside it and install a wheel that other folks compiled for a matching configuration or, worst case, build from source, since the gcc version also matters.
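For anyone going down the same road, the build roughly looked like this (a sketch only; the checkout tag, the --copt flags and the output paths are what I'd expect for a CPU-only, no-AVX build, and ./configure asks its questions interactively):

# get the TensorFlow 1.13.1 sources
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout v1.13.1

# answer the interactive questions; say no to CUDA for a CPU-only build
./configure

# build the pip package without AVX/SSE4 instructions baked in
bazel build -c opt \
  --copt=-mno-avx --copt=-mno-avx2 --copt=-mno-sse4.1 --copt=-mno-sse4.2 \
  //tensorflow/tools/pip_package:build_pip_package

# package the wheel and install it into the active virtualenv
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl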


Yeah, I'm afraid there's not much we can do about that, since it's the tensorflow package that requires those instructions. In general, TensorFlow should be compiled for the particular CPU instructions you're running on. We'll take your feedback about the docs into consideration, though.


Well, that seemed to be what was needed. I ended up getting an Ubuntu 16.04 TensorFlow build image (Debian does not work) and had to configure Bazel to build directly on that server, with no GPU instructions. Using pre-compiled wheels was also problematic, as a particular version of gcc is required. So yes… compile from source (not fun). Better to check the VM ahead of time, as this is one place where Docker trips up, specifically because of TensorFlow's need for AVX2 and SSE4.2. I don't think this problem occurs with spaCy, though. Thanks


Thanks for sharing your solutions. I also came across the idea of compiling TensorFlow from source, but dropped it because of my lack of experience. As a workaround, and to get rid of the dependency on TensorFlow, I currently use Rasa NLU 0.15.1 together with spaCy.
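In case it helps anyone else, roughly what that spaCy-based setup looks like (a sketch; the config and data file names are just examples, and spacy_sklearn is the standard spaCy pipeline template in Rasa NLU 0.15):

# install Rasa NLU with the spaCy backend (no TensorFlow dependency)
pip install 'rasa_nlu[spacy]==0.15.1'
python -m spacy download en

# minimal NLU config using the spaCy pipeline
cat > nlu_config.yml <<'EOF'
language: "en"
pipeline: "spacy_sklearn"
EOF

# train on your NLU data (path is an example)
python -m rasa_nlu.train --config nlu_config.yml --data data/nlu.md --path models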