Why is tensorflow required to deploy REST channel

Hi!

I’m new to Rasa and machine learning in general: I got it up and running and built a relatively simple bot with it, but I’m still fuzzy on the details of how exactly it works.

Because the CPUs on my own computers do not support AVX, and I wanted to use the supervised pipeline, I am at the moment doing my development on a VPS.

I assumed that once I had trained a model I was happy with, I would be able to deploy the bot (using the REST channel, for instance) on my outdated laptops. In other words, I assumed tensorflow would only be required to train the bot, not to run it. So you can imagine I was slightly disappointed when

docker run -v $(pwd)/models:/app/models rasa/rasa:latest run

gave me the good old

The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine. :sob:

My question is therefore threefold:

  • Could someone explain why tensorflow is required at runtime?
  • Is there any way around it? :imp:
  • Or, as an alternative, could one have a Rasa Core runtime depending on a tensorflow version prior to 1.5, which would not need AVX support (I checked on GitHub and couldn’t find one), but using a model trained with the latest stable versions of Rasa and tensorflow?

Thank you for your time!

Best regards.


How does the warning:

The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine

stop you from using Rasa?

Hi, thanks for asking.

Because the docker container exits immediately after, I’m afraid this is more than a warning:

$ docker run -p 5005:5005 -v $(pwd)/models:/app/models rasa/rasa:latest run

2019-08-03 19:50:33.942955: F tensorflow/core/platform/cpu_feature_guard.cc:37] The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine.

$ docker ps -a

CONTAINER ID   IMAGE              COMMAND      CREATED         STATUS

54cc5db882ae   rasa/rasa:latest   "rasa run"   3 minutes ago   Exited (139) 3 minutes ago
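(For anyone else debugging this: exit status 139 is 128 + 11, i.e. the process died on signal 11, which is how TensorFlow aborts on an unsupported CPU. On a Linux host you can check the CPU feature flags yourself; the snippet below is a minimal sketch that assumes a Linux machine exposing /proc/cpuinfo.)

```python
# Minimal sketch: check whether the host CPU advertises AVX support.
# Assumes a Linux host, where /proc/cpuinfo lists the CPU feature flags.

def cpu_supports_avx(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                # The flags line holds space-separated feature tokens,
                # so "avx" only matches the exact AVX flag (not "avx2").
                return "avx" in line.split()
    return False

if __name__ == "__main__":
    print("AVX supported:", cpu_supports_avx())
```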

Are you implying it should work?

Yes, it should work. I use tensorflow a lot and get that warning all the time.

I tried running your command and never got that warning. Here’s the screenshot:

[screenshot]

The models were created with the following command:

docker run -v $(pwd):/app rasa/rasa init --no-prompt

Can you try starting the Rasa server with the model created by rasa init?

I made a new empty directory and executed the command. Sadly, I get the same result:

$ docker run -v $(pwd):/app rasa/rasa init --no-prompt
2019-08-04 15:15:27.666522: F tensorflow/core/platform/cpu_feature_guard.cc:37] The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine.
$ ls -la
total 0
drwxr-xr-x 2 KSD staff 68 Aug 4 17:14 .
drwxr-xr-x 7 KSD staff 238 Aug 4 17:14 ..

Actually, even docker run -v $(pwd):/app rasa/rasa --version produces the same result…

I’m sorry.

It turns out I misread your error.

So, yes, the standard tensorflow binary is compiled to use AVX instructions, which means your CPU must support this instruction set.

The workaround is to compile tensorflow yourself…

But I would suggest first trying the tensorflow build from Anaconda locally. You can also look at this post for more background info. If that works, you can modify the docker image to uninstall the bundled tensorflow and install the Anaconda version instead.

Ok, I will give it a shot.

I took a deeper look at the source code and can now answer my own questions. I hope this will be helpful to anyone starting out with Rasa who is wondering the same (!!! do correct me if any of the information below is incorrect !!!):

  • Why is tensorflow required at runtime?

Contrary to my assumption, Rasa does not produce models that are independent of the machine learning library used to train them. If you train a model using tensorflow, the model becomes dependent on the tensorflow library.
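A toy illustration of the point (this is not Rasa’s actual persistence code): Python’s pickle stores the module path of each object’s class, so loading a serialized model triggers an import of the library that defined it. The "fakelib" module name below is made up for the demonstration and stands in for a library like tensorflow.

```python
import pickle
import sys
import types

# Create a throwaway "library" module holding a model class, pickle an
# instance of it, then remove the module to mimic loading the model on a
# deployment host where that library is not installed.
fakelib = types.ModuleType("fakelib")

class Model:
    pass

Model.__module__ = "fakelib"   # pretend the class lives in the library
fakelib.Model = Model
sys.modules["fakelib"] = fakelib

blob = pickle.dumps(fakelib.Model())   # "training": serialize the model

del sys.modules["fakelib"]             # "deployment": library is gone

try:
    pickle.loads(blob)                 # unpickling tries to import fakelib
except ModuleNotFoundError as exc:
    print("cannot load model:", exc)
```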

  • Is there any way around it?

Nope, this is at the heart of Rasa’s architecture.

When Rasa starts the REST server, for instance, it attempts to load the model and validate it using the validate_requirements method in rasa/nlu/components.py. That method in turn calls get_component_class in rasa/nlu/registry.py to assert that the libraries required by the model are available on the system. If validation passes, Rasa loads the appropriate pipeline of components (which depend on machine learning libraries like tensorflow) to handle the model (see rasa/nlu/model.py).
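The validation pattern can be sketched roughly as follows. The class names, the registry dict, and the method bodies here are illustrative simplifications, not Rasa’s actual implementation:

```python
import importlib

# Simplified sketch of Rasa-style component validation; names and bodies
# are illustrative, not the real code in rasa/nlu/components.py.

class Component:
    # Python packages that must be importable for this component to run.
    required_packages = []

class EmbeddingIntentClassifier(Component):
    required_packages = ["tensorflow"]

# Stands in for the lookup done by get_component_class in rasa/nlu/registry.py.
registry = {"EmbeddingIntentClassifier": EmbeddingIntentClassifier}

def validate_requirements(component_names, registry):
    """Raise if any component's required packages cannot be imported."""
    missing = []
    for name in component_names:
        component_class = registry[name]
        for package in component_class.required_packages:
            try:
                importlib.import_module(package)
            except ImportError:
                missing.append((name, package))
    if missing:
        raise ImportError(f"Not all required packages are installed: {missing}")
```

So even though training is done, simply loading the model walks this pipeline definition and imports each component’s libraries, which is why the server needs tensorflow at runtime.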
