Deployment of Rasa Chatbot with Rest API in GCP

Hi,

I tried to deploy my Rasa Chatbot following this guideline

To access Rasa from my own website I am providing a REST API, which I am accessing locally at http://localhost:5005/webhooks/rest/webhook

and I also implemented custom actions
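For the custom actions to be reachable, my understanding (treat this as an assumption, not verified) is that endpoints.yml points the action endpoint at the action server's default port, 5055:

```yaml
# endpoints.yml — action server running locally on its default port
action_endpoint:
  url: "http://localhost:5055/webhook"
```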

Locally everything is running fine using these two commands:

rasa run -m models --enable-api --cors "*" --debug
python -m rasa_core_sdk.endpoint --actions actions
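Once both processes are up, the REST channel above can be exercised by POSTing a JSON body with a sender id and a message. A minimal sketch (the sender id "test_user" is just a placeholder):

```python
import json

# Local Rasa REST channel endpoint, as used above
REST_WEBHOOK = "http://localhost:5005/webhooks/rest/webhook"

def build_rest_payload(sender_id, message):
    # The REST channel expects a JSON body with "sender" and "message" keys
    return json.dumps({"sender": sender_id, "message": message})

# With the server running, POST this body to REST_WEBHOOK, e.g. with requests:
#   requests.post(REST_WEBHOOK, data=build_rest_payload("test_user", "hello"),
#                 headers={"Content-Type": "application/json"})
print(build_rest_payload("test_user", "hello"))
```

The response is a JSON list of bot replies, one object per message.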

But now I would like to deploy everything. I created a Docker image and uploaded it to the Google Cloud Platform, but I am stuck on how to access the API through the link GCP provides for me.

Does anyone have experience with this?

Kind regards, Stephanie

Hi Stephanie.

I’m having the same issues so will be following the post.

Jerome

@Jmg007 I'm not sure whether the Rasa Open Source version is deployable at all. Right now I'm trying to upload all parts of my application to an Ubuntu machine on AWS, but I'm not a professional DevOps engineer, so I am having problems with that as well…

Check out the pricing model. It suggests that Rasa Open Source is not deployable, which I cannot really believe. There are certainly a lot of permission and requirement inconsistencies and problems when deploying the current version of Rasa OS…

You can also see that Deployment and Installation Support is exclusive to Rasa Enterprise subscribers. I can imagine that is the reason why deployment and installation posts in the Rasa Community do not get a lot of attention from the Rasa dev team.


Hi @StephanieHohenberg

Thanks for that image. It would definitely be good to get some clarification from the Rasa team. In the "Rasa Open Source" column, I think "Connect to messaging channels and APIs" means that Rasa will still run on AWS, Google Cloud, etc., but it would be good to know for sure.

I'm using Google Cloud Platform and Google App Engine, which makes it pretty easy. What I did was download the Google Cloud SDK to my local machine. Using the SDK means I can run the gcloud commands and basically deploy my project from my desktop to Google Cloud. I'm not an expert, but running "gcloud app deploy" from the folder on my desktop containing the Rasa project means it is automatically built as a Docker image (I'm pretty sure) that will run on App Engine.
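The flow described above boils down to a couple of commands (the project id my-rasa-project is a placeholder; this assumes the Google Cloud SDK is already installed):

```shell
# Authenticate and select the GCP project (placeholder id)
gcloud auth login
gcloud config set project my-rasa-project

# From the folder containing the Rasa project and its Dockerfile:
gcloud app deploy
```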

And then all that really needs to be done is to take care of the appropriate endpoints (which I'm still trying to work out as well).

The action endpoint URL "http://localhost:5055/webhook" probably just needs to be changed to the URL of whatever server is running the actions. I'm just a bit confused by the endpoints.yml file: does the models URL also need to be set? And in the credentials.yml file the Rasa URL is url: "http://localhost:5002/api", and I'm not sure exactly what that points to.
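As far as I understand (treat this as an assumption): only action_endpoint has to change for a remote action server; the models section is only needed if the trained model is pulled from a remote URL; and the url under rasa: in credentials.yml is for the Rasa X API, so it can be left out if you are not using Rasa X. A sketch with placeholder hostnames:

```yaml
# endpoints.yml — hostnames are placeholders
action_endpoint:
  url: "https://my-action-server.example.com/webhook"

# Optional: only if the trained model is served from a remote URL
# models:
#   url: "https://my-model-host.example.com/models/latest.tar.gz"
```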

And a major issue I'm having is figuring out what to include in the Dockerfile. At the moment my Dockerfile looks like:

FROM rasa/rasa
ENV BOT_ENV=production
COPY . /var/www
WORKDIR /var/www
RUN pip install rasa
RUN rasa train
RUN rasa run -m models --enable-api --cors "*" --debug
RUN rasa run actions
ENTRYPOINT ["rasa", "run", "-p", "8080"]
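One likely problem with that Dockerfile (an assumption, not verified against your build logs): RUN executes at build time, so a `RUN rasa run ...` line starts a server that never returns and stalls the build, and a container only runs one long-lived process anyway. The usual split looks roughly like this, with the action server in its own container or service:

```
# Build-time steps use RUN; the long-running server goes in ENTRYPOINT
FROM rasa/rasa
ENV BOT_ENV=production
COPY . /var/www
WORKDIR /var/www
RUN rasa train
ENTRYPOINT ["rasa", "run", "--enable-api", "--cors", "*", "-p", "8080"]
# rasa run actions would run in a second container built from rasa/rasa-sdk
```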

This isn't working though, and when I try to upload using "gcloud app deploy" I'm getting an error at the end:

PermissionError: [Errno 13] Permission denied: 'models/20200117-101125.tar.gz'
The command '/bin/sh -c rasa train' returned a non-zero code: 1
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
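One possible cause of that PermissionError (again, an assumption): `COPY . /var/www` copies a locally trained archive under models/ whose permissions the build user cannot overwrite when `rasa train` writes its own output. Excluding local models from the build context sidesteps this, since the model is then trained fresh inside the image:

```
# .dockerignore next to the Dockerfile
models/
```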

It's great that Rasa runs locally, and those Masterclass videos have been good. But obviously most people will want to run Rasa on a server, whether that's Heroku, AWS, Google Cloud, etc. It probably wouldn't be too hard for the Rasa team to update the docs for the common platforms (e.g. AWS, Google) and explain how to modify endpoints.yml and credentials.yml and run the Docker files to get things working.

The license for the open source version is Apache. See https://github.com/RasaHQ/rasa/blob/master/LICENSE.txt. So you can deploy it if you can get it to work.

This thing has more dependencies than anything I’ve ever seen.

My Dockerfile used to look like this:

FROM rasa/rasa-sdk:latest
ENV BOT_ENV=production
COPY . /var/www
WORKDIR /var/www
RUN pip install rasa==1.3.0a1
RUN rasa train
ENTRYPOINT ["rasa", "run", "-p", "8080"]

as in the guideline of this Medium article.

But I got stuck a lot, and after wasting a lot of time I switched from Docker and GCloud to an AWS Ubuntu machine. But I am stuck there as well…

Hey Stephanie,

I got my project uploaded to Google Cloud Platform, but it's still not running properly; hopefully I'm getting there, though.

I got some errors as well.

I noticed a couple of things (I definitely could be wrong)…

I don't think the line "RUN rasa train" needs to be in the Dockerfile. I think if you train the model locally, then when you deploy you can point to the model from the endpoints.yml file. So mine is pointing to:

models:
  url: "http://my_project.appspot.com/?hl=en-GB/models/default_core@latest"

I also would remove the version pin rasa==1.3.0a1 because it's probably not the latest.

Basically what you want to do to run it is something like:

rasa run -vv --enable-api --log-file out.log --endpoints endpoints.yml --credentials credentials.yml

So the endpoints for the model and the actions point to your server URL, and the credentials can be used for other stuff, e.g. I'm using them for setting up websockets.

So my Dockerfile is pretty basic and looks like:

FROM rasa/rasa
ENV BOT_ENV=production
COPY . /var/www
WORKDIR /var/www
RUN pip install rasa
ENTRYPOINT ["rasa", "run", "-vv", "--enable-api", "--endpoints", "endpoints.yml", "--credentials", "credentials.yml", "-p", "8080"]
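For completeness: deploying a custom Dockerfile like this to App Engine needs an app.yaml next to it selecting the flexible environment with a custom runtime (this matches my setup; yours may differ):

```yaml
# app.yaml — App Engine flexible environment, build from the local Dockerfile
runtime: custom
env: flex
```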

I'm also not great with DevOps, so I could be very wrong.

Cheers.