Core Server Security

I’m working on deploying Core + NLU in Docker containers. I am currently using the Docker images provided by Rasa.

I have a front-end (mrbot-ai/rasa-webchat on GitHub, a chat widget that is easy to connect to chatbot platforms such as Rasa Core), which works nicely. Because it's client-side JavaScript, the Core server has to be exposed, which doesn't seem like the best of ideas. When I move it behind an internal load balancer in AWS, it times out, which makes sense.

How are other people doing this? I see Rasa supports JWT, which is one idea, but how can I keep my Core server from being exposed? I assume I need to put a layer between the front-end and Core that just shuttles the requests over to the Core server, but this is where I'm confused.

I see the documentation says: "We recommend to not expose the Rasa Core server to the outside world but rather connect to it from your backend over a private connection (e.g. between docker containers)."

Any ideas are welcome!

We put an API gateway in front, which takes care of security.

You have some good options: rate limiting, JWT verification, and certificate chaining are some of the techniques.
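To make the rate-limiting idea concrete, here is a minimal token-bucket sketch in Python. This is purely illustrative (the class name and parameters are my own, and a real gateway like Kong or AWS API Gateway would do this for you); it just shows the mechanism a gateway applies per client.

```python
import time


class TokenBucket:
    """Per-client token bucket: refills `rate` tokens/sec, bursts up to `capacity`.

    `now` is injectable so the bucket can be tested with a fake clock.
    """

    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity       # start full: allow an initial burst
        self.now = now
        self.last = now()

    def allow(self) -> bool:
        """Return True if this request is within the limit, else False."""
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway would keep one bucket per API key or client IP and reject requests (HTTP 429) when `allow()` returns False.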

https://konghq.com/kong/

Thanks Souvik! Do you use a custom Rasa Core setup, or just the base one? The front-end uses WebSockets, and I'm trying to figure out how to stick an API gateway in between and forward the traffic to the Core server. We are using AWS, so maybe I could use API Gateway there, since it now supports WebSockets among other features.

Basically, right now the front-end connects to /socket.io on the Core server. Would you set up a similar route in your API gateway?

We use a custom Rasa Core setup. If you're on AWS, it's probably best to stick with their API Gateway, which can indeed consume WebSocket traffic via its WebSocket API.

For us, it was REST, so there wasn't much of a critical security discussion. Keep in mind that WebSockets come with some security vulnerabilities of their own.
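For anyone following the REST route: Rasa's REST input channel accepts a simple JSON payload at `/webhooks/rest/webhook`. Here's a minimal stdlib-only sketch; the internal hostname `core` and the helper names are my assumptions, not anything from this thread.

```python
import json
from urllib import request

# Assumed internal hostname for the Core container (e.g. a Docker network
# alias or a Route53 record behind an internal ALB).
RASA_URL = "http://core:5005/webhooks/rest/webhook"


def build_message(sender: str, text: str) -> bytes:
    """Build the JSON body Rasa's REST channel expects: sender id + message text."""
    return json.dumps({"sender": sender, "message": text}).encode()


def send_message(sender: str, text: str):
    """POST a user message to Core and return the bot's reply messages.

    Requires a running Core server; shown here only to illustrate the call shape.
    """
    req = request.Request(
        RASA_URL,
        data=build_message(sender, text),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The response is a list of messages, each with the recipient id and the bot's text (or other payloads such as buttons).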

Thanks that’s very helpful - much appreciated and gives me some things to think about!


Hi @brodie11,

Did you use the WebSocket API from AWS between rasa-webchat and the load balancer for Rasa Core? Could you please tell me what route selection expression and route key you used in the WebSocket API? "user_uttered" didn't work for me. Maybe I am missing something.

Thanks,

Achinta

I ended up going a slightly different route. I originally went with WebSockets, but our mobile team wanted to use REST; I'm also less familiar with WebSockets and was concerned about how I would support them if something broke.

I have built Docker containers for everything (front end, API, Core, NLU, actions) and use AWS ECS to host them. An internal ALB hosts the Core, NLU, and action containers; I use host-based routing with Route53 subdomains to route traffic to the various containers. An internet-facing ALB hosts the front-end and the API, though it is locked down to our internal network, so it's not accessible to the world.

I decided on an API layer between the front end and Core, so I don't have to expose the Core server. We are using AWS Cognito with SAML for identity, and the API uses JWTs to validate the sender value that comes through from the front-end. That way, if someone tries to tamper with the sender, the API will detect that it doesn't match the JWT. The other advantage is that I can re-use the same API for our mobile application, or anything else that wants to plug in.
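A rough stdlib-only sketch of that sender-vs-JWT check. This is not the poster's actual code: a real deployment would use a library like PyJWT and verify Cognito's RS256 signatures against its published JWKS, whereas this demo uses a shared HS256 secret, and all names are illustrative.

```python
import base64
import hashlib
import hmac
import json


def _b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def make_token(claims: dict, secret: bytes) -> str:
    """Mint a demo HS256 JWT (for testing the check below only)."""
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url_encode(json.dumps(claims).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url_encode(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"


def sender_matches_token(token: str, secret: bytes, sender: str) -> bool:
    """Verify the HS256 signature, then check the 'sub' claim equals `sender`.

    This is the tamper check: a forged sender value won't match the signed claim.
    """
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return False
    signing_input = f"{header}.{body}".encode()
    expected = _b64url_encode(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(_b64url_decode(body))
    return claims.get("sub") == sender
```

The API layer would run this check on every request before forwarding the message to Core over the private network.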

On ECS, I can vary the number of deployed containers per task definition, so I can scale differently. If one of the components takes a bigger hit, I can scale that one without having to duplicate all the other stuff. Let me know if that makes sense.

Thanks for the detailed reply @brodie11. Is it okay to have Core, NLU, and actions in the same container? Did you use an open-source chat widget (such as scalableminds/chatroom on GitHub, a React-based chatroom component for the Rasa stack) that uses REST instead of WebSockets?

I don't have much experience with WebSockets, but I will play with it a bit more to make it work (hopefully).