The best way to deploy Rasa X using multiple local Linux instances

I want to deploy a Rasa chatbot using 3 Linux instances. What would be the best way to go about it: Docker, Helm, or Kubernetes?

The main goal of the exercise is to achieve fault tolerance. Let’s say one of the Linux machines goes down … is there any way to move the entire conversation to any of the other machines? I just want to know the best approach for this.

P.S. I’m not even sure whether I want to go with Rasa X or just Rasa Open Source.

Thanks in advance, Rishab

You might want to look into the helm chart installation using a Kubernetes cluster.

Start with a single-node cluster and then add the remaining two nodes, or create a three-node cluster to start with and just deploy on it.
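Assuming you already have a Kubernetes cluster spanning your three machines, the Helm installation looks roughly like this. This is a sketch based on RasaHQ's public `rasa-x-helm` chart; the namespace name and the `values.yml` file (which would hold your passwords and tokens) are placeholders you'd adapt:

```shell
# Add the Rasa X Helm chart repository and refresh the local index
helm repo add rasa-x https://rasahq.github.io/rasa-x-helm
helm repo update

# Install into a dedicated namespace; values.yml is your own
# overrides file (credentials, versions, resource limits, etc.)
kubectl create namespace rasa
helm install --namespace rasa --values values.yml rasa-x rasa-x/rasa-x

# Watch the pods come up
kubectl --namespace rasa get pods
```

Regarding fault tolerance: with a multi-node cluster, Kubernetes reschedules pods from a failed node onto the remaining nodes, and since Rasa keeps conversation state in a tracker store (a database, not in the pod itself), rescheduled pods can pick up existing conversations rather than losing them.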

The Masterclass also has some instructions for this:

Hope this helps!