What is the best way to deploy Rasa X across multiple local Linux instances?

I want to deploy a Rasa chatbot across three Linux instances. What would be the best way to go about it: Docker, Helm, or Kubernetes?

The main goal of the exercise is to achieve fault tolerance. Let's say one of the Linux machines goes down. Is there any way to move the entire conversation over to any of the other machines? I just want to know the best approach for this.
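From what I've read, Rasa can keep conversation state in an external tracker store instead of in memory, which seems like it would let another instance pick up a conversation if one machine dies. Something like this in `endpoints.yml` (the Redis host and port here are just placeholders for my setup), is this the right direction?

```yaml
# endpoints.yml (sketch) - point all three instances at a shared Redis
# so conversation trackers survive the loss of any single machine
tracker_store:
  type: redis
  url: 192.168.1.50   # placeholder: address of a shared Redis instance
  port: 6379
  db: 0
```

I assume the Redis instance itself would then become the thing that needs to be made highly available, but at least the Rasa nodes would be interchangeable.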

P.S. I'm not even sure whether I should go with Rasa X or just Rasa Open Source.

Thanks in advance, Rishab