Hardware requirements for production: from prototyping to deployment

Dear community users,

I am brand new to the RASA community and really excited to become part of this team.

As a newbie, I have some practical questions about the hardware requirements for the local machine and servers needed to scale adequately in production.

I saw the following requirements in the RASA X installation guide:


CPU

  • Minimum: 2 vCPUs
  • Recommended: 2-6 vCPUs

RAM

  • Minimum: 4 GB RAM
  • Recommended: 8 GB RAM

Disk Space

  • Recommended: 50 GB disk space available

  1. Are these requirements sufficient for continued use of the AI chatbot, whatever the number of simultaneous users?

  2. I looked a bit under the hood of the machine learning used by RASA and saw that TensorFlow is employed for embeddings and LSTM models for action prediction. Don’t those deep-learning models need higher hardware requirements, such as more CPU cores and GPUs to accelerate the deep learning?

  3. How are the models trained? Continuously, as soon as new training data arrive, or is training done separately from normal chatbot operation, for example once a month with new training data? And does training need more resources than those required for the RASA X server, so that it has to be computed apart from the server?

  4. I have to develop some specific algorithms involving machine learning (potentially deep learning) to integrate into the RASA chatbot. Could this part be implemented as a specific action server? If yes, could the action server be the same one used for the RASA chatbot deployment, or is a separate server with better hardware required? If separate, how do I integrate the action server with the chatbot server?

  5. With all these questions, I am struggling to find the best way to configure hardware from the start, to avoid needlessly expensive costs or under-scaled hardware, and to be in the best conditions for production from day one. So, what would you suggest regarding the local machine and server? Is the server itself sufficient for all the steps from prototyping to deployment, with just a basic laptop connected to it via SSH? Or is it better to prototype and/or train the machine-learning/deep-learning models on a powerful local machine and then deploy to the server?

Many thanks in advance for any advice from experienced users/developers!


Would anyone have advice regarding my questions?


Hi Matthieu!

  1. This question is a bit hard to answer. This is what we recommend as a minimum for most projects, but you may need more resources for your particular use case.

  2. Hardware acceleration can help with training; it’s a little less important for deployment. Most of the recommended models are recommended because they’re smaller and faster to run.

  3. Your models will only be retrained on request, for example when you use the rasa train command.
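To illustrate, retraining is typically kicked off manually from the CLI; a minimal sketch (exact subcommands and flags depend on your Rasa version):

```shell
# Retrain the full model (NLU + dialogue) from the current training data.
# A new model archive is written to the models/ directory.
rasa train

# Retrain only the NLU part, if your stories haven't changed:
rasa train nlu
```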

  4. You’ll need a separate server for your actions, yes, but if you’re using a virtual machine then you can run multiple servers on it. Depending on what your action is doing, you may need to provision more resources. (I know a lot of these answers are “it depends” but it really does depend. :slight_smile: )
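To connect the two, the Rasa server is pointed at the action server in endpoints.yml; a sketch, where the host and port are assumptions for a setup with both processes on the same machine (5055 is the action server’s default port):

```yaml
# endpoints.yml on the Rasa server:
# tells Rasa where to reach the custom action server.
action_endpoint:
  url: "http://localhost:5055/webhook"
```

The action server itself is started separately with `rasa run actions`; if it lives on another machine, replace `localhost` with that machine’s address.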

  5. For prototyping, I generally work on my local computer (which is fairly robust) and then move to a hosted virtual machine when I’m ready to share my project.

Hope that helps!

Hi Rachael !

Thanks for your detailed answer, this helps a lot :slight_smile: