Multiple Rasa bots on one port

I have a company and I want to run 10 chatbots with different models (say, one for a hotel, another for a gym, another a weather bot, and so on) at the same time, interacting through Rasa's REST APIs. After a lot of searching I found these docs: Using Rasa NLU as a HTTP server — Rasa NLU 0.12.0 documentation, which solved my problem very well. But on further research I learned that projects have been removed from Rasa, and I was only able to find two alternatives:

  1. Run each bot on a separate port, which I think could slow down the server
  2. Load and unload the model each time a request is made, which will make responses very slow

I want a solution where I can run my server on one port and somehow route requests to the various models. Many questions ask about the same problem, but nowhere could I find a straightforward answer, so please let me know whether this is possible. Thanks in advance, and let me know if any more info is required.
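For reference, the routing described here (one public port, many bots) is usually done with a reverse proxy or a small gateway that maps a path prefix to each bot's internal server, so the bots still run on separate internal ports but only one port is exposed. A minimal sketch of that path-to-bot mapping in Python, with hypothetical bot names and ports:

```python
# Sketch of single-port routing: map an incoming request path like
# /hotel/webhooks/rest/webhook to the internal address of that bot's
# own Rasa server. Bot names and ports below are hypothetical.
from urllib.parse import urlparse

BOTS = {
    "hotel":   ("localhost", 5005),
    "gym":     ("localhost", 5006),
    "weather": ("localhost", 5007),
}

def resolve(path):
    """Return the internal URL to forward a request to, or None
    if the first path segment does not name a known bot."""
    parts = urlparse(path).path.strip("/").split("/", 1)
    bot = parts[0]
    if bot not in BOTS:
        return None
    host, port = BOTS[bot]
    rest = "/" + parts[1] if len(parts) > 1 else "/"
    return f"http://{host}:{port}{rest}"
```

A real gateway (nginx, Traefik, or a small aiohttp/Flask app) would call something like `resolve()` and then proxy the request body to the returned URL, so each message reaches the right assistant without exposing ten public ports.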

Hi @shadow_ranger. Are you using Docker to run your bots? Overall, for completely separate assistants you should use different ports to make sure that the messages are being sent to the right assistant.

@Juste Thanks for the reply. Yes, I am using Docker, but if I have 10 bots, won't opening so many ports increase the load on the system? Is there some way to scale multiple assistants on one server using Kubernetes? I don't know much about Kubernetes, but if you say that is the way to go, I will look into it. I am looking for a solution where I can host 10 bots on a server without affecting performance. Thanks
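For what it's worth, Kubernetes handles exactly this fan-out with an Ingress: one externally exposed endpoint, with path-based routing to a Service per bot, while each Rasa container keeps listening on its own internal port. A minimal sketch with hypothetical service names (the internal port 5005 is Rasa's default REST port):

```yaml
# Hypothetical Ingress: one public entry point, one Service per bot.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rasa-bots
spec:
  rules:
    - http:
        paths:
          - path: /hotel
            pathType: Prefix
            backend:
              service:
                name: hotel-bot   # Service fronting the hotel bot's pods
                port:
                  number: 5005
          - path: /gym
            pathType: Prefix
            backend:
              service:
                name: gym-bot     # Service fronting the gym bot's pods
                port:
                  number: 5005
```

With this shape, only the Ingress controller's port is exposed publicly, and adding an eleventh bot is another `path` entry plus a Deployment and Service, rather than another open port on the host.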

Hi @shadow_ranger. Is there a specific reason why you want to run all of the bots on one server? Our general recommendation is to use one server per bot; in that case you wouldn't even have to worry about the ports. One thing to keep in mind is that your server will need more resources to scale your assistant as it grows.


Hey, have you found any solution for serving multiple bots under a single port?