Running multiple Rasa bots on the same server

Hey guys,

I'm fairly new to Rasa and was wondering if it's possible to get multiple bots running on the same server by exposing them on different ports? If anyone has any examples of this it would be great, thanks.

@danadoherty639 I think this shouldn’t be a problem, you can specify which port you want the bot to run on with the -p parameter. Keep in mind you’d want to run any different action servers on different ports too (also with -p), and then you would have to update the action_endpoint to reflect the new port for each action server.
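As a concrete sketch (the port numbers here are just examples): a second bot could be started with `rasa run -p 5006` and its action server with `rasa run actions -p 5056`, with that bot's `endpoints.yml` updated to match:

```yaml
# endpoints.yml for the second bot (port 5056 is an arbitrary example)
action_endpoint:
  url: "http://localhost:5056/webhook"
```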

Hi @danadoherty639

I kindly want to mention that you might be interested in a technique that lets you control your bots as a system service.

Further: depending on your plans, you might want to consider running your bots with docker-compose. I think that makes life a little easier, without much effort to put in.
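A minimal docker-compose sketch for one bot plus its action server might look like this (image tags, ports, and paths are assumptions to adapt to your setup):

```yaml
version: "3.0"
services:
  rasa:
    image: rasa/rasa:latest        # official Rasa image
    ports:
      - "5005:5005"
    volumes:
      - ./:/app                    # project dir with models/, endpoints.yml, ...
    command: run --enable-api --endpoints endpoints.yml
  action-server:
    image: rasa/rasa-sdk:latest    # official action server image
    ports:
      - "5055:5055"
    volumes:
      - ./actions:/app/actions
```

With this layout, the bot's endpoints.yml would point action_endpoint at http://action-server:5055/webhook; a second bot would get its own pair of services on different host ports.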

If you need help, feel free to ping me!

Regards

Hi guys, thanks for getting back to me so quickly. The -p flag works a treat locally. We want to set up HA infrastructure with nginx, Docker (a container per instance, or all instances in one container?) and AWS to support bot multi-tenancy and host multiple different bots, each with different conversational training data. Is this the best approach? Could the event broker help with this? We have a model for each bot, all trained with different conversational data, and now we want to use the same backend with Botkit to handle receiving and sending responses from the different Rasa instances. Any help on this topic is greatly appreciated.

Thanks, Dana.

Hey @danadoherty639, I would probably recommend a container per instance. What are your different instances for? Would an end user be talking to multiple bots in the same conversation or being directed to one bot at the start of the flow?

Hi @danadoherty639

In addition to @erohmensing's suggestion, I'd also recommend using different containers for different instances of the bot. At least if those instances depend on different action servers and/or NLG servers, I'd strongly recommend it. Docker Compose is also a good choice in terms of horizontal and vertical scalability. Currently we are using RabbitMQ for several services that need to communicate with the bot instances, but unfortunately that does not free us from building a suitable bot server architecture.

Can you provide more details about the actual architecture? That might help us to help you!

Regards

So for each assistant, do I have to change the action_endpoint URL to match the port that is used to run its action server?

There are no custom actions in my current assistant. Where do I have to make the changes: under rest: or somewhere else?

Hello Julian,
I have developed a chatbot with Rasa.
My query is:
Suppose there are 2 users, U1 and U2,
and they query two different requests, Q1 and Q2.
I can see both these queries in this terminal:
rasa run --enable-api -m models/dialogue -m models/nlu/current --debug --endpoints endpoints.yml --cors "*"
But in the rasa run actions terminal, it executes them sequentially:
first Q1, then Q2, if Q1 arrived first.

I want rasa run actions to handle different users' requests separately so that both queries are processed simultaneously. My bot's action server is behaving sequentially. If 100 users interact with my bot, performance and speed will be too bad. Kindly help.

Hi @vi.kumar,

I will think about that and get back to you. I am not sure whether those requests really can't be handled simultaneously.

I'll get back asap.

Kind regards Julian

Hi @JulianGerhard,
Thanks for the quick reply.
Waiting for your valuable input.

Hi @vi.kumar,

I thought about your post for a while and I have a few questions. As far as I have seen, the action server is implemented with Sanic, which is a good choice in terms of scalability and asynchronous communication. This server should absolutely be able to handle simultaneous requests.

The second thing worth mentioning here is what the action server actually does: it takes a JSON-serialized tracker and processes that tracker based on the mechanics implemented in actions.py.

Now my questions:

  1. Did you ensure that the sender parameter was actually set to U1 and U2, respectively? Simultaneous processing for the same sender isn't possible.
  2. Did you come to your conclusion based only on your observations in the terminal? If not, what was your test scenario?
  3. If the answer to 2 is yes, did you ever try to reproduce this behaviour on a remote host / a system designed for production?

I am asking because I think you need to be careful with "simultaneous". I think the server is actually able to handle things in parallel (and does so), but your terminal isn't. Imagine: how would a terminal print things simultaneously? It has no choice but to display them sequentially. However, it might still be the case that you are right; then this needs to be checked with a proper test scenario. Do you know how to establish such a scenario, or do you need help?
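The event-loop point above can be demonstrated with a small, self-contained asyncio sketch (plain Python, not Rasa code; the 0.2-second sleep stands in for a slow API call). Two "requests" only overlap if the handler awaits instead of blocking:

```python
import asyncio
import time

async def blocking_action(name: str) -> None:
    # time.sleep blocks the whole event loop, so concurrent
    # requests are forced to run one after the other.
    time.sleep(0.2)

async def async_action(name: str) -> None:
    # asyncio.sleep yields control, so other requests can
    # be processed while this one waits.
    await asyncio.sleep(0.2)

async def handle_two_users(action) -> float:
    # Simulate two users' requests arriving at the same time
    # and measure how long it takes to serve both.
    start = time.perf_counter()
    await asyncio.gather(action("U1"), action("U2"))
    return time.perf_counter() - start

sequential = asyncio.run(handle_two_users(blocking_action))
parallel = asyncio.run(handle_two_users(async_action))
print(f"blocking: {sequential:.2f}s, async: {parallel:.2f}s")
# blocking takes roughly 0.4 s (serialized), async roughly 0.2 s (concurrent)
```

So even on an async server like Sanic, one blocking handler is enough to make everything behind it wait.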

Kind regards
Julian

Hello @JulianGerhard. To your questions:

  1. Yes, each sender is different. I am using emp_id as the sender_id, so each user message is sent with sender_id = emp_id.

  2. My bot is on a staging server and currently more than 50 users are using it. I found this problem when multiple users used the bot at the same time.

I am providing more information:

My UI is my company's interface. I have hosted a web server in the Django framework; this web server gets the messages from the company UI. Users type their messages in the company UI, which sends them to my Django server. My Django server then sends the user messages to Rasa via the REST API.

My query is:
If one action (say "action_1") is in progress and meanwhile another user triggers another action (say "action_2"), I found that the Rasa action server first completes action_1 and only then processes action_2.
My requirement: I want the Rasa action server to perform both actions in parallel; otherwise users will remain in a waiting queue for a long time as the number of users increases.
For reference, I am providing the logs.
My Rasa model server has these properties:
2020-08-26 17:39:46 DEBUG rasa.core.tracker_store - Attempting to connect to database via 'sqlite://:***@/rasa_server.db'.
2020-08-26 17:39:46 DEBUG rasa.core.tracker_store - Connection to SQL database 'rasa_server.db' successful.
2020-08-26 17:39:46 DEBUG rasa.core.tracker_store - Connected to SQLTrackerStore.
2020-08-26 17:39:46 DEBUG rasa.core.lock_store - Connected to lock store 'InMemoryLockStore'.

For the two queries, I am giving you logs of the Rasa server and the action server.
Here you can see there are two users with sender_ids {sbm.kumar, vi.kumar}
and two queries: "Find Issues" and "Prediction Issues".

For "Find Issues":
intent - ask_find_issues_part
action - action_find_issues_part

For "Prediction Issues":
intent - ask_predict_issues_part
action - action_predict_issues_part

sbm.kumar requests the "Prediction Issues" functionality, which arrives at 17:52:50.
vi.kumar requests the "Find Issues" functionality, which arrives at 17:52:53.

Logs of the Rasa model server:

2020-08-26 17:52:50 DEBUG rasa.core.tracker_store - Recreating tracker from sender id 'sbm.kumar'
2020-08-26 17:52:50 DEBUG rasa.core.processor - Received user message '/ask_predict_issues_part'
2020-08-26 17:52:50 DEBUG rasa.core.actions.action - Calling action endpoint to run action 'action_predict_issues_part'.

2020-08-26 17:52:53 DEBUG rasa.core.tracker_store - Recreating tracker from sender id 'vi.kumar'
2020-08-26 17:52:53 DEBUG rasa.core.processor - Received user message '/ask_find_issues_part'
2020-08-26 17:52:53 DEBUG rasa.core.actions.action - Calling action endpoint to run action 'action_find_issues_part'.

Logs of the Rasa action server:
At 17:52:39, the "Prediction Issues" functionality executes.
"api for predicting issues is called." This API takes time to fetch its result.
Until this API call completes, there is no log of the "Find Issues" custom action. You can check this from the timestamps.
Once "action_predict_issues_part" completes, "action_find_issues_part" starts at 17:53:52.

[2020-08-26 17:52:39] "POST /webhook HTTP/1.1" 200 1842 0.019000
action_predict_issues_part
api for predicting issues is called.
action_completed

[2020-08-26 17:53:52] "POST /webhook HTTP/1.1" 200 12050 62.046102
action_find_issues_part
action_completed

My observations:
the behaviour of the Rasa model server is async and parallel,
but the behaviour of the Rasa action server when executing custom actions is sequential.

Kindly tell me what I can do, or what I am missing, to make my query clear.

@JulianGerhard I am sure I am making some mistake, but I am unable to find it. My bot is ready for deployment once this sync issue is solved, so kindly help.

@JulianGerhard, I have the exact same issue as posted by Vivek. Is there an approach by which the action server can handle simultaneous pings/messages from multiple users at the same time? We have verified that these concurrent requests (with different sender ids) are processed sequentially by the Rasa action server / custom actions.py. This would be a basic requirement for any bot handling such requests.

Please respond.
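One likely explanation for the sequential behaviour described above (an assumption here, since the thread doesn't show the actions.py) is a blocking call such as requests.get inside a custom action's run() method, which stalls the Sanic event loop until it returns. A plain-Python sketch, with a hypothetical fetch_issues standing in for the slow API call, of offloading that call to a worker thread so concurrent webhook requests can proceed:

```python
import asyncio
import time

def fetch_issues(user: str) -> str:
    # Hypothetical stand-in for a slow, blocking API call
    # (e.g. requests.get inside a custom action).
    time.sleep(0.2)
    return f"issues for {user}"

async def action_run(user: str) -> str:
    # Offload the blocking call to a worker thread so the
    # event loop stays free to serve other users' requests.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, fetch_issues, user)

async def main() -> list:
    # Two users' requests arriving at (almost) the same time.
    start = time.perf_counter()
    results = await asyncio.gather(
        action_run("sbm.kumar"), action_run("vi.kumar")
    )
    elapsed = time.perf_counter() - start
    print(f"both done in {elapsed:.2f}s")  # roughly 0.2 s, not 0.4 s
    return results

results = asyncio.run(main())
```

The same idea applies inside a custom action: keep run() from blocking the loop (by awaiting an async HTTP client or an executor), and requests from different sender_ids can overlap.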