How to run multiple models on the same port to serve multiple apps in Rasa Core?

I want to train multiple models to serve different apps and run them on the same port on localhost. My APIs should be able to fetch different responses from different models. I found documentation on how to do this for the NLU portion, but I can't find the same for Rasa Core. Any idea how to proceed?


You could run multiple Rasa Core server processes with different models loaded.
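
For example, here is a minimal sketch of that approach, assuming Rasa 1.x+ where the `rasa run` CLI accepts `--enable-api`, `--port`, and `--model` (the model paths are hypothetical):

```python
import subprocess

# Hypothetical model archives, one per app.
MODELS = {
    5005: "models/app_a.tar.gz",
    5006: "models/app_b.tar.gz",
}

# Launch one Rasa server process per model, each on its own port.
processes = [
    subprocess.Popen(
        ["rasa", "run", "--enable-api", "--port", str(port), "--model", model]
    )
    for port, model in MODELS.items()
]

# Block until the servers exit (Ctrl+C to stop them).
for p in processes:
    p.wait()
```

If everything has to appear on a single port, you could put a reverse proxy in front of these processes and route by path or hostname.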

In Python it would also be possible to write a server that loads and uses multiple models. The current server only supports one Core model, but that would be the starting point for a hack.
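
As a rough starting point for that hack, here is a minimal sketch, assuming Rasa 1.x/2.x where `rasa.core.agent.Agent.load` and the async `Agent.handle_text` are available; the app names and model paths are hypothetical:

```python
import asyncio

from rasa.core.agent import Agent

# Hypothetical mapping from app name to trained model archive.
MODEL_PATHS = {
    "app_a": "models/app_a.tar.gz",
    "app_b": "models/app_b.tar.gz",
}

# Load one Agent per app; each Agent wraps its own Core model.
agents = {name: Agent.load(path) for name, path in MODEL_PATHS.items()}

async def handle(app_name: str, sender_id: str, text: str):
    """Route a message to the agent that belongs to the given app."""
    agent = agents[app_name]
    # handle_text returns the list of bot responses for this sender.
    return await agent.handle_text(text, sender_id=sender_id)

if __name__ == "__main__":
    replies = asyncio.run(handle("app_a", "user-1", "hello"))
    print(replies)
```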


I’m facing the same issue. Did you succeed with this?


Hi,

I am working on a project where we need to create a different model for each company, such that each company's stories are different. Did anyone find a solution for such a case?

Hi, did you find a solution eventually? I am working on something similar right now. Thanks in advance.

Hi @demello, running multiple Rasa models on a single port is not yet supported by a single Rasa server.

You’ll need to create a Python wrapper over the Rasa library. You can take a look at this example of how multiple Rasa NLU models are handled in a single FastAPI server.

Here’s the link: Handle Multiple NLU models under one port
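
As a rough sketch of that approach (not the linked example itself), assuming Rasa 1.x/2.x where `rasa.nlu.model.Interpreter` is available; the model directories and the route are hypothetical:

```python
from fastapi import FastAPI, HTTPException
from rasa.nlu.model import Interpreter

app = FastAPI()

# Hypothetical unpacked NLU model directories, one per app.
MODEL_DIRS = {
    "app_a": "models/app_a/nlu",
    "app_b": "models/app_b/nlu",
}

# Load every interpreter once, at startup.
interpreters = {name: Interpreter.load(path) for name, path in MODEL_DIRS.items()}

@app.get("/parse/{app_name}")
def parse(app_name: str, text: str):
    """Parse `text` with the NLU model registered for `app_name`."""
    interpreter = interpreters.get(app_name)
    if interpreter is None:
        raise HTTPException(status_code=404, detail=f"Unknown app: {app_name}")
    return interpreter.parse(text)
```

Run it with, e.g., `uvicorn server:app --port 5005`; all models then share that single port, with the app name in the URL selecting which model answers.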
