How to run multiple models on the same port to serve multiple apps in Rasa Core?

I want to train multiple models to serve different apps and run them on the same port on localhost. My APIs should be able to fetch different responses from different models. I found documentation on how to do this for the NLU portion, but I can’t find the same for Rasa Core. Any idea how to proceed?


You could run multiple Rasa Core server processes with different models loaded. Each process would need its own port, so to expose them all on a single port you would put a reverse proxy in front that routes by path.

In Python it would be possible to write a server that loads and uses multiple models. The current server only supports one Core model, but it would be the starting point for a hack.


I’m facing the same issue. Have you succeeded with this?
