I want to train multiple models to serve different apps and run them on the same port on localhost. My APIs should be able to fetch different responses from different models. I found documentation on how to do this for the NLU part, but I can't find the same for Rasa Core. Any idea how to proceed?
You could run multiple Rasa Core server processes, each with a different model loaded.
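A rough sketch of that setup, assuming the older `rasa_core.run` CLI (flags may differ between Rasa versions, so treat the exact options as an assumption): run one server process per model on its own internal port, then put a reverse proxy such as nginx in front so everything is reachable on a single public port, routed by path prefix.

```shell
# One Rasa Core server process per app, each on its own internal port
# (model paths and flags are illustrative, adjust for your Rasa version):
python -m rasa_core.run -d models/app_a/dialogue -u models/app_a/nlu --port 5005 &
python -m rasa_core.run -d models/app_b/dialogue -u models/app_b/nlu --port 5006 &

# Example nginx config to expose both behind one port (e.g. 8080),
# routed by path prefix -- a sketch, not a tested config:
#
#   server {
#       listen 8080;
#       location /app_a/ { proxy_pass http://127.0.0.1:5005/; }
#       location /app_b/ { proxy_pass http://127.0.0.1:5006/; }
#   }
```

Each app then calls its own prefix (`/app_a/...`, `/app_b/...`) on the shared port, and the proxy forwards the request to the matching model's server.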
In Python it would be possible to write a server that loads and uses multiple models. The current server only supports one Core model, but it would be the starting point for such a hack.
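The dispatch pattern for such a server could look like the sketch below. The Rasa-specific calls (`Agent.load`, `handle_text`) are assumptions based on the Rasa Core Python API and are left commented out; a stub stands in for a loaded agent so the routing logic itself is runnable.

```python
# Sketch: one process serving several Core models, keyed by an app id
# taken from the request path or payload. StubAgent is a placeholder --
# swap it for rasa_core.agent.Agent in a real setup (assumed API).

class StubAgent:
    """Stands in for a loaded Rasa Core Agent."""

    def __init__(self, name):
        self.name = name

    def handle_text(self, message):
        # A real Agent would run NLU + dialogue policies and return
        # the bot's responses; the stub just echoes for illustration.
        return [{"recipient_id": "default",
                 "text": "[{}] reply to: {}".format(self.name, message)}]


# One model per app. With real Rasa Core this would be something like:
#   "app_a": Agent.load("models/app_a/dialogue"),   # assumed API
AGENTS = {
    "app_a": StubAgent("app_a"),
    "app_b": StubAgent("app_b"),
}


def dispatch(app_id, message):
    """Route an incoming message to the model registered for app_id."""
    agent = AGENTS.get(app_id)
    if agent is None:
        raise KeyError("no model registered for app '{}'".format(app_id))
    return agent.handle_text(message)
```

A web framework (Flask, Sanic, etc.) would sit on top of `dispatch`, pulling `app_id` out of the URL so all apps share one port within a single process.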
I'm facing the same issue. Have you succeeded with this?
I am working on a project where we need to create a different model for each company, such that each company's stories are different. Has anyone found a solution for such a case?