Is it possible to use multiple NLU models with only 1 RASA service?

Is it possible to enable multiple NLU models with only 1 RASA Open Source service running?

That is to say, with only one service started with `rasa run --enable-api --model models`, without starting several services on different ports of the server.

Multiple models are needed because they serve different clients: each client uses one or more models, and each model has different training data and a different purpose.

Thanks in advance for any answer.

No, you need a separate instance for each model.
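Since each model needs its own `rasa run` process, one common workaround is to start one instance per model on separate ports and route each client's requests to the matching instance. A minimal sketch, assuming hypothetical client names and port assignments (the `/model/parse` endpoint is Rasa's standard NLU parse endpoint):

```python
# Sketch: route each client's NLU requests to its own Rasa instance.
# Client names and ports below are hypothetical examples; each port would be
# served by something like:
#   rasa run --enable-api -p 5005 --model models/client_a.tar.gz
CLIENT_PORTS = {
    "client_a": 5005,
    "client_b": 5006,
}

def parse_url(client: str, host: str = "localhost") -> str:
    """Build the /model/parse endpoint URL for a client's Rasa instance."""
    port = CLIENT_PORTS[client]
    return f"http://{host}:{port}/model/parse"
```

A caller would then POST `{"text": "..."}` to `parse_url("client_a")`, e.g. with `requests.post(parse_url("client_a"), json={"text": "hello"})`.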

Thanks :grinning:

Any recommendation for deploying multiple models on the same server, or a cloud service that can handle it?

Use Kubernetes and the Rasa helm chart. Discussed here.
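For the Kubernetes route, one release of the Rasa helm chart per client model is a common pattern. A hypothetical sketch (release names and model URLs are placeholders; check the chart's current `values.yaml` for the exact keys):

```shell
# Hypothetical sketch: one chart release per client model.
helm repo add rasa https://helm.rasa.com
helm repo update
helm install client-a rasa/rasa \
  --set applicationSettings.initialModel="https://example.com/models/client_a.tar.gz"
helm install client-b rasa/rasa \
  --set applicationSettings.initialModel="https://example.com/models/client_b.tar.gz"
```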

@harima Hey, were you able to do this?