I have a Rasa Core server running in a couple of different containers on a Kubernetes cluster. The cluster continuously spins up new containers, so it doesn't make sense to store the model on the container itself. My question is: where is the best place to store the model?
Awesome, so I can create agent models, store them in S3, keep track of the S3 bucket URL and key, and then, when I'm ready to load the agent, simply provide the URLs for both the NLU and dialogue models?
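To make the S3 idea concrete, here is a minimal sketch of the packaging/upload step. The function names, paths, bucket, and key are all my own assumptions, not anything from Rasa itself; the upload uses boto3's `upload_file`, with credentials assumed to come from the environment:

```python
import os
import tarfile


def package_model(model_dir: str, archive_path: str) -> str:
    """Compress a trained model directory into a single tar.gz artifact."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(model_dir, arcname=os.path.basename(model_dir))
    return archive_path


def upload_model(archive_path: str, bucket: str, key: str) -> str:
    """Upload the packaged model to S3 and return its s3:// location."""
    import boto3  # assumed installed; credentials taken from the environment

    boto3.client("s3").upload_file(archive_path, bucket, key)
    return f"s3://{bucket}/{key}"


# Hypothetical usage:
# archive = package_model("models/dialogue", "dialogue-model.tar.gz")
# location = upload_model(archive, "my-rasa-models", "dialogue-model.tar.gz")
```

Each new container can then pull the archive down from that location at startup instead of baking the model into the image.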
It seems like the method `load_from_server` will do the job. It looks like I can create my own endpoint that returns a gzipped directory of the model, and that gives me back the agent with the model loaded.
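For anyone trying the same thing, here is a rough sketch of what such an endpoint could look like, using only the standard library. It follows the general shape of Rasa's model-server protocol as I understand it (the client polls with an `If-None-Match` header, the server replies 204 when the ETag matches and otherwise sends the gzipped model tarball with a fresh ETag), but the handler, paths, and port are all assumptions of mine, not Rasa code:

```python
import hashlib
import http.server
import io
import tarfile

MODEL_DIR = "models/dialogue"  # assumed location of the trained model


def package_model(model_dir: str) -> bytes:
    """Tar and gzip the model directory in memory."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(model_dir, arcname=".")
    return buf.getvalue()


class ModelHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        payload = package_model(MODEL_DIR)
        etag = hashlib.md5(payload).hexdigest()
        if self.headers.get("If-None-Match") == etag:
            # The client already has this model version; nothing to send.
            self.send_response(204)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("ETag", etag)
        self.send_header("Content-Type", "application/x-tar")
        self.end_headers()
        self.wfile.write(payload)


# Hypothetical usage:
# http.server.HTTPServer(("0.0.0.0", 5005), ModelHandler).serve_forever()
```

The point of the ETag is that containers polling the endpoint only download the tarball when the model has actually changed.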