Hi,
I am currently migrating from Rasa 0.14 to Rasa 1.1.4.
Let me explain my current setup.
I have multiple models trained on different data (one per module of my chatbot). I train them with a script (doing it manually would be a mess) and store all of them in a single folder, so that I can serve them from a single Rasa HTTP server using the command python -m rasa_nlu.server --path nlu_models.
import os

from rasa.train import train_nlu

for module_name in modules:
    module_directory = os.path.join(MODULES_BASE_DIR, module_name)
    config_file = <path to config file>
    nlu_data = <path to NLU training folder or file>

    # Train one NLU model per module; fixed_model_name controls the
    # name of the resulting model archive.
    train_nlu(
        config=config_file,
        nlu_data=nlu_data,
        output=module_directory,
        fixed_model_name=module_name,
    )
In Rasa 1.x, model files are packaged as compressed archives, so train_nlu will create a model archive in the provided output directory (module_directory) with the fixed name <module_name>.tar.gz.
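For reference, here is a minimal sketch of how the archives could be checked and collected into one folder after the loop finishes, mirroring the single-directory layout of the 0.14 setup. The nlu_models folder name is only an assumption for illustration; modules and MODULES_BASE_DIR are the same variables as in the training snippet above.

import os
import shutil

# Hypothetical target folder (assumption): collects every trained archive
# so that all models live in one directory, as in the old 0.14 setup.
collected_dir = "nlu_models"
os.makedirs(collected_dir, exist_ok=True)

for module_name in modules:
    archive = os.path.join(MODULES_BASE_DIR, module_name, f"{module_name}.tar.gz")
    if not os.path.isfile(archive):
        raise FileNotFoundError(f"Expected model archive not found: {archive}")
    shutil.copy(archive, collected_dir)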
Regarding your second point: the Core and NLU servers were merged in Rasa 1.x. It is not possible to have multiple models loaded at the same time anymore. For more information, please see Removing projects for Rasa NLU server. Feel free to leave your feedback in that thread.
I will try to use the code snippet that you provided and let you know the results.
"It is not possible to have multiple models loaded at the same time anymore"
So what are my alternatives? Currently I see two possibilities:
Add all the training data to a single model and then run a server for that model
Run multiple HTTP servers for different models
One problem with the first alternative is that different models might have similar intents, which would get misclassified if I concatenate the training data.
One problem with the second alternative is the resource overhead associated with each server, which will grow as the number of models (and therefore the number of servers) increases (see the sketch below).
Currently, I am inclined to implement the first solution. Is there any other alternative?
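To make the second alternative concrete, here is a minimal sketch that starts one Rasa server per model archive on consecutive ports, assuming the rasa run --enable-api / -m / -p options of the 1.x CLI. The nlu_models folder name and the base port 5005 are arbitrary choices for illustration, not anything prescribed by Rasa.

import glob
import os
import subprocess

MODELS_DIR = "nlu_models"  # assumption: all model archives were collected here
BASE_PORT = 5005           # arbitrary starting port

processes = []
for offset, archive in enumerate(sorted(glob.glob(os.path.join(MODELS_DIR, "*.tar.gz")))):
    port = BASE_PORT + offset
    # Each server loads exactly one model and exposes the HTTP API
    # on its own port.
    cmd = ["rasa", "run", "--enable-api", "-m", archive, "-p", str(port)]
    processes.append(subprocess.Popen(cmd))
    print(f"Serving {archive} on port {port}")

# Note: each process is a full Rasa server, so memory and CPU usage
# scale with the number of modules.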