Pre-load Rasa NLU models

Hi, I am just wondering what the right way is to pre-load models when starting up the NLU server.

I receive an error doing this

python -m rasa_nlu.server --port $${RASA_PORT} --path models --pre_load $${RASA_AGENT}

RASA_AGENT is basically my project name.

rasa-nlu-fr_1          | 2018-08-31 13:52:37 WARNING  rasa_nlu.project  - Using default interpreter, couldn't fetch model: Unable to initialize persistor
rasa-nlu-fr_1          | Traceback (most recent call last):
rasa-nlu-fr_1          |   File "/usr/local/lib/python3.6/", line 193, in _run_module_as_main
rasa-nlu-fr_1          |     "__main__", mod_spec)
rasa-nlu-fr_1          |   File "/usr/local/lib/python3.6/", line 85, in _run_code
rasa-nlu-fr_1          |     exec(code, run_globals)
rasa-nlu-fr_1          |   File "/usr/local/lib/python3.6/site-packages/rasa_nlu/", line 431, in <module>
rasa-nlu-fr_1          |     router._pre_load(pre_load)
rasa-nlu-fr_1          |   File "/usr/local/lib/python3.6/site-packages/rasa_nlu/", line 177, in _pre_load
rasa-nlu-fr_1          |     self.project_store[project].load_model()
rasa-nlu-fr_1          |   File "/usr/local/lib/python3.6/site-packages/rasa_nlu/", line 141, in load_model
rasa-nlu-fr_1          |     interpreter = self._interpreter_for_model(model_name)
rasa-nlu-fr_1          |   File "/usr/local/lib/python3.6/site-packages/rasa_nlu/", line 197, in _interpreter_for_model
rasa-nlu-fr_1          |     metadata = self._read_model_metadata(model_name)
rasa-nlu-fr_1          |   File "/usr/local/lib/python3.6/site-packages/rasa_nlu/", line 212, in _read_model_metadata
rasa-nlu-fr_1          |     self._load_model_from_cloud(model_name, path)
rasa-nlu-fr_1          |   File "/usr/local/lib/python3.6/site-packages/rasa_nlu/", line 251, in _load_model_from_cloud
rasa-nlu-fr_1          |     raise RuntimeError("Unable to initialize persistor")
rasa-nlu-fr_1          | RuntimeError: Unable to initialize persistor

My model has a specific name like nl_model_v0.0.0.

I want to pre-load this model, but I am not sure what the server arguments should be.
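For reference, in the 0.13-era rasa_nlu server the `--pre_load` flag takes project names rather than model names (a project is a directory under `--path` that contains one or more model directories), which matches what is described later in this thread. A minimal sketch, assuming a hypothetical project called `nl_project` containing the model above:

```shell
# Assumed layout (names are hypothetical):
#   models/
#     nl_project/            <- project; this is what --pre_load refers to
#       nl_model_v0.0.0/     <- individual model directory inside the project
python -m rasa_nlu.server \
  --port 5000 \
  --path models \
  --pre_load nl_project
```

The specific model is then chosen per request via the "model" field of the /parse payload, not at startup.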


Did you specify the project name and model name in your curl request?

Try adding them and see if that works.

e.g. a POST request to http://localhost:{RASA_PORT}/parse

{
  "q": "{User utterance}",
  "project": "{project name}",
  "model": "{Model name}"
}

Omitting "model" seems to be throwing an error.
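The payload above can be sketched in Python. This is a minimal example, assuming hypothetical project/model names and a server on localhost:5000; substitute your own values:

```python
import json

# Hypothetical names -- replace with your own utterance, project, and model.
payload = {
    "q": "hello there",            # the user utterance to parse
    "project": "nl_project",       # project directory under --path
    "model": "nl_model_v0.0.0",    # specific model inside that project
}

# The /parse endpoint accepts this as a JSON POST body, e.g.:
#   curl -X POST http://localhost:5000/parse -d "$BODY"
body = json.dumps(payload)
print(body)
```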

It was about pre-loading the models on server startup, not on parse: we have a lot of vectors, and loading them upon parse isn't optimal. The pre_load argument does allow loading projects which have models in them, but for some reason it is not working for me.

@akelad - I still face the issue here :sob:; I am not able to pre-load a specific model.

You mentioned this in a Core issue somewhere too, right? Could you tag me in it again and I'll take a look.

Yeah, no worries. I found someone posting a similar issue on GitHub afterwards; there is an enhancement request for this. I will give it a shot.


Is this issue solved, @souvikg10? I am facing the same issue of pre-loading a project.

There is an open PR for this.
