Rasa NLU Server: "error": "Unable to initialize persistor"

Hi all

I would like to have one instance of the Rasa NLU server (run through Docker) handle multiple projects.

Here is my directory layout:

.
├── components
│   ├── __init__.py
│   ├── __pycache__
│   │   └── __init__.cpython-36.pyc
│   └── stanford
│       ├── com_stanford_nlp.py
│       ├── __init__.py
│       └── __pycache__
│           ├── com_stanford_nlp.cpython-36.pyc
│           └── __init__.cpython-36.pyc
├── log
│   ├── rasa_nlu_log-20181113-100055-1.log
│   └── stanford_nlp.log
├── models
│   ├── company_a
│   │   └── en
│   │       ├── metadata.json
│   │       └── training_data.json
│   └── model_config_for_server.yml
└── projects
    └── company_a
        ├── config
        │   └── config.yml
        └── data
            └── data.json

The command to run the Rasa NLU server using Docker:

docker \
    run \
        --name rasa_nlu_server \
        -p 5001:5001 \
        -v $(pwd)/components/:/app/components \
        -v $(pwd)/models/:/app/models \
        -v $(pwd)/log/:/app/log \
        rasa/rasa_nlu:latest-tensorflow \
    run \
        python -m rasa_nlu.server \
            --port 5001 \
            --token 12345 \
            --write log/server.log \
            --response_log log \
            --path models \
            --pre_load all \
            --config models/model_config_for_server.yml

By the way, the content of model_config_for_server.yml is:

language: "en"
pipeline:
  - name: "components.stanford.com_stanford_nlp.Stanford_NLP"

When I run:

curl 'http://my_rasa_nlu_server:5001/parse?token=12345&q=hi&project=company_a'

Here is the error I got:

WARNING  rasa_nlu.project  - Using default interpreter, couldn't fetch model: Unable to initialize persistor
ERROR    __main__  - Unable to initialize persistor
Traceback (most recent call last):
  File "/app/rasa_nlu/server.py", line 245, in parse
    self.data_router.parse, data))
  File "/usr/local/lib/python3.6/site-packages/twisted/python/threadpool.py", line 250, in inContext
    result = inContext.theWork()
  File "/usr/local/lib/python3.6/site-packages/twisted/python/threadpool.py", line 266, in <lambda>
    inContext.theWork = lambda: context.call(ctx, func, *args, **kw)
  File "/usr/local/lib/python3.6/site-packages/twisted/python/context.py", line 122, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/usr/local/lib/python3.6/site-packages/twisted/python/context.py", line 85, in callWithContext
    return func(*args,**kw)
  File "/app/rasa_nlu/data_router.py", line 271, in parse
    model)
  File "/app/rasa_nlu/project.py", line 261, in parse
    interpreter = self._interpreter_for_model(model_name)
  File "/app/rasa_nlu/project.py", line 366, in _interpreter_for_model
    metadata = self._read_model_metadata(model_name, model_dir)
  File "/app/rasa_nlu/project.py", line 383, in _read_model_metadata
    self._load_model_from_cloud(model_name, path)
  File "/app/rasa_nlu/project.py", line 422, in _load_model_from_cloud
    raise RuntimeError("Unable to initialize persistor")
RuntimeError: Unable to initialize persistor

Can anyone please tell me how to fix it?

Thank you very much in advance for all your help.

Hello, you need to specify the model you want to use in the request. For example:

curl -XPOST http://my_rasa_nlu_server:5001/parse -d '{"q":"hello there", "model": "current"}'

Hope it helps.


Thanks @jeanmetz, it works!

For people who have this error, my full request is this:

curl -X POST http://my_rasa_nlu_server:5001/parse?token=12345 -d '{"q":"hi", "project":"company_a", "model":"en"}'

Here is the result I got:

{
  "intent": {
    "name": null,
    "confidence": 0.0
  },
  "entities": [],
  "text": "hi",
  "project": "company_a",
  "model": "en"
}

The null intent is expected, as I am just testing the server. :smiley:
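
If you prefer Python, the same request can be made with the requests library (hostname and token are the placeholders from this thread):

import requests

# Equivalent of the curl call above.
response = requests.post(
    "http://my_rasa_nlu_server:5001/parse",
    params={"token": "12345"},
    json={"q": "hi", "project": "company_a", "model": "en"},
)
print(response.json())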


Glad to help!

Hi, I’m in a similar situation, but I can’t manage to load the models. (I have my own NLU models trained with Keras; I have no problem loading them with Agent.load, but now I can’t get them working with the server.) My directory layout is (I based it on an example with this architecture):

.
├── nlu_utils
│   ├── glove_utils.py
│   ├── intent_classifier.py
│   └── entity_extractor.py
├── nlu_models
│   ├── intent_model.h5
│   ├── entity_model.h5
│   └── glove_model.h5
└── nlu_config.yml

where my nlu_config is:

language: "en_core_web_md"

pipeline:
- name: "nlu_utils.glove_utis.GloveNLP"
- name: "nlu_utils.int_proba.IntentProba"
- name: "nlu_utils.ner_dl.DLEntityExtractor"

and if I run python -m rasa_nlu.server --config ./nlu_config.yml --path nlu_models/ --pre_load all, it starts the server but does not load any model.

If I do curl localhost:5000/status, the answer is:

{
  "max_training_processes": 1,
  "current_training_processes": 0,
  "available_projects": {
    "entity_model.h5": {
      "status": "ready",
      "current_training_processes": 0,
      "available_models": [
        "fallback"
      ],
      "loaded_models": [
        "fallback"
      ]
    },
    "intent_model.h5": {
      "status": "ready",
      "current_training_processes": 0,
      "available_models": [
        "fallback"
      ],
      "loaded_models": [
        "fallback"
      ]
    },
    "glove_model.h5": {
      "status": "ready",
      "current_training_processes": 0,
      "available_models": [
        "fallback"
      ],
      "loaded_models": [
        "fallback"
      ]
    }
  }
}

It doesn’t seem to be looking where the models are. Could anyone help me, please? I’m lost.

I’ve checked this behaviour and could actually reproduce the same problem. So, indeed, --pre_load all is not working as expected. I’ll investigate further for a solution. If anyone out there knows the solution, please help us :slight_smile:

@jeanmetz you may try --pre_load [all]; I remember that somewhere in the docs it uses [all] instead of all (without the square brackets).

Hi @alucard001, --pre_load [all] isn’t accepted as an argument by the argument parser. On the other hand, if the argument is given as --pre_load '[all]', the parser will accept it as a valid value, but the loader will not find any project with the name [all] and will therefore not load the available model.

The response to a GET on the /status resource is:

{
  "max_training_processes": 1,
  "current_training_processes": 0,
  "available_projects": {
    "current": {
      "status": "ready",
      "current_training_processes": 0,
      "available_models": [
        "nlu-a",
        "nlu-b"
      ],
      "loaded_models": []
    }
  }
}

Conclusion: the problem still occurs.

I did some digging into the --pre_load option for rasa_nlu.server. The functionality we want is not supported yet. The issue is actually an ambiguity in the Rasa NLU documentation, which states the following:

--pre_load PRE_LOAD [PRE_LOAD ...]
                        Preload models into memory before starting the server.
                        If given `all` as input all the models will be loaded.
                        Else you can specify a list of specific project names.
                        Eg: python -m rasa_nlu.server --pre_load project1
                        --path projects -c config.yaml

It actually refers to projects and not models.

There is an open pull request to solve this issue: https://github.com/RasaHQ/rasa_nlu/pull/1410

Hopefully it gets approved soon.

I faced this problem too, and I solved it by changing the model folder names. Suppose we have two bots in the projects path with these folders:

base_folder
└── projects
    ├── bot1
    │   └── model_20190303-234432
    └── bot2
        └── model_20190303-224432

So we can run the server using python -m rasa_nlu.server --path projects.

Note: the model folder inside each bot folder must follow the format model_YYYYMMDD-hhmmss.
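
If you create such a folder by hand, a name in the expected format can be generated like this (a small illustrative snippet):

from datetime import datetime

# Produce a name such as "model_20190303-234432" in the
# model_YYYYMMDD-hhmmss format that the project loader expects.
model_dir_name = "model_" + datetime.now().strftime("%Y%m%d-%H%M%S")
print(model_dir_name)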

Hope it helps!

Hi,

I have tried it out in a similar fashion, but I am not able to run multiple bots. I am getting the error: “error”: “No project found with name ‘default’.”

When you query the server, pass the project you want (e.g. bot1) in the request; the server will then load the models from that folder.