Rasa_nlu.server cannot load model?

Hi all

Another question about the rasa_nlu command line.

My root dir is /root/rasa_nlu. Under /root/rasa_nlu there is a folder called projects, so the full path is /root/rasa_nlu/projects.

Now when I run tree projects, here is what I get:

[root@myserver rasa_nlu]# tree projects/
projects/
└── company_a
    ├── config.yml
    ├── data
    │   └── data.json
    └── models
        └── default
            └── en
                ├── metadata.json
                └── training_data.json

Here is how I start rasa_nlu server:

python -m rasa_nlu.server --port 5001 -w log/server.log --response_log log/ --path projects/company_a/models/default/en/

And the result:

/usr/lib64/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
2018-11-13 15:18:41+0800 [-] Log opened.
2018-11-13 15:18:41+0800 [-] Site starting on 5001
2018-11-13 15:18:41+0800 [-] Starting factory <twisted.web.server.Site object at 0x7f5a79631470>

This seems fine, but when I issue a curl request:

curl 'http://192.168.10.79:5001/parse?q=hi&project=default'

I get a 404 in the response:

2018-11-13 15:18:46+0800 [-] "192.168.11.23" - - [13/Nov/2018:07:18:46 +0000] "GET /parse?q=hi&project=default HTTP/1.1" 404 54 "-" "curl/7.58.0"

OK, now you may say the path is wrong, and I think you are right. So I changed it to:

python -m rasa_nlu.server --port 5001 -w log/server.log --response_log log/ --path ./projects/company_a/models/

Same request as above, and now I got:

[root@lnxcent7chatbotnlp rasa_nlu]# python -m rasa_nlu.server --port 5001 -w log/server.log --response_log log/ --path projects/company_a/models/
2018-11-13 15:25:00+0800 [-] Log opened.
2018-11-13 15:25:00+0800 [-] Site starting on 5001
2018-11-13 15:25:00+0800 [-] Starting factory <twisted.web.server.Site object at 0x7f70bd7014e0>
2018-11-13 15:25:03+0800 [-] 2018-11-13 15:25:03 WARNING  rasa_nlu.project  - Using default interpreter, couldn't fetch model: Unable to initialize persistor
2018-11-13 15:25:03+0800 [-] 2018-11-13 15:25:03 ERROR    __main__  - Unable to initialize persistor
2018-11-13 15:25:03+0800 [-] Traceback (most recent call last):
2018-11-13 15:25:03+0800 [-]   File "/usr/lib/python3.6/site-packages/rasa_nlu/server.py", line 245, in parse
2018-11-13 15:25:03+0800 [-]     self.data_router.parse, data))
2018-11-13 15:25:03+0800 [-]   File "/usr/lib64/python3.6/site-packages/twisted/python/threadpool.py", line 250, in inContext
2018-11-13 15:25:03+0800 [-]     result = inContext.theWork()
2018-11-13 15:25:03+0800 [-]   File "/usr/lib64/python3.6/site-packages/twisted/python/threadpool.py", line 266, in <lambda>
2018-11-13 15:25:03+0800 [-]     inContext.theWork = lambda: context.call(ctx, func, *args, **kw)
2018-11-13 15:25:03+0800 [-]   File "/usr/lib64/python3.6/site-packages/twisted/python/context.py", line 122, in callWithContext
2018-11-13 15:25:03+0800 [-]     return self.currentContext().callWithContext(ctx, func, *args, **kw)
2018-11-13 15:25:03+0800 [-]   File "/usr/lib64/python3.6/site-packages/twisted/python/context.py", line 85, in callWithContext
2018-11-13 15:25:03+0800 [-]     return func(*args,**kw)
2018-11-13 15:25:03+0800 [-]   File "/usr/lib/python3.6/site-packages/rasa_nlu/data_router.py", line 273, in parse
2018-11-13 15:25:03+0800 [-]     model)
2018-11-13 15:25:03+0800 [-]   File "/usr/lib/python3.6/site-packages/rasa_nlu/project.py", line 261, in parse
2018-11-13 15:25:03+0800 [-]     interpreter = self._interpreter_for_model(model_name)
2018-11-13 15:25:03+0800 [-]   File "/usr/lib/python3.6/site-packages/rasa_nlu/project.py", line 366, in _interpreter_for_model
2018-11-13 15:25:03+0800 [-]     metadata = self._read_model_metadata(model_name, model_dir)
2018-11-13 15:25:03+0800 [-]   File "/usr/lib/python3.6/site-packages/rasa_nlu/project.py", line 383, in _read_model_metadata
2018-11-13 15:25:03+0800 [-]     self._load_model_from_cloud(model_name, path)
2018-11-13 15:25:03+0800 [-]   File "/usr/lib/python3.6/site-packages/rasa_nlu/project.py", line 422, in _load_model_from_cloud
2018-11-13 15:25:03+0800 [-]     raise RuntimeError("Unable to initialize persistor")
2018-11-13 15:25:03+0800 [-] RuntimeError: Unable to initialize persistor
2018-11-13 15:25:03+0800 [-] "192.168.11.23" - - [13/Nov/2018:07:25:03 +0000] "GET /parse?q=hi&project=default HTTP/1.1" 500 47 "-" "curl/7.58.0"

I tried:

projects/company_a/models/ - did not work (w/ trailing slash)
projects/company_a/models - did not work

projects/company_a/models/default/en/ - did not work (w/ trailing slash)
projects/company_a/models/default/en - did not work

projects/company_a/models/default/ - did not work

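For reference, my current understanding (which may well be wrong, hence this post) is that --path should point at a directory that contains one sub-directory per project, and each project directory contains one sub-directory per model, roughly:

<path>/
└── <project_name>        <- matched by the ?project= query parameter
    └── <model_name>      <- matched by the ?model= query parameter (latest model if omitted)
        └── metadata.json

So with my tree above, --path projects/company_a/models and ?project=default look like they should line up, yet I still get the errors above.
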
Can you please tell me what’s wrong and how to fix it?

Thank you very much for all your help.

OK. For those who are struggling with the path and project settings, here is what works:

Assume the working dir is /root/test_dir. Within this dir, here is the structure:

test_dir/
├── components
│   ├── __init__.py
│   └── stanford
│       ├── com_stanford_nlp.py
│       └── __init__.py
│
├── models
│
└── projects
    └── company_a
        ├── config
        │   └── config.yml
        └── data
            └── data.json

Points to note:

  • The purpose of this folder structure is to run one instance of Rasa NLU that serves multiple models for completely different companies/projects.
  • components is a custom directory I created to store all the custom components I wrote in Python. stanford/com_stanford_nlp.py is also a custom component, which means everything under (and including) components is created by me, i.e. completely unrelated to Rasa. (A sketch of the other hooks such a component can override follows after the config example below.) In case you want to know, the content of components/stanford/com_stanford_nlp.py is this:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from rasa_nlu.components import Component

import logging
import logging.handlers

class Stanford_NLP(Component):

    name = "stanford_nlp"
    provides = []
    requires = []
    defaults = {}
    language_list = None

    def __init__(self, component_config=None):
        # TODO: These log did not work. Fix it.
        logging.info("====================== com_stanford_nlp: logging begin ======================")
        logging.info("__init__[component_config]: %s", component_config)

        super(Stanford_NLP, self).__init__(component_config)
  • projects is another custom directory created by me, completely unrelated to Rasa. The purpose of this dir is to store each client’s files. In this case I use an example project called company_a, which means you can create company_b, my_fancy_com, etc.

  • models (Required): the purpose of this dir is to store all models created by Rasa NLU. Again, the directory itself is custom created by me; Rasa only writes models into it.

  • projects/<company_name>/

    • data (Required): this dir stores all the .json or .md training data. Custom created by me, completely unrelated to Rasa.
    • config (Required): this dir stores all config files (if any) for this particular project. Custom created by me, completely unrelated to Rasa.

In case you want to know, the content of config/config.yml is this:

language: "en"
pipeline:
  - name: "components.stanford.com_stanford_nlp.Stanford_NLP"

Now here comes Rasa NLU (Docker):

The command below puts all of the above together. Run it inside test_dir:

[root@my_lovely_computer test_dir]# docker run \
    --name my_rasa_nlu_trainer \
    -v $(pwd)/projects/company_a/config:/app/config \
    -v $(pwd)/projects/company_a/data:/app/data \
    -v $(pwd)/models:/app/models \
    -v $(pwd)/components:/app/components \
    -v $(pwd)/log:/app/log \
    rasa/rasa_nlu:latest-tensorflow \
    run \
        python -m rasa_nlu.train \
            -c config/config.yml \
            -d data/data.json \
            -o models \
            --project company_a \
            --fixed_model_name en
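
For the record, roughly the same training step can also be run without docker through the rasa_nlu Python API. The following is only a sketch based on my understanding of the 0.13 API (Trainer / Interpreter), not part of the setup above, so treat it accordingly:

import os

from rasa_nlu import config
from rasa_nlu.model import Interpreter, Trainer
from rasa_nlu.training_data import load_data

# components/__init__.py opens ./log/stanford_nlp.log at import time,
# so make sure the log directory exists when running outside docker.
os.makedirs("log", exist_ok=True)

# Load the project config (which pulls in the custom pipeline component,
# so this must be run from inside test_dir) and the training data.
cfg = config.load("projects/company_a/config/config.yml")
training_data = load_data("projects/company_a/data/data.json")

# Train and persist under models/company_a/en, mirroring
# --project company_a and --fixed_model_name en from the docker command.
trainer = Trainer(cfg)
trainer.train(training_data)
model_dir = trainer.persist("models",
                            project_name="company_a",
                            fixed_model_name="en")

# Optional sanity check: load the persisted model and parse something.
interpreter = Interpreter.load(model_dir)
print(interpreter.parse("hi"))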

Running the docker command will create:

test_dir/
├── components
│   ├── __init__.py
│   ├── __pycache__
│   │   └── __init__.cpython-36.pyc
│   └── stanford
│       ├── com_stanford_nlp.py
│       ├── __init__.py
│       └── __pycache__
│           ├── com_stanford_nlp.cpython-36.pyc
│           └── __init__.cpython-36.pyc
├── log
│   └── stanford_nlp.log
├── models
│   └── company_a
│       └── en
│           ├── metadata.json
│           └── training_data.json
└── projects
    └── company_a
        ├── config
        │   └── config.yml
        └── data
            └── data.json

More points to note:

  • Those __pycache__ directories are created when Python runs; they have nothing to do with Rasa or me.
  • models: as you can see, the docker command creates a directory under models named after the value of --project, and inside it Rasa creates another directory named either after the value of --fixed_model_name, or model_<time_stamp> if you do not use --fixed_model_name.
  • log dir: This is created after running the above docker command. And of course the code that creates it is mine, completely unrelated to Rasa; it lives in components/__init__.py. Inside this file:

import logging
import logging.handlers

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s - %(levelname)s - %(message)s',
                    handlers=[
                        logging.handlers.TimedRotatingFileHandler(filename="./log/stanford_nlp.log", when="D", interval=1)
                    ])

  • To re-run the training using the existing docker container, run docker start -a my_rasa_nlu_trainer (https://stackoverflow.com/a/37886136/1802483)

However, even if I put the same logging setup inside com_stanford_nlp.py itself, the logging still did not work.
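
One guess I have not verified, so take it as an assumption rather than a fix: logging.basicConfig is a no-op once the root logger already has handlers, which may well be the case by the time rasa_nlu/twisted imports the component. A possible workaround is to attach a handler to a named logger instead, for example:

import logging
import logging.handlers

# Use a dedicated named logger with its own handler, so it does not
# depend on basicConfig (which does nothing if the root logger is
# already configured by the time this module is imported).
logger = logging.getLogger("stanford_nlp")
logger.setLevel(logging.INFO)

handler = logging.handlers.TimedRotatingFileHandler(
    filename="./log/stanford_nlp.log", when="D", interval=1)
handler.setFormatter(
    logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
logger.addHandler(handler)

logger.info("com_stanford_nlp: logging begin")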

That’s why I said the Rasa NLU docs are missing a lot of things. :smiley:

Seems this is a duplicate of Rasa NLU Server: “error”: “Unable to initialize persistor”, isn’t it?

I think you can solve your problem by specifying the model you want to use in the request. For example:

curl -XPOST http://my_rasa_nlu_server:5001/parse -d '{"q":"hello there", "model": "current"}'
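
If you prefer the GET variant, the same thing can be expressed as query parameters, and the /status endpoint (assuming your rasa_nlu version exposes it, which I believe 0.13 does) shows which projects and models the server actually loaded. The project/model values below are only guesses based on the training command above:

curl 'http://my_rasa_nlu_server:5001/status'
curl 'http://my_rasa_nlu_server:5001/parse?q=hello%20there&project=company_a&model=en'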

Hope it helps.

Yes, kind of. That is my mistake. What I wrote here is my experience with this problem, but it is not completely solved. Even after everything I wrote above, I still get the error “Unable to initialize persistor”, so I would like to ask for help.

Thank you.

Why am I getting only null and greet intents?
I used python -m rasa_nlu.server --path projects
to start the server, but I am unable to load my intents; instead it falls back to only the greet and null intents.