RasaNLUHttpInterpreter: takes from 1 to 4 positional arguments but 5 were given

I am using the RasaNLUHttpInterpreter as stated here to start my server. I pass the class all 4 required parameters (model name, token, server, and project name). However, I always get an error saying that I am apparently passing 5 arguments (which I am not).

The error has occurred since I updated my Rasa Core and NLU to the latest versions. However, as far as I can tell from the docs, I am using the class correctly. Does anyone have an idea what I am doing wrong or what is happening here?

Here is my run_server.py, where I use the RasaNLUHttpInterpreter:

import os
from os import environ as env
from gevent.pywsgi import WSGIServer

from server import create_app
from rasa_core import utils
from rasa_core.interpreter import RasaNLUHttpInterpreter


utils.configure_colored_logging("DEBUG")

user_input_dir = "/app/nlu/" + env["RASA_NLU_PROJECT_NAME"] + "/user_input"
if not os.path.exists(user_input_dir):
    os.makedirs(user_input_dir)

nlu_interpreter = RasaNLUHttpInterpreter(
    'model_20190702-103405', None, 'http://rasa-nlu:5000', 'test_project')

app = create_app(
    model_directory=env["RASA_CORE_MODEL_PATH"],
    cors_origins="*",
    loglevel="DEBUG",
    logfile="./logs/rasa_core.log",
    interpreter=nlu_interpreter)

http_server = WSGIServer(('0.0.0.0', 5005), app)
http_server.serve_forever()

I am using: rasa_nlu~=0.15.1 rasa_core==0.14.5

Hello @threxx, can you show me the error you're getting when you launch your server?

rasa-core_1          | Traceback (most recent call last):
rasa-core_1          |   File "run_server.py", line 23, in <module>
rasa-core_1          |     'model_20190702-103405', None, 'http://rasa-nlu:5000', 'test-project')
rasa-core_1          | TypeError: __init__() takes from 1 to 4 positional arguments but 5 were given
chat-ui_1            | Starting the development server...
chat-ui_1            |
schul_cloud_cui_rasa-core_1 exited with code 1

I think that you’re not providing a valid endpoint, or the path to your NLU model is not correct @threxx

Here is the console output for NLU when I start the server:

rasa-nlu_1           | 2019-07-02 12:13:58+0000 [-] Log opened.
rasa-nlu_1           | 2019-07-02 12:13:58+0000 [-] Site starting on 5000
rasa-nlu_1           | 2019-07-02 12:13:58+0000 [-] Starting factory <twisted.web.server.Site object at 0x7f1e1b31b198>

Doesn’t that mean it is running on port 5000? It worked with that server before the update. What would be the correct one?

But that doesn’t make sense to me - why would it say that I am passing 5 arguments if one of them were merely incorrect? Wouldn’t that produce a different error message?

I tried this now:

nlu_interpreter = RasaNLUHttpInterpreter(
    model_name=env["RASA_NLU_MODEL_NAME"],
    token=env["RASA_NLU_SERVER_TOKEN"],
    server=env["RASA_NLU_SERVER_ADDRESS"],
    project_name=env["RASA_NLU_PROJECT_NAME"])

The env variables come from the Dockerfile. When I do this, I get the following error:

rasa-core_1          | Traceback (most recent call last):
rasa-core_1          |   File "run_server.py", line 20, in <module>
rasa-core_1          |     project_name = env["RASA_NLU_PROJECT_NAME"])
rasa-core_1          | TypeError: __init__() got an unexpected keyword argument 'token'
chat-ui_1            | Starting the development server...
chat-ui_1            |
schul_cloud_cui_rasa-core_1 exited with code 1

However, this is the code that worked before the update to Rasa Core 0.14 and NLU 0.15. What changed? What am I doing wrong?

Hi @threxx @Ahmed

I think the problem is that you are reading the documentation for Rasa version 0.9.4. With 0.14.5, the following signature is valid:

class rasa_core.interpreter.RasaNLUHttpInterpreter(model_name: str = None, endpoint: rasa_core.utils.EndpointConfig = None, project_name: str = 'default')

This can be read here - docs and here - source.

Maybe you should modify your code according to the docs. If you need help with that, please let me know.
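
For illustration, here is a minimal sketch of that signature in use - note that endpoint expects an EndpointConfig rather than a plain URL string; the model and project names below are just the ones from your snippet above:

from rasa_core.interpreter import RasaNLUHttpInterpreter
from rasa_core.utils import EndpointConfig

# the NLU server from your setup, wrapped in an EndpointConfig as the signature expects
nlu_endpoint = EndpointConfig(url="http://rasa-nlu:5000")

nlu_interpreter = RasaNLUHttpInterpreter(
    model_name="model_20190702-103405",
    endpoint=nlu_endpoint,
    project_name="test_project")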

Regards
Julian

@JulianGerhard thanks for that documentation. It helped a bit.

I updated my code like this:

nlu_interpreter = RasaNLUHttpInterpreter(
    model_name=None,
    endpoint="http://rasa-nlu:5000",
    project_name="test_project")

And that’s my new error message:

rasa-core_1 | 2019-07-03 09:10:39 WARNING server - Failed to load any agent model. Running Rasa Core server with out loaded model now. load() got an unexpected keyword argument 'action_factory

What is that “agent model”? And the “action factory”?

Hi @threxx

Actually, you should have trained a model, haven’t you? If you are using the models folder for it, there should be a subdirectory called “nlu”. Can you maybe post a screenshot of that directory?

Regards

This is my directory. I have a “model” folder in the rasa-core folder and a “models” folder in my project folder in rasa-nlu. Which one should I use?

Which command was used for training?

docker-compose run rasa-core python -m rasa_core.train -d data/schul_cloud/domain.yml -s data/schul_cloud/stories.md -o model/schul_cloud

and then

docker-compose run rasa-nlu python -m rasa_nlu.train -c config.yml -d data/schul_cloud -o projects --project schul_cloud

Phew, interesting setup.

Okay - could you please try to use:

nlu_interpreter = RasaNLUHttpInterpreter(
    model_name='schul_cloud',
    endpoint="http://rasa-nlu:5000",
    project_name="test_project")

I am quite sure that it won’t work, but maybe the error message will point us in the right direction.
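
Also, one way to check which project and model names the NLU server actually serves would be to query its status route - a minimal sketch, assuming the rasa_nlu 0.15 HTTP API and the hostname from your compose setup:

import requests

# asks the running NLU server which projects and models it knows about,
# e.g. {'available_projects': {'schul_cloud': {'available_models': [...], ...}}}
status = requests.get("http://rasa-nlu:5000/status").json()
print(status)

The project_name and model_name you pass to the interpreter should match what shows up there.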

The setup is not how it should be, right? I was thinking about starting all over again, since everything got worse and more difficult after the update.

The error was the same:

rasa-core_1 | 2019-07-03 09:38:07 WARNING server - Failed to load any agent model. Running Rasa Core server with out loaded model now. load() got an unexpected keyword argument 'action_factory'

I could give you access to the GitHub repo if that would help, but I don’t really want to take up too much of your time.

Hi @threxx

To be honest, that would be great… I doubt that it will be easy to find the problem via forum ping-pong. Most problems of this kind are solved by understanding the big picture behind the bot and then discussing the best way to implement it.

How about collaborating via GitHub and starting by discussing the functionalities? If you want to do this, just write to juliangerhard21@gmail.com.

As soon as I have figured out the problem in this specific case, I will post the solution here anyway, so that everyone reading this gets something out of it. :slight_smile:

Regards

Hey @JulianGerhard,

That would be awesome, provided it really doesn’t take up too much of your time! I am going to write to you.

Thank you so much!

BR

Hello @threxx & @JulianGerhard,

I would like to know if you managed to solve this issue and, if so, what the solution was. I see that @threxx also asked this on Stack Overflow, and we don’t want to leave that post without a solution - let us know if you need any more assistance in fixing this issue.

Hi @MetcalfeTom

I helped with the project first; now I will investigate the error. I will do this today and leave feedback here and on Stack Overflow.

Regards
Julian

Hi @MetcalfeTom @threxx

I have set up the original environment and, to be honest, I couldn’t reproduce the error. Both rasa_nlu and rasa_core worked as expected. The problem might have its origin in threxx’s case-specific setup. I will investigate this further - maybe there was a dependency incompatibility.
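
If it is a dependency issue, one way to narrow it down would be to print the installed versions inside each container and compare them against a working environment - a small sketch (the script name and invocation are only examples):

import pkg_resources

# print the installed rasa versions; could be run inside each container,
# e.g. docker-compose run rasa-core python check_versions.py
for pkg in ("rasa-core", "rasa-nlu"):
    try:
        print(pkg, pkg_resources.get_distribution(pkg).version)
    except pkg_resources.DistributionNotFound:
        print(pkg, "not installed")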

Tomorrow I will run another test with the original data, but I doubt that this caused the error, since it seems to be something library-specific.

I’ll keep you updated. Should I post this intermediate result to Stack Overflow already?

Regards
