How to run training and replace current RASA model from python script

Hi guys! Nice work on this thread!

The Rasa API has an endpoint to replace the currently loaded model via a PUT request:

PUT http://localhost:5005/model

{
    "model_file": "/absolute-path-to-models-directory/models/20190512.tar.gz"
}

Sometimes I upload the model file using SFTP (FileZilla) and use this API endpoint to replace the currently loaded model without restarting the server.
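For anyone scripting this, here is a minimal sketch in Python (the localhost URL and model path are just examples from this thread; `build_payload` is a hypothetical helper name, not part of Rasa):

```python
import json

# Hypothetical helper: build the JSON body that PUT /model expects.
def build_payload(model_file):
    return json.dumps({"model_file": model_file})

# With a running Rasa server (requires the third-party `requests` package):
#   import requests
#   r = requests.put("http://localhost:5005/model",
#                    headers={"Content-Type": "application/json"},
#                    data=build_payload("/absolute-path-to-models-directory/models/20190512.tar.gz"))
#   print(r.status_code)
```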

Thanks @itsjhonny, interesting. I tried what you had and got it to work with just the single "model_file" line. I had been trying the whole object:

{
    "model_file": "/rasa/mlflow-rasa/models/20220615-144600-cruel-guide.tar.gz",
    "model_server": {
        "url": "http://localhost:5005/model",
        "params": {},
        "headers": {},
        "basic_auth": {},
        "token": "",
        "token_name": "",
        "wait_time_between_pulls": 0
    }
}

FYI, I got that format from the Rasa Open Source documentation.


Hi @itsjhonny and @jonathanpwheat, I'm trying to PUT the model like this:

import json
import requests

obj = {
    "model_file": "C:/Users/Akash/Desktop/MyChatbot/models/core-20220616-185933-soft-cycle.tar.gz"
}
print(obj)

# unload the current model
r = requests.delete('http://localhost:5005/model')
print(r)
# print content of the response
print(r.content)

# load the new model
r = requests.put('http://localhost:5005/model', data=json.dumps(obj))
# check status code of the response (success code: 200)
print(r)
# print content of the response
print(r.content)

When I run this Python script it doesn't work. I'm sure I'm going wrong somewhere but don't know where. It would be great if you could tell me what to do; I'm trying this on localhost. Thanks!

@ShirudeAkash I didn't have to delete the old model; it just swapped in the new one.

The only difference I see is that I have headers in my call; everything else looks pretty much the same.

My code actually trains a new model and then swaps it in within the same script. I get the model info from training_results.model, which is actually models/whatever-the-model-is-named.tar.gz.

Then I build the complete absolute model_file path and use that value in my JSON data object:

import json
import requests

model_path = training_results.model
model_file = '/rasa/mlflow-rasa/{}'.format(model_path)

rasa_server_url = 'http://localhost:5005/model'
headers = {'Content-Type': 'application/json'}
data = {
    "model_file": model_file
}

request = requests.put(rasa_server_url, headers=headers, data=json.dumps(data), verify=False)
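A sketch of that path-building step as a standalone helper (the project root is taken from the snippet above and is specific to this setup; the helper name is just for illustration):

```python
def absolute_model_file(project_root, model_path):
    """Join a project root with the relative model path returned by
    training (e.g. 'models/20220616-131544-piercing-platform.tar.gz'),
    tolerating a trailing slash on the root."""
    return "{}/{}".format(project_root.rstrip("/"), model_path.lstrip("/"))

# Example with the paths from this thread:
# absolute_model_file("/rasa/mlflow-rasa", "models/20220616-131544-piercing-platform.tar.gz")
# -> "/rasa/mlflow-rasa/models/20220616-131544-piercing-platform.tar.gz"
```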

You mention in your comments that you're getting a 200 status code. Are you certain it didn't swap out the model?

I ran rasa run -m models --enable-api --log-file out.log --cors "*" --endpoints endpoints-local.yml --credentials credentials.yml --debug so I could see it working when the new model was injected

@ShirudeAkash I was incorrect and updated my post. I ran rasa (in server mode) with

rasa run -m models --enable-api --log-file out.log --cors "*" --endpoints endpoints.yml --credentials credentials.yml

And used a separate chat client to interact with it.

You can see the change here in the console of the server

2022-06-16 13:13:00 INFO     rasa.core.processor  - Loading model models/20220615-164453-pallid-island.tar.gz...
2022-06-16 13:13:01 INFO     rasa.nlu.utils.spacy_utils  - Trying to load SpaCy model with name 'en_core_web_md'.
2022-06-16 13:13:02 INFO     rasa.nlu.utils.spacy_utils  - Trying to load SpaCy model with name 'en_core_web_md'.
2022-06-16 13:13:27 INFO     root  - Rasa server is up and running.
2022-06-16 13:15:52 INFO     rasa.core.processor  - Loading model /home/jwheat/Code/NearlyHuman/rasa/mlflow-rasa/models/20220616-131544-piercing-platform.tar.gz...
2022-06-16 13:15:52 INFO     rasa.nlu.utils.spacy_utils  - Trying to load SpaCy model with name 'en_core_web_md'.
2022-06-16 13:15:53 INFO     rasa.nlu.utils.spacy_utils  - Trying to load SpaCy model with name 'en_core_web_md'.

Hi @jonathanpwheat, thanks for the help; I have successfully solved the issue.


Hi @ShirudeAkash, I'm trying it like this:

import requests, json

model_file = 'rasa_actions/models/20220622-082403-chilly-surfer.tar.gz'
rasa_server_url = 'http://localhost:5005/model'
headers = {'Content-Type': 'application/json'}
data = {
    "model_file": model_file
}

request = requests.put(rasa_server_url, headers=headers, data=json.dumps(data), verify=False)
print(request.status_code)

I am getting a 400 response ("rasa_actions/models/20220622-082403-chilly-surfer.tar.gz could not be loaded."). What do I need to change? I started the server with `rasa run -m models --enable-api --log-file out.log --cors "*" --endpoints endpoints.yml --credentials credentials.yml`.

It worked now:

import requests, json

model_file = 'models\\20220622-084642-blaring-bumper.tar.gz'
rasa_server_url = 'http://localhost:5005/model'
headers = {'Content-Type': 'application/json'}
data = {
    "model_file": model_file
}

request = requests.put(rasa_server_url, headers=headers, data=json.dumps(data), verify=False)
print(request.status_code)

After 3 days the PUT request finally worked; the Rasa HTTP documentation was no help.

thanks everyone @jonathanpwheat


Sorry, I was just seeing this and spotted the incorrect model_file path right away.

Glad you got it sorted out!


In Postman you have to pass the body parameters like this:

{
    "model_file": "models\\nlu-20220627-133551-edee-g.tar.gz",
    "rasa_server_url": {
        "url": "http://localhost:5005/model"
    }
}

For a Linux server you have to change it to use forward slashes:

{
    "model_file": "models//nlu-20220627-133551-d-e.tar.gz",
    "rasa_server_url": {
        "url": "http://localhost:5005/model"
    }
}

How can I train the model faster? I am using a GPU but it's very slow. @ShirudeAkash

@jonathanpwheat How can I train the model faster? I am using a GPU but it's very slow. My server is the AWS Deep Learning AMI (Ubuntu 18.04) from the AWS Marketplace. Since the epochs take a long time, should I use an NPU or FPGA?

Hi @kalpa916, I’m sorry I don’t have an answer to that, I don’t utilize GPU training.

I did find that some people pointed to this (Install TensorFlow 2), and they also said they had to reinstall tensorflow-gpu before Rasa model training started using the GPU.

Someone mentioned having to install tensorflow-gpu instead of tensorflow, not alongside it.


Thanks @jonathanpwheat. Can you answer one more thing: for 30 intents (20-22 examples each), is a training time of 5 to 6 minutes on a decent-sized server normal?

30 intents shouldn't be an issue; however, the training time will depend on the number of NLU examples you have for each intent. 10 examples each should be fine, but if you have hundreds of examples, it'll take a while.

I have a small talk “skill” I built that takes about 45 minutes to train on its own. It has 89 intents and on average 40-50 examples for each


Thanks @jonathanpwheat