Hi @itsjhonny and @jonathanpwheat,
I'm trying to put the model like this:

```python
import json

import requests

# Absolute path to the trained model archive
obj = {
    "model_file": "C:/Users/Akash/Desktop/MyChatbot/models/core-20220616-185933-soft-cycle.tar.gz"
}
print(obj)

# Unload the current model first
r = requests.delete('http://localhost:5005/model')
print(r)
# print content of the response
print(r.content)

# Load the new model; a 200 status code means success
r = requests.put('http://localhost:5005/model', data=json.dumps(obj))
print(r)
# print content of the response
print(r.content)
```
and running this Python script. I'm sure I'm going wrong somewhere, but I don't know what the answer is.
It would be great if you could tell me what I can do about this.
I'm trying this on localhost.
Thanks
@ShirudeAkash I didn't have to delete the old model; it just swapped in the new one.
The only difference I see is that I have headers in my call; everything else looks pretty much the same.
My code actually trains a new model and then swaps it in within the same script. I get the model info from `training_results.model`, which is actually `models/whatever-the-model-is-named.tar.gz`,
then I build the complete absolute path for `model_file` and use that value in my JSON data object.
You mention in your comments that you're getting a 200 code. Are you certain it didn't swap out the model?
I ran `rasa run -m models --enable-api --log-file out.log --cors "*" --endpoints endpoints-local.yml --credentials credentials.yml --debug` so I could see it working when the new model was injected.
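As a rough sketch of that flow (the server URL, model filename, and helper names below are assumptions for illustration, not the exact script), the key detail is the explicit `Content-Type` header on the PUT call:

```python
import json
import os


def build_model_payload(model_path: str) -> dict:
    """Build the JSON body for PUT /model from an absolute model path."""
    return {"model_file": os.path.abspath(model_path)}


def swap_model(model_path: str, server_url: str = "http://localhost:5005"):
    """Tell a running Rasa server to load a new model archive."""
    import requests  # third-party; only needed when actually calling the server

    return requests.put(
        f"{server_url}/model",
        data=json.dumps(build_model_payload(model_path)),
        # Explicit Content-Type header -- the difference noted above
        headers={"Content-Type": "application/json"},
    )


# Example payload only (no server call made here):
payload = build_model_payload("models/20220616-185933-soft-cycle.tar.gz")
print(payload["model_file"])
```

The absolute path matters because the server resolves `model_file` relative to its own working directory, which may not match the client's.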
@ShirudeAkash I was incorrect and have updated my post. I ran Rasa (in server mode) with
`rasa run -m models --enable-api --log-file out.log --cors "*" --endpoints endpoints.yml --credentials credentials.yml`
and used a separate chat client to interact with it.
You can see the change here in the console of the server:

```
2022-06-16 13:13:00 INFO rasa.core.processor - Loading model models/20220615-164453-pallid-island.tar.gz...
2022-06-16 13:13:01 INFO rasa.nlu.utils.spacy_utils - Trying to load SpaCy model with name 'en_core_web_md'.
2022-06-16 13:13:02 INFO rasa.nlu.utils.spacy_utils - Trying to load SpaCy model with name 'en_core_web_md'.
2022-06-16 13:13:27 INFO root - Rasa server is up and running.
2022-06-16 13:15:52 INFO rasa.core.processor - Loading model /home/jwheat/Code/NearlyHuman/rasa/mlflow-rasa/models/20220616-131544-piercing-platform.tar.gz...
2022-06-16 13:15:52 INFO rasa.nlu.utils.spacy_utils - Trying to load SpaCy model with name 'en_core_web_md'.
2022-06-16 13:15:53 INFO rasa.nlu.utils.spacy_utils - Trying to load SpaCy model with name 'en_core_web_md'.
```
I am getting a 400 response (`rasa_actions/models/20220622-082403-chilly-surfer.tar.gz' could not be loaded.`). What do I need to change after running the command `rasa run -m models --enable-api --log-file out.log --cors "*" --endpoints endpoints.yml --credentials credentials.yml`?
In Postman you have to pass the body parameters like:

```json
{
  "model_file": "models\\nlu-20220627-133551-edee-g.tar.gz",
  "rasa_server_url": { "url": "http://localhost:5005/model" }
}
```

For a Linux server you have to change it to use forward slashes:

```json
{
  "model_file": "models/nlu-20220627-133551-d-e.tar.gz",
  "rasa_server_url": { "url": "http://localhost:5005/model" }
}
```
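If you build the body in Python rather than Postman, you can avoid hard-coding separators entirely. A minimal sketch (the archive name below is a made-up placeholder) using `pathlib`, whose `as_posix()` emits forward slashes on any OS:

```python
from pathlib import PurePath

# Hypothetical model archive name, for illustration only
model = PurePath("models", "nlu-example.tar.gz")

# as_posix() always produces forward slashes, which a Linux server
# accepts; str(model) would use the native separator instead.
body = {"model_file": model.as_posix()}
print(body["model_file"])  # models/nlu-example.tar.gz
```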
Hi @kalpa916, I'm sorry, I don't have an answer to that; I don't use GPU training.
I did find that some people pointed to this ( Install TensorFlow 2 ), and they also said they had to reinstall tensorflow-gpu before it started working with Rasa model training.
Someone mentioned having to have tensorflow-gpu installed instead of, not alongside, tensorflow.
Thanks @jonathanpwheat.
Can you answer one thing for me:
for 30 intents (20-22 examples each), is it normal for training to take 5 to 6 minutes on a decent-sized server?
30 intents shouldn't be an issue; however, the time to train will depend on the number of NLU examples you have for each intent. 10 examples each should be fine, but if you have hundreds of examples, it'll take a while.
I have a small-talk "skill" I built that takes about 45 minutes to train on its own. It has 89 intents and on average 40-50 examples for each.