First, I ran `rasa train` to obtain my model: `models/20200804-025637.tar.gz`. When I run `rasa shell -m models/20200804-025637.tar.gz` or `rasa x`, my bot performs as expected. However, when I deploy Rasa to a server via `rasa run -m models/20200804-025637.tar.gz --enable-api`, the performance is much worse than in the shell. Here is my code to interact with the bot (mimicking the shell interaction) via Rasa's HTTP API:
```python
import requests

conversation_id = input("Conversation ID: ")
url = f"http://my-server:5005/conversations/{conversation_id}/"
messages_url = url + "messages"
predict_url = url + "predict"

while True:
    message = input("Input: ")
    payload = {
        "text": message,
        "sender": "user",
    }
    # Append the user message to the conversation tracker.
    messages_response = requests.post(messages_url, json=payload)
    if messages_response.status_code != 200:
        raise Exception("Networking error.")
    # Ask Core to score the next actions for this tracker.
    predict_response = requests.post(predict_url)
    predict_data = predict_response.json()
    print(predict_data["scores"][0]["action"])
```
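For comparison, this is a minimal sketch of the same exchange going through Rasa's REST channel endpoint (`/webhooks/rest/webhook`) instead of the tracker endpoints above. It assumes the `rest` channel is enabled in `credentials.yml`; the server address and sender ID are placeholders:

```python
import requests

REST_WEBHOOK_PATH = "/webhooks/rest/webhook"


def rest_webhook_url(server: str) -> str:
    """Build the REST channel endpoint URL for a given server address."""
    return server.rstrip("/") + REST_WEBHOOK_PATH


def send_message(server: str, sender: str, message: str) -> list:
    """Send one user message through the REST channel and return bot replies.

    The REST channel runs the full message-processing loop server-side and
    responds with a list of bot messages, e.g.
    [{"recipient_id": "user", "text": "Hey!"}].
    """
    response = requests.post(
        rest_webhook_url(server),
        json={"sender": sender, "message": message},
    )
    response.raise_for_status()
    return response.json()
```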
The NLU performance seems as expected; however, Core's predicted next actions do not follow what I see in the shell. What am I missing?