Hi! I am new to Rasa, but I think I finally get it and have built my first bot. Now I want to access my bot through an API (sending messages over REST and getting the bot's response), and I think I need to use the webhooks API since I can't deploy Rasa X on a server; I can only run it on my local machine. My problem is that I need more classification information for a sent message, but the webhooks API only delivers the bot's response. By classification information I mean the attributes I get when I test messages with "rasa shell nlu" in the terminal. I hope that makes sense. Does anyone have some advice? Or maybe webhooks is the wrong approach for running Rasa on my local machine for sending messages and getting responses? Is there another API? I think I still lack some information about how to deploy my bot and let users have a conversation with it…
I think I get it: you send a message using webhooks and then fetch the tracker information for the same conversation_id using /conversations/:id/tracker. Is this the only way?
I think I need to do this using the webhooks API since I can’t deploy Rasa X on a server
You could use either the REST or Webhooks interface from your local machine.
fetch the tracker information for the same conversation_id using /conversations/:id/tracker
Yes, that is the correct endpoint to call after a message has been sent through any input channel (REST or WebSocket). The docs for this endpoint are here.
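To make the two-step flow concrete, here is a minimal Python sketch (stdlib only) that sends a message through the REST channel and then reads the tracker back. The base URL, sender id, and the helper names are assumptions for illustration; it assumes a Rasa server running locally on the default port 5005 with the REST channel enabled.

```python
import json
import urllib.request  # stdlib; any HTTP client works just as well

RASA_URL = "http://localhost:5005"  # assumption: default Rasa server port
SENDER_ID = "user-123"              # hypothetical conversation id

def post_json(url, payload):
    """POST a JSON payload and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def get_json(url):
    """GET a URL and return the decoded JSON response."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def latest_parse_data(tracker):
    """Return the NLU parse data (intent, entities, ...) of the most
    recent user message on the tracker, or None if there is none."""
    for event in reversed(tracker.get("events", [])):
        if event.get("event") == "user":
            return event.get("parse_data")
    return None

# 1. Send a message through the REST input channel:
# replies = post_json(f"{RASA_URL}/webhooks/rest/webhook",
#                     {"sender": SENDER_ID, "message": "hello"})
# 2. Fetch the tracker for the same conversation id:
# tracker = get_json(f"{RASA_URL}/conversations/{SENDER_ID}/tracker")
# print(latest_parse_data(tracker))
```

The tracker's "user" events carry the same parse data you see in rasa shell nlu, so extracting the last one gives you the classification for the latest message.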
Hi Greg, thank you! The REST and webhooks interfaces are the same, right? They are both at /webhooks/rest/webhook.
Yes!
I implemented calling this API to get the tracker information. But somehow it takes a long time until the information for the latest messages appears in the tracker. Even when I wait two seconds after the message was sent via the webhooks API, the classification information still doesn't appear in the tracker.
Two seconds is a long time. You should review the hardware requirements.
I’m not sure what you mean by “the classification information still doesn’t appear”. Can you provide the endpoint you are calling and the response details (status code and result)?
I mean the classification information (intents, entities, etc.) in the conversation tracker for the latest message sent by the user. When I keep polling the API for the latest tracker, it sometimes (not always) takes up to two seconds until the information for the latest message appears. That is strange, because the bot's answer to the user is sent right away, and that answer should be based on the tracker, right?
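Since the tracker can lag behind the bot's reply, one workaround is to poll with a short interval and a deadline instead of a fixed sleep. This is a sketch under assumptions: the function name is hypothetical, and the fetcher is passed in as a callable so it works with any HTTP client.

```python
import time

def wait_for_parse_data(fetch_tracker, seen_user_events, timeout=5.0, interval=0.2):
    """Poll the tracker until a user event beyond the `seen_user_events`
    already observed carries parse_data, or until `timeout` expires.

    fetch_tracker: zero-argument callable returning the tracker dict
                   (e.g. a GET on /conversations/<id>/tracker).
    Returns the parse data dict, or None on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        tracker = fetch_tracker()
        user_events = [e for e in tracker.get("events", [])
                       if e.get("event") == "user"]
        # A new user event has landed and its NLU data is present:
        if len(user_events) > seen_user_events and user_events[-1].get("parse_data"):
            return user_events[-1]["parse_data"]
        time.sleep(interval)  # back off briefly before re-polling
    return None
```

Polling every 200 ms with a 5-second cap reacts as soon as the event lands instead of always paying a fixed two-second wait, though the event-stream approach below in the thread avoids polling entirely.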
The hardware requirements are met, but I had already thought of this as well. I'm running Rasa in Docker on Linux. Are there any configuration changes I need to make? docker stats tells me that each container can use up to 8 GB of RAM. While training the model, the worker container's CPU usage goes up to 300%. Does that seem okay?
It’s generally the worker that needs the memory during training, and that training CPU usage is normal. It should take all the CPU and memory it can get for training.
Thanks for clarifying the two-second delay. Rather than making API calls to try to keep up with the tracker, I think you’d be better off tapping into the live tracker event stream that is already being sent to RabbitMQ. I did this for a recent personal project of mine and described it in a blog post here.
Greg
Hi Greg, that’s a great idea! I will tap into that. Thank you very much