Hi
After upgrading Rasa X, I've noticed that the chatbot's responses have become slower. It is also difficult to create a complete chatbot story, and challenging to add corrections to a conversation in interactive mode.
I’ll chime in to say that when I’m in interactive mode, I can’t actually change the responses the bot gives AFTER they’ve been posted, meaning I have to find the actual story under stories and rewrite it there.
Docker version: 18.09.2, build 6247962
Python version: 3.6.9
Rasa version: 1.3.6
Host operating system: Windows 10
Kernel: 4.9.125-linuxkit
Size of model: 9169 KB
Size of data: 15 KB
While conversing with Rasa, the responses are quite slow.
For example, if I ask Rasa “please help to buy pizza”,
the chatbot takes a long time to respond (around 15 to 60 seconds).
Sometimes Rasa does not respond at all (it keeps running …).
Once we switch the user conversation to interactive mode, it is quite difficult to correct an action or utterance. Sometimes Rasa reshuffles all the dialogue turns, or duplicates the previous turn. For example:
How are you
How are you
I am gemmy
I am gemmy
I’m facing the same issue. The bot takes 1–3 seconds on average to reply. My bot has about 10 intents, 2 forms, 160 NLU examples, and 2 actions.
Environment:
I’m facing the same issue as well on rasa shell, rasa interactive, and programmatically in a Jupyter notebook, going from `from rasa.core.agent import Agent` to `await agent.handle_text(message)`. The bot takes 2–6 seconds to reply. My bot has 6 intents, 10 entities (2 of them from Duckling and spaCy), 6 featurized slots, 3 actions, and 1 form action.
Environment:
Windows 10 Home
Rasa 1.9
Python 3.7
I observed the action server in debug mode, and the custom actions run quite fast; the NLU part takes 0.08–0.09 s in a Jupyter notebook when I load the NLU model with `from rasa.nlu.model import Interpreter`. The slow part comes right after I send input to the bot via any of the methods above (shell, interactive, or Jupyter): the action server receives the request to run an action from Rasa Core 1–2 seconds after I enter the message. Everything runs locally with the endpoint http://localhost:5055/webhook
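For anyone who wants to reproduce this kind of measurement, here is a minimal timing sketch. The `timed` helper and `fake_parse` stub are my own illustrative names; in a real Rasa 1.x setup you would replace the stub with `Interpreter.load("models/nlu").parse(text)` from `rasa.nlu.model` to get the actual NLU latency, and wrap `agent.handle_text(...)` the same way to see the end-to-end gap.

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, print elapsed wall-clock time, and return (result, elapsed)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f}s")
    return result, elapsed

# Stand-in for interpreter.parse(text) so this snippet runs anywhere;
# swap in the real Rasa Interpreter to measure actual NLU time.
def fake_parse(text):
    time.sleep(0.05)  # simulated model latency
    return {"intent": {"name": "order_pizza", "confidence": 0.97}, "text": text}

result, nlu_time = timed("NLU parse", fake_parse, "please help to buy pizza")
```

Timing each stage separately (NLU parse, core prediction, action server round trip) is what narrows the 1–2 second gap down to a specific component.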
@tomgun132 please create a new post for your question. My question to the OP about rasa shell was to compare about their slow time in Rasa X, your issue with the python API is not the same.
Hi @kiranlvs93, thanks for the update. By slow, do you mean ~2 seconds? If so, this is a known issue we’re working to resolve. In the meantime, you can apply a workaround by disabling telemetry at this endpoint: HTTP API
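As a side note, for Rasa Open Source (2.0+) — separate from the Rasa X HTTP API endpoint mentioned above — telemetry can be switched off with an environment variable or the CLI. This is a sketch assuming a 2.x+ install; check `rasa --version` before relying on it:

```shell
# Respected by rasa shell / rasa run on startup;
# equivalently, run: rasa telemetry disable
export RASA_TELEMETRY_ENABLED=false
```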
I've hit the same problem; it’s very painful to work with Rasa on a local machine. I don’t know why it’s so slow, since CPU and memory usage are both low. However, on our production machine with the official Docker image rasa/rasa-sdk:3.3.0 it’s very fast. I’m wondering whether there are any optimizations in that image that I can reproduce on my local machine?