I’m just reading about NLG and it looks great for our use case. I just want to validate one thing with you: we’re using the callback channel. If NLG responses are stored on the same machine the callback is later sent to, is it safe to save one step and ignore the callback?
Because here is how it would work:
1. The user writes a message; the message is received by our app.
2. The message is sent to the Rasa server.
3. The Rasa server predicts the response.
4. The Rasa server sends a request to the NLG endpoint (our app): “hey, give me the response text for this”.
5. NLG responds; Rasa takes the text.
6. Rasa sends the text to the callback endpoint (our app).
7. The response is resent to the user by the callback endpoint.
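The round trip above could be sketched in plain Python like this (no web framework, and all the names — `RESPONSES`, `rasa_round_trip`, the payload shape — are my own placeholders, not Rasa’s actual API):

```python
# Plain-Python sketch of the full flow: Rasa asks our NLG endpoint
# for the text, then posts the final message to our callback endpoint.

RESPONSES = {"utter_greet": "Hello!"}  # hypothetical response store in our app

outbox: list[str] = []  # stands in for the user's messaging channel


def nlg_endpoint(payload: dict) -> dict:
    """Our app's NLG endpoint: Rasa asks for the text of a response name."""
    template = payload.get("response", "")
    return {"text": RESPONSES.get(template, "Sorry, I didn't get that.")}


def callback_endpoint(message: dict) -> None:
    """Our app's callback endpoint: Rasa posts the final bot message here."""
    send_to_user(message["text"])


def send_to_user(text: str) -> None:
    outbox.append(text)


def rasa_round_trip(user_message: str) -> None:
    # What Rasa does in between: predict a response name (stubbed here),
    # fetch its text from the NLG endpoint, post it to the callback endpoint.
    predicted = "utter_greet"
    nlg_reply = nlg_endpoint({"response": predicted})
    callback_endpoint({"text": nlg_reply["text"]})
```

Note that the user-facing message only ever leaves through `callback_endpoint` — that’s the step the rest of the question is about.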
So now: is it safe to omit steps 5, 6 and 7? Then this would happen:
1. The user writes a message; the message is received by our app.
2. The message is sent to the Rasa server.
3. The Rasa server predicts the response.
4. The Rasa server sends a request to the NLG endpoint (our app): “hey, give me the response text for this”.
5. NLG responds with the text, but also sends the response text to the user directly.
6. Rasa sends the text to the callback endpoint, but it is ignored.
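In the same placeholder style, the shortened flow would look like this — the NLG endpoint both answers Rasa and pushes the text to the user, while the callback endpoint becomes a no-op (again a sketch, not Rasa’s actual API):

```python
# Sketch of the shortened flow: the user-facing send happens inside
# the NLG endpoint, and the callback endpoint deliberately does nothing.

RESPONSES = {"utter_greet": "Hello!"}  # hypothetical response store in our app

sent: list[str] = []  # messages delivered to the user


def nlg_endpoint(payload: dict) -> dict:
    text = RESPONSES.get(payload.get("response", ""), "Sorry, I didn't get that.")
    sent.append(text)      # send to the user directly from NLG
    return {"text": text}  # Rasa still expects a valid NLG response


def callback_endpoint(message: dict) -> None:
    pass  # Rasa still posts here, but we ignore the payload
```

The catch is that the NLG endpoint now has two jobs: rendering response text for Rasa and delivering messages to the user.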
It just seems to me like there’s one extra, needless step.
Based on the workflow you’ve described, I assume you want the NLG server to send the response to the user directly as part of answering Rasa’s request. That would go against the idea of separated responsibilities and microservices, imo.
What is the main concern here? I wouldn’t expect your users to experience any delays, since you’ve mentioned that the bot and the NLG server run on the same machine.
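For reference, wiring both endpoints up in Rasa looks roughly like this (URLs are placeholders; the keys follow the `endpoints.yml` / `credentials.yml` conventions as I remember them, so double-check against your Rasa version’s docs):

```
# endpoints.yml -- point Rasa at our app's NLG endpoint
nlg:
  url: "http://localhost:5055/nlg"

# credentials.yml -- the callback channel posts bot responses to our app
callback:
  url: "http://localhost:5055/callback"
```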