I’m attempting to build a complex Rasa contextual conversational AI model. In the bot’s outputs, I want the speech to be dynamically generated, using things like synonyms, so the bot doesn’t always output the same exact phrase and doesn’t feel so repetitive. I’m already using an action server, but I see that Rasa also supports using an NLG server. My question is: is there a benefit to using both an action server and an NLG server (in my case they’d be the same server), or should I just make every action a custom action? I guess what I’m mainly asking is: is there a benefit to having Rasa actions be responses that get their content from an NLG server, or is that basically the same as making every action a custom action and outputting the responses in the custom action?
I know this question is really old now, but maybe we can still work it out. I’m working on the same problem at the moment, though with a bit more that I’ve tried and learned since.
By default, the Rasa assistant answers with the sentences (or actions) you have hard-coded in the domain.
If you now want to use an external NLG server, you lose this. For example, if you pointed your Rasa assistant at ChatGPT’s API as an NLG server, it would no longer respond with the sentences from the domain; it would then work only through that NLG server. It’s mostly all or nothing.
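For reference, an NLG server (external or your own) is wired up in `endpoints.yml`; once this is set, Rasa fetches response texts from that server instead of the domain. The URL and port here are just an example, not anything from your setup:

```yaml
# endpoints.yml -- example URL; point it at your actual NLG server
nlg:
  url: "http://localhost:5056/nlg"
```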
But I’m trying to use a template-driven NLG server instead.
The templates live in the domain, too; look in the Rasa docs. You can write them just like your normal responses in the domain. Then, with a bit of work and software, your Rasa assistant can generate free-form sentences in response to your questions, built from the templates in your domain. As far as I understood, the assistant uses them as a kind of mini-example and is then able to generate full sentences from them as a base, with more or less good results…
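As a side note on the repetitiveness problem from the original question: even without any NLG server, Rasa lets you list several variations under one response in the domain and picks one at random each time. A minimal `domain.yml` fragment (the response name and texts are made up):

```yaml
responses:
  utter_greet:
    - text: "Hey there!"
    - text: "Hello, nice to see you!"
    - text: "Hi! How can I help?"
```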
And the last possibility is to program your own NLG server. I’ve never tried anything like that.
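To make that option more concrete, here is a minimal sketch of what such a server could look like, using only the Python standard library. It assumes the Rasa NLG protocol as I understand it: Rasa POSTs JSON containing the response name (under `response` in recent versions, `template` in older ones) and expects JSON with a `text` field back — check the Rasa docs for your version. The response names and variation texts are made up:

```python
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hand-written variation pools per response name (names are examples).
VARIATIONS = {
    "utter_greet": ["Hey there!", "Hello!", "Hi, good to see you."],
    "utter_goodbye": ["Bye!", "See you later.", "Take care!"],
}

def nlg_payload(request_body):
    # Rasa sends the response name under "response" (newer versions)
    # or "template" (older versions), plus tracker state we ignore here.
    name = request_body.get("response") or request_body.get("template")
    texts = VARIATIONS.get(name, ["Sorry, I have nothing prepared for that."])
    return {"text": random.choice(texts)}

class NLGHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = json.dumps(nlg_payload(body)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

# To serve it: HTTPServer(("", 5056), NLGHandler).serve_forever()
```

Because each variation pool is just a Python list, you could swap in real synonym generation later without touching the protocol handling.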
Could be interesting.
Until now I’ve been using a stand-in: a custom action in Python, inspired by the old “AI” Eliza. I wrote a new rule:
if my NLU fallback classifier would normally kick in, because my assistant doesn’t understand what you said or doesn’t know the intent, then this action runs instead and waits for user input (true).
This action works a bit like the original Eliza.
It takes the last user input, the one the assistant didn’t understand.
Then it searches a huge list for a tuple whose user-input pattern matches what it just got, and uses the answer from that tuple.
So it can ask you about things it doesn’t even know.
But it’s not perfect, and it’s only a temporary replacement for a real NLG server.
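The matching core of such an Eliza-style fallback can be sketched in a few lines of plain Python. The patterns, replies, and function name here are all invented for illustration; in a real setup the list would be much longer:

```python
import random
import re

# (pattern, possible replies) pairs, Eliza-style; {0} is filled with
# the captured part of the user's sentence. Patterns are examples only.
RULES = [
    (r"\bI need (.+)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"\bI am (.+)", ["How long have you been {0}?", "Why are you {0}?"]),
]
DEFAULT_REPLIES = ["Tell me more.", "I see. Please go on."]

def eliza_reply(user_text):
    """Return a canned reply for text the NLU pipeline didn't understand."""
    for pattern, replies in RULES:
        match = re.search(pattern, user_text, re.IGNORECASE)
        if match:
            return random.choice(replies).format(match.group(1))
    return random.choice(DEFAULT_REPLIES)

# Inside a Rasa custom action's run() you would call it roughly like:
#   text = tracker.latest_message.get("text", "")
#   dispatcher.utter_message(text=eliza_reply(text))
```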
Have you found a solution by now?