Hi @striki-ai. Welcome to the Forum!
What exactly do you mean by “extending”?
By extending I mean handling low-confidence intents by sending them to Blender and relaying its responses back to the user. Something like a custom policy.
This should be possible with a custom policy, in principle, but I don’t think it is a good way forward. With neural NLG models you don’t have much control over the bot’s response, and they are generally good at generating plausible-sounding nonsense (see GPT-3: Careful First Impressions), which is usually not what you want to send to users. I think it is more important to create helpful and factually correct assistants than engaging, human-like ones.
@striki-ai Were you able to integrate with ParlAI blender?
Could you explain more about how this can be done? For example, BlenderBot is available via the Hugging Face library… so if we were to use it as a fallback, how do we achieve that within Rasa?
It is a seq2seq model, so we will need to handle the dialogue management and history on the Rasa side and send the prompt plus chat history to the model for each response. So I am trying to understand two aspects:
- How to bring a huggingface seq2seq model into Rasa using action functions/custom components/custom policy?
- Can the model be hosted within Rasa, or should it be hosted elsewhere and just called via an endpoint?
Thanks in advance!
The simplest method, I think, is to run this as a custom action. Once a fallback is triggered, you can point it to a custom action, which can run arbitrary Python code, including anything from Hugging Face. Custom actions are explained in our docs as well as on our YouTube channel.
I should stress, though, that having an algorithm generate responses can (and will) result in unsafe behavior. As explained here, there’s a lot that can go wrong.
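If you do decide to go this route despite those caveats, the wiring in Rasa is done with the `FallbackClassifier` plus a rule. A sketch, where the threshold value and the action name (pointing at whatever custom action you wrote) are assumptions:

```yaml
# config.yml (pipeline excerpt) – threshold is an example value
pipeline:
  # ... your other NLU components ...
  - name: FallbackClassifier
    threshold: 0.6

# rules.yml – route low-confidence messages to the custom action
rules:
  - rule: Send low-confidence messages to BlenderBot
    steps:
      - intent: nlu_fallback
      - action: action_blenderbot_fallback
```

The `FallbackClassifier` predicts the `nlu_fallback` intent whenever no intent clears the confidence threshold, and the rule then triggers the custom action.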