I’m developing a chatbot using the Rasa stack. At the beginning this bot will just be a FAQ bot, but in the future we will have actions, and that’s why we chose the Rasa stack. I read many topics in the forum about using Rasa for a FAQ use case, but I didn’t like the recommended approaches. My idea is to write my own policy to handle this. Does anyone have experience developing a policy like this? I haven’t found examples of any kind of custom policy, just the default ones.
Another question: do you think this is a good approach to solve the problem? The main goal is to create a structure that allows saving questions and answers in some database that users can extend without having to change Rasa intents and generate new models.
Just to understand your goal:
You want to design a policy that focuses on reinforcement learning for your bot?
Whatever the user types to the bot, the bot will store it for future training. As the trainer, you can then correct your bot’s stored intents.
I am also interested in that, haha. Hope someone can explain it too.
Currently, I see most existing chatbot companies handle it by integrating a supervised-learning page for the back-end user. When the confidence is below a threshold, the default fallback shows up and the user’s message is stored for manual allocation later on.
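The mechanism described above can be sketched in a few lines of plain Python. Everything here (function and variable names, the threshold value) is illustrative, not part of any Rasa API:

```python
FALLBACK_THRESHOLD = 0.3
review_queue = []  # messages stored for manual intent allocation later on

def handle_parsed_message(text, intent, confidence):
    """Answer confidently classified messages; queue the rest for review."""
    if confidence < FALLBACK_THRESHOLD:
        review_queue.append({"text": text, "predicted_intent": intent})
        return "Sorry, I didn't understand that."
    return f"(answer for intent '{intent}')"
```

The trainer later walks through `review_queue`, corrects the predicted intents, and retrains the model with the new examples.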
Hi @cassiofariasmachado, @gcgloven
I would also be interested in the development of a custom policy. As far as I have seen, there is no way to easily derive one, as is possible with the pipeline elements.
However, if you take a closer look at e.g. the FallbackPolicy, it doesn’t seem very complex to design and implement one that is custom but working.
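For reference, the core of a policy is a `predict_action_probabilities` method that returns one probability per action in the domain; the action with the highest probability is executed. Here is a framework-free sketch of that idea. The class name, the simplified method signature (real Rasa policies receive tracker and domain objects, not a plain intent string and a list), and the action name are all assumptions for illustration:

```python
class SimpleFaqPolicy:
    """Sketch of a custom policy: always route 'faq' intents to a FAQ action."""

    def __init__(self, faq_action="action_faq"):
        self.faq_action = faq_action  # hypothetical action name

    def predict_action_probabilities(self, latest_intent, action_names):
        """Return one probability per action; the argmax gets executed."""
        probabilities = [0.0] * len(action_names)
        if latest_intent == "faq" and self.faq_action in action_names:
            probabilities[action_names.index(self.faq_action)] = 1.0
        return probabilities
```

A real implementation would subclass Rasa’s `Policy` base class and read the intent from `tracker.latest_message`, but the prediction logic stays this simple.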
The question is: Why do you want to solve your problem with a policy? Wouldn’t it be easier to simply store those question/answer pairs for later use e.g. with an external tracker store that provides a proper API? What do you mean by “that user can extend without have to change Rasa intents” ?
No, I just don’t want to use Rasa for our FAQ questions, just for the actions with side effects. But I think your idea can also be achieved with a custom policy.
We don’t want to use Rasa for our FAQ questions. We want, for example, to search for the response on another provider, and if it finds an answer with a defined score, that should be the answer returned to the user.
do you mean to give the user the proper response asynchronously (e.g. later the day) or do you mean that you want to manage the responses to a given intent externally such that you could manage its content aside from rasa?
The second one: I want to manage the responses aside from Rasa.
Solution: We decided to create an action that checks whether the user’s question can be answered by the FAQ, or whether the bot should simply respond with a default fallback like “sorry, I couldn’t understand”.
from typing import Any, Dict, List, Text
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

class ActionFaq(Action):
    FAQ_THRESHOLD = 0.3

    def __init__(self, faq_service: FaqService):
        self.faq_service = faq_service

    def name(self) -> Text:
        return "action_faq"

    def run(self, dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        latest_user_message = tracker.latest_message.get('text')
        faq = self.faq_service.search_answer(latest_user_message)
        confidence = faq.get('confidence', 0)
        answer = faq.get('answer')
        if confidence > self.FAQ_THRESHOLD and answer is not None:
            dispatcher.utter_message(answer)
        else:
            dispatcher.utter_message("Sorry, I couldn't understand.")
        return []
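The `FaqService` used by the action isn’t shown in the thread. A minimal sketch, assuming an in-memory list of question/answer pairs and a naive word-overlap score — a real implementation would query an external search provider instead:

```python
class FaqService:
    """Toy FAQ lookup: scores candidate questions by word overlap."""

    def __init__(self, faqs):
        # faqs: list of {"question": ..., "answer": ...} dicts
        self.faqs = faqs

    def search_answer(self, text):
        """Return {"confidence": ..., "answer": ...} for the best match."""
        words = set(text.lower().split())
        best = {"confidence": 0, "answer": None}
        for faq in self.faqs:
            q_words = set(faq["question"].lower().split())
            overlap = len(words & q_words) / max(len(q_words), 1)
            if overlap > best["confidence"]:
                best = {"confidence": overlap, "answer": faq["answer"]}
        return best
```

Because the answers live in plain data handed to the service, users can extend the FAQ without touching Rasa intents or retraining a model, which was the original goal.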
# Configuration for Rasa NLU.
# Configuration for Rasa Core.
policies:
  - name: MemoizationPolicy
  - name: KerasPolicy
  - name: MappingPolicy
  - name: FallbackPolicy
@akelad @Juste Bump on this one. Is it possible to plug our own custom policy into the pipeline, similar to how the NLU pipeline can take a custom component? I.e., given a particular condition, we want to force the prediction of a particular action that gets executed at all times — in other words, bypass the bot’s dialogue prediction engine.
The answer to my own question is yes here as well. In general, anything in Rasa can be plugged in: channels, trackers, policies.
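For policies concretely, a custom class can be referenced in `config.yml` by its module path instead of one of the built-in names; extra keys are passed to the policy’s constructor. The module path and parameter below are hypothetical:

```yml
policies:
  - name: "custom_policies.faq_policy.FaqPolicy"
    faq_action: "action_faq"
```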
I had a similar idea to develop a FAQ bot, using a custom policy to route all the questions to a FAQAction. I wasn’t sure whether that’s a best practice, though.