Insert form within faq pattern with retrieval intents

I’m trying to do something like the faq pattern from the docs, but inserting forms between the initial retrieval intent and its response. Something like this:

- rule: form faq start
  steps:
  - intent: form_faq
  - action: my_form
  - active_loop: my_form

- rule: resume after form
  condition:
  - active_loop: my_form
  steps:
  - action: my_form
  - active_loop: null
  # eventually we could call custom actions here
  - action: utter_form_faq

The idea is that this pattern would allow me to handle a broad range of questions that require similar information without having to declare essentially the same rule or story over and over.

The example above reaches the final utter_form_faq action, but fails to match the response with the intent that was presented initially.

The pattern described in the docs puts the response immediately after the intent. How does this work? How does the response selector select the response when utter_form_faq is called? I thought it sort of saved the last intent in every separate retrieval intent space and just used that to map an appropriate response.

Is what I want to do currently possible? It’d be so cool if I could: I could even call the simpler retrieval intent rule pattern inside the above pattern, allowing simple context switching while answering the form, without having to write separate rules for every context switch. This would be easier to scale.

Hey @FearsomeTuna,

The response selector uses the latest message to select the matching response based on the last intent returned. See the code here

So, it’s looking at the last message sent from your form rather than the form_faq question. You should be seeing a message like Couldn't create message for response action when talking to your bot with debug mode on.
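
To make that concrete, here is a toy sketch of why the selection fails after the form. The dict shapes below are my assumptions, loosely modeled on Rasa's parse data, not the real internals: the point is only that the selector reads the prediction attached to the *latest* message, and the user's last slot answer carries no prediction for the form_faq intent space.

```python
# Toy model of how a retrieval response is picked from the *latest* message.
# The message shapes are assumptions for illustration, not Rasa's actual API.
def selected_response(latest_message, retrieval_intent):
    selector = latest_message.get("response_selector", {})
    prediction = selector.get(retrieval_intent, {}).get("response", {})
    # None here corresponds to "Couldn't create message for response action"
    return prediction.get("responses")

# The original form_faq question carries a prediction for its intent space...
faq_question = {
    "response_selector": {
        "form_faq": {"response": {"responses": [{"text": "the answer"}]}}
    }
}
# ...but the user's last slot answer does not.
slot_answer = {"response_selector": {}}

print(selected_response(faq_question, "form_faq"))  # [{'text': 'the answer'}]
print(selected_response(slot_answer, "form_faq"))   # None
```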

Also - you shouldn’t have to write separate rules for every context switch, so there may be something else going on there if you want to dig into it? I do know of one bug right now where context switching doesn’t work when requested_slot is the same between the forms switching (PR under way).

I think you could achieve what you want if you mimic the response selector behavior in a custom action? Store the intent name in a slot at the initial step and use it to loop through response templates in the custom action. Let me know if that doesn’t make sense!
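
A rough sketch of that suggestion, with the slot name, the intent names, and the "utter_<intent>" naming convention all made up for illustration (in a real bot this logic would sit inside a rasa_sdk custom action's run() method):

```python
# Sketch: save the full retrieval intent name at the initial step, then use
# it later to find the matching template among the domain's responses.
def intent_to_save(latest_message):
    """Step 1: the value to store in a slot like last_retrieval_intent."""
    return latest_message["intent"]["name"]

def find_template(response_names, saved_intent):
    """Step 2: loop through response template names and pick the match.
    Retrieval intents use '/' in their full name, e.g. 'faq/ask_price';
    mapping '/' to '_' here is an assumed convention, not Rasa's."""
    wanted = "utter_" + saved_intent.replace("/", "_")
    for name in response_names:
        if name == wanted:
            return name
    return None

saved = intent_to_save({"intent": {"name": "faq/ask_price"}})
print(find_template(["utter_faq_ask_price", "utter_faq_ask_hours"], saved))
# utter_faq_ask_price
```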

@desmarchris thank you for your reply. I had ended up doing something similar to what you suggested. I saved the initial user message and then injected it artificially using the UserUttered event. Something like:

- rule: form faq start
  steps:
  - intent: form_faq
  - action: action_save_user_message
  - action: my_form
  - active_loop: my_form

- rule: form faq finish
  condition:
  - active_loop: my_form
  steps:
  - action: my_form
  - active_loop: null
  - action: action_inject_user_message
  - action: utter_form_faq
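
For reference, a minimal sketch of the two custom actions above. The slot name and the event dict shapes are my assumptions; in a real bot these would be rasa_sdk Action classes returning SlotSet and UserUttered events from rasa_sdk.events.

```python
# Sketch of the save/inject pair. Event dicts mimic Rasa's on-the-wire
# event format; "saved_user_message" is an assumed slot name.
def save_user_message(latest_message):
    """action_save_user_message: stash the parsed user message in a slot."""
    return [{"event": "slot", "name": "saved_user_message",
             "value": latest_message}]

def inject_user_message(saved):
    """action_inject_user_message: re-emit the saved message as a user event,
    so the response selector sees it as the latest message again."""
    return [{"event": "user", "text": saved.get("text"),
             "parse_data": saved}]

# Toy round trip:
msg = {"text": "what are your opening hours?",
       "intent": {"name": "form_faq"}}
events = save_user_message(msg)
injected = inject_user_message(events[0]["value"])
print(injected[0]["text"])  # what are your opening hours?
```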

With this, I can use retrieval intents as “normal” and let the response selector do its job. I can also call other simpler faqs during the form (for example, to explain some concept used in the questions). The downside is that I’m getting the history “dirty”, which might make it more difficult for my bot to generalize. And if I understand your suggestion correctly, looping through response templates shouldn’t be that hard, and it keeps the history as it was.

I actually did something somewhat similar to try to achieve a more general pattern (and I have a question about it too). Instead of injecting the message again, if we use a naming convention for the templates of a specific retrieval intent, we can easily map the retrieval intent’s “sub-intent” to the desired response without looping, and even have multiple response templates for each intent based on the data that was set in the form, which allows for richer and more specific interactions. Say we have a categorical slot; we could use a custom action with code like this:

retrieval_intent = tracker.get_slot('last_retrieval_intent')
sub_intent = tracker.get_slot('sub_intent')
categorical_slot = tracker.get_slot('categorical_slot')

# select the response template purely by naming convention
dispatcher.utter_message(template="utter_" + retrieval_intent + "_" + sub_intent + "_" + categorical_slot)
return []

The question I have regarding this approach (and your suggestion too) is that neither seems to use the response selector at all. The last approach doesn’t even need to declare responses in the retrieval intent space, and I’m wondering if maybe I should reduce the epochs for the response selector to 1 or something like that, since it doesn’t have to learn anything. Maybe even eliminate it from the pipeline. We just want the grouping ability of retrieval intents.

So I tested this, and it seems ResponseSelector is still necessary for intent classification to work appropriately with retrieval intents. For starters, there’s no corresponding key in the message for the retrieval intent, so I wouldn’t know how to retrieve the specific sub-intent, but intents belonging to that retrieval intent also don’t seem to be classified correctly without it. Maybe other classifiers ignore retrieval intents altogether. So I just kept the corresponding ResponseSelector with the default 100 epochs.


Hmm so is it working as you’d like now? Also just curious, how many forms are you lumping into this pattern? I’d like to see how much work this is saving for you.

@desmarchris yes, the snippet I showed is working fine for now. It’s more of a proof of concept, so I don’t have that many use cases yet. I have one form, about five rules, and a couple of custom actions for setting the retrieval intent and sub-intent, as well as one custom action per retrieval intent space that chooses a template by naming convention. Within these last actions, I can also compose the final answer from smaller, reusable responses, calling different templates according to data.
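
As an illustration of that composition idea (every template name and slot value below is invented for the example), a helper could build the ordered list of reusable templates to utter for one answer:

```python
# Sketch: compose one answer from several smaller response templates,
# choosing parts based on form data. All names here are made up.
def compose_templates(retrieval_intent, slots):
    parts = ["intro"]
    if slots.get("account_type"):          # add a data-dependent section
        parts.append(slots["account_type"])
    parts.append("outro")
    # each name would be passed to dispatcher.utter_message in turn
    return ["utter_{}_{}".format(retrieval_intent, p) for p in parts]

print(compose_templates("account_faq", {"account_type": "premium"}))
# ['utter_account_faq_intro', 'utter_account_faq_premium', 'utter_account_faq_outro']
```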

Even with regular forms and the simple faq pattern that allows basic context switching, I initially still had to write separate stories or rules for each question/template combination (more so if multiple templates were needed for a question). And in those stories, usually the first and last lines were the only things that changed.

Right now it handles some 4 questions needing specific info, plus about 5 more simple question/answer style questions. The thing is, each form can retrieve more data from APIs from a single id or similar piece of information given by the user. So with this approach, each pattern ‘unit’ can abstract a set of possible questions that depend on similar variables and need to be answered with specific responses for each combination of those variables. Since the question pattern stays the same, I could focus on just adding the relevant content (intents and domain responses for each case) while keeping the same logic infrastructure in place (and actually using zero stories).

Still, while it seems very reusable and easy to scale, it must be said that this approach has no chance of generalizing (we would basically be using Rasa for intent classification only), it’s not terribly natural or flexible with conversation flow, and for now it’s not apparent to me how to extend this to richer non-form multi-turn interactions, since rules operate on one turn. We could try chaining multiple rules, but I have been trying stories and memoization policies instead (the AugmentedMemoization version; actually, rules seem like shorter, higher-priority memoization, though I might be wrong on that). These have the potential to generalize and allow richer flow, though abstraction and reuse seem trickier.


Cool, well post back if you have any additional findings! And regarding your last point, that makes sense to me. I’d hope this approach is being used for only a portion of your bot (the similar forms), so that you’re still using stories and rules as they’re meant to be used for the rest of the bot.