Alternatives to RocketChat trigger words

I don’t know if this is the correct space for this topic; please correct me if it isn’t.

Issue

We’ve been developing our bot for close to a year now and are pretty close to releasing it to production. The issue is the following.

Ideally, we would have Rasa in a channel where user-user conversation coexists with user-bot conversation. At first we relied on trigger words at the beginning of the message, like so: _Hey

Our issue is that when the intents were written and the sample data was generated, we based them on natural language such as Hey. Now we have problems with certain intents: for example, give me a list of the failed awx jobs is recognized as the correct intent, but _give me a list of the failed awx jobs goes into action_default_fallback with a confidence score of less than 0.2.

The simplest solutions are either to have dedicated channels where every interaction is user-bot (something we really want to avoid because of how many new channels we would have to join), or to add the keyword at the beginning of every training example while keeping the existing examples, which would roughly double the size of our training data.

Neither of these solutions seems ideal to us.

Has anyone faced a similar issue before? Is there a way to tell Rasa to ignore certain characters? Or to send only what comes after the first character?
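
To illustrate what we mean by “send only what comes after the first character”, here is a rough sketch of a custom preprocessing component that strips the trigger prefix before the classifier sees the message. This assumes the Rasa 1.x custom-component API (exact class and attribute names differ between versions), and the class name, the “prefix” option and the underscore default are our own choices, not anything built into Rasa:

```python
# Rough sketch only: a custom NLU component (Rasa 1.x-style API) that removes
# a leading trigger character such as "_" before intent classification.
# "TriggerPrefixStripper" and the "prefix" option are our own names, not part
# of Rasa itself; attribute names may differ in newer Rasa versions.
from rasa.nlu.components import Component


class TriggerPrefixStripper(Component):
    """Strips a configured trigger prefix (e.g. "_") from incoming messages."""

    defaults = {"prefix": "_"}

    def process(self, message, **kwargs):
        prefix = self.component_config.get("prefix", "_")
        if message.text and message.text.startswith(prefix):
            # Forward only what comes after the trigger character, so the
            # classifier sees the same natural language as the training data.
            message.text = message.text[len(prefix):].lstrip()
```

Such a component would sit at the top of the NLU pipeline in config.yml, referenced by its module path, so the rest of the pipeline only ever sees the natural-language part of the message.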

As a side note, it is interesting that even though there is no _Hey or _No in the training data, some intents are still recognized while others, like _list failed jobs, are not.

Update

Further investigation led to something interesting. The NLU inbox indicates that the intents are actually being recognized as they should, so the action_default_fallback isn’t caused by what we previously thought (the symbol interfering with intent recognition).

To give a little more insight, here are excerpts from both the conversation through RocketChat and the interactive learning tool:

RocketChat

  - intent: greet
  - action: action_hello_world
  - intent: failed_jobs
  - action: action_default_fallback

interactive tool

  - intent: greet
  - action: action_hello_world
  - intent: failed_jobs
  - action: action_listFailedJobsAWX

Even though the recognized intents are the same in both scenarios, the first one answers differently. Why could this be? Both stories were taken from the same model.

Hi @jsolis,

> _give me a list of the failed awx jobs goes into action_default_fallback with a confidence score of less than 0.2.

action_default_fallback is triggered when the confidence of the policy’s action prediction is too low. So it would be interesting to see which intent the message is being classified as.
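
Roughly speaking, and assuming you are using the standard FallbackPolicy, the decision looks like this (a simplified sketch, not the actual Rasa source; the 0.3 thresholds are just the usual defaults):

```python
# Simplified illustration of the fallback decision (not the actual Rasa code):
# with the standard FallbackPolicy, the fallback action is predicted when
# either the NLU confidence or the confidence of the next-action prediction
# drops below its configured threshold.

def should_trigger_fallback(nlu_confidence: float,
                            action_confidence: float,
                            nlu_threshold: float = 0.3,
                            core_threshold: float = 0.3) -> bool:
    """Return True if action_default_fallback would be predicted."""
    return nlu_confidence < nlu_threshold or action_confidence < core_threshold


# A confidence of 0.2, as in the example above, falls below a 0.3 threshold,
# even if the intent itself was classified with high confidence:
assert should_trigger_fallback(nlu_confidence=0.9, action_confidence=0.2)
```

So even a correctly classified intent can end up in action_default_fallback if the confidence of the next-action prediction is low.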

> Even though the recognized intents are the same in both scenarios, the first one answers differently. Why could this be? Both stories were taken from the same model.

Hmm, this is interesting. I think it would help to look at the tracker state for each conversation and see what is different. See: Rasa & Rasa Pro Documentation
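
If you have the HTTP API enabled (e.g. running the server with --enable-api), you could pull both trackers and diff them. A quick sketch, where the server URL, token and conversation ids are placeholders:

```python
# Quick sketch for comparing tracker states via Rasa's HTTP API.
# The URL, token and conversation ids are placeholders; the token query
# parameter is only needed if you use token auth.
import json
import requests

RASA_URL = "http://localhost:5005"   # placeholder server address
TOKEN = "changeme"                   # placeholder auth token


def get_tracker(conversation_id: str) -> dict:
    """Fetch the full tracker state for one conversation."""
    response = requests.get(
        f"{RASA_URL}/conversations/{conversation_id}/tracker",
        params={"token": TOKEN},
    )
    response.raise_for_status()
    return response.json()


# Dump both trackers to files and compare them with your favourite diff tool.
for conv_id in ("rocketchat-user-id", "interactive-session-id"):  # placeholders
    with open(f"tracker-{conv_id}.json", "w") as f:
        json.dump(get_tracker(conv_id), f, indent=2, sort_keys=True)
```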

With regard to the problem you are trying to solve, do you need a way of always classifying certain messages as a specific intent so you can hand over to a human? Since you are suggesting something like an underscore, I guess it’s a conscious decision? If so, maybe something like the keyword classifier (Components) would help, as you could search for a specific word before classifying.
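
Just to illustrate the idea, here is a toy sketch of keyword-based pre-classification (this is not the actual KeywordIntentClassifier, and the keyword-to-intent pairs are made up):

```python
# Toy illustration of keyword-based pre-classification, not the actual
# KeywordIntentClassifier. The keywords and intent names below are made up.
from typing import Callable, Dict, Optional

KEYWORD_INTENTS = {
    "_handover": "handover_to_human",   # hypothetical keyword -> intent
    "_failed jobs": "failed_jobs",      # hypothetical keyword -> intent
}


def classify(text: str,
             statistical_classifier: Callable[[str], str],
             keyword_intents: Optional[Dict[str, str]] = None) -> str:
    """Return a keyword-matched intent if one applies, else defer to the model."""
    for keyword, intent in (keyword_intents or KEYWORD_INTENTS).items():
        if keyword in text.lower():
            return intent
    return statistical_classifier(text)
```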