Using Entities for short phrases

So I have been using entities for key values, as per standard use; however, I have also been using entities as short phrases to construct multi-variant intent examples (as shown below). Can anyone tell me if this poses any potential performance risks? So far I have not had many issues with it, but I am a little wary of how it will perform down the road.

At the moment, the only issues occur when I talk to the bot using an utterance that is very short and to the point.

Example:

  • Bot interaction: who is the pe for project 12345?
  • NLU Example: [who is](who_is_was) [the](our_the) [project engineer](requested_info)
    [for](of_on_for) [12345](project_number)

Synonyms

  • who_is_was: who is, whos, who’s, who was
  • our_the: our, the
  • requested_info (categorical slot): project engineer, pe, engineer
  • project_number (regex): [0-9]{5}
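Roughly, this is how that data is written out in the YAML NLU format (the intent name below is just a placeholder):

nlu:
- intent: request_project_info          # placeholder name for the intent
  examples: |
    - [who is](who_is_was) [the](our_the) [project engineer](requested_info) [for](of_on_for) [12345](project_number)

- synonym: project engineer
  examples: |
    - pe
    - engineer

- regex: project_number
  examples: |
    - [0-9]{5}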

I cannot use the synonym “pe” in the bot interaction; I have to type out “project engineer” instead. I cannot think of any other issues that occur, but I am sure there are a few others that are similar to this.

I cannot use the synonym “pe” for the bot interaction.

Why not?

Could you explain what you’re trying to achieve with viewing these phrases as entities? It looks like you’d like to abstract the phrases to their meaning, but I don’t think this is necessary – the intent classifier should learn these abstractions for you.
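For example, a handful of plain, varied examples like these (the phrasings are only illustrative) is usually enough for the classifier to generalize:

nlu:
- intent: request_project_info          # placeholder intent name
  examples: |
    - who is the project engineer for [12345](project_number)
    - whos the pe on [12345](project_number)
    - who was the engineer of project [12345](project_number)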


My goal with the synonym phrases is for them to be swappable with other combinations of words/phrases, so that whether a user says “whos the project engineer for 12345” or “who is the project engineer on 12345”, the bot will know what the intent is. I was originally typing out every variation of that sentence manually; I had to write each and every variation because, if I didn’t, the bot would not understand what I was saying. With the synonym phrases, I can cover the 14 billion different ways a user can construct a sentence without having to write out all 14 billion variations. If there is a better way of doing this, I would love to know. Before I started doing this, I was having so many issues with the bot not recognizing my intent. The smallest change in an utterance would leave the bot with no idea what the user was asking. For instance, if an intent example “who is the project manager (on/for/of) 12345” did not have a variation including “on” and the user said “on” instead of “for”/“of”, then the bot would not recognize the intent, even if there were tens or hundreds of very similar examples that included “on”.

Hopefully this makes sense. By creating these synonym phrases I eliminated the need to write out a hundred billion different intent examples for every intent. The other advantage I believe I get out of this is that I can always go back later and add slots to the synonym phrases, so I will know exactly what and how users interact with the bot and can create more detailed custom actions should I want to.

I just tried interacting with the bot after removing the entities, and it still works. I guess I misunderstood the training videos. I thought that, in order to use synonyms, you had to use entities. Is that not the case?

What does your config look like? I’m surprised you had to type these sentences out… that suggests the intent classifier isn’t learning much :slight_smile:

Sorry, what do you mean by removing the entities? You do need entities for synonyms, so I’m not sure what’s happening. Did you retrain after removing the entities?

Here is my config file:


I removed all the entities for synonym phrases from the NLU file and then retrained the bot. So, instead of the intent examples being built out like this:

  • [who is](who_is_was) [the](our_the) [project engineer](requested_info) [for](of_on_for) [12345](project_number)

They are now built out like this:

  • [who_is_was] [our_the] [project engineer](requested_info) (of_on_for) [12345](project_number)

*EDIT*
So I have finished restructuring the NLU, and now my only issue is that some of the synonyms are not being recognized. As before, “pe” does not get recognized as a synonym, so the slot does not get filled for (requested_info). This is not the only occurrence of unrecognized synonyms. @fkoerner Do you have any idea why this is happening?

Okay, I’d suggest you increase your epochs for DIET and see if that helps with intent classification. You could try something more like 300 epochs. You can also split your NLU data into train/test sets to check whether this helps.
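For reference, the relevant part of config.yml would look something like this (I can’t see your actual pipeline, so the surrounding components here are assumptions):

pipeline:
  # ... your existing tokenizer and featurizers ...
  - name: RegexEntityExtractor    # assumption: only needed if you extract project_number via the regex
  - name: DIETClassifier
    epochs: 300
  - name: EntitySynonymMapper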

“pe” does not get recognized as a synonym

Is the problem that “pe” isn’t recognized as a synonym, or that it isn’t picked up as an entity in the first place? It might sound like I’m splitting hairs, but the former is a question of mapping (which would indicate something wrong about how you specified the synonym, or even a bug), whereas the latter is an issue of entity recognition (which would either mean something wrong with the data or the model, or both).

You could check this by disabling synonyms (just comment them out) and seeing whether “pe” is picked up by the DIETClassifier.

Awesome, I set the epochs to 300 and then tested it. The results were more accurate than before, but “pe” still did not work. I’m sorry, but I am not really sure whether the issue is that “pe” is not being recognized as a synonym or that it just is not getting picked up as an entity/slot value. I am not really sure how to determine whether the DIETClassifier picks up words/characters.

I would test this by commenting out the “pe” synonyms and running the bot in interactive mode. Try giving it a sentence that includes “pe”, preferably one that is similar in sentence structure to the NLU training data. That should tell you whether “pe” is picked up as an entity; it should output something like:

Is the intent 'request' correct for 'Who is the [pe](requested_info) for project [12345](project_number)?' and are all entities labeled correctly? (Y/n)

Another thing to check – do you have NLU examples that include “pe” as a labeled entity?


@fkoerner Wow. So apparently all I had to do was directly add “pe” into a few examples for each of the intents. That now makes me question why all the other synonyms that are not directly used in an example are working. Maybe it has to do with the character length? Who knows… Well, at least I now know that if a synonym isn’t working, all I have to do is add it directly into the NLU.
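In case it helps anyone later, the fix looked roughly like this (the sentences themselves are just illustrative):

nlu:
- intent: request_project_info          # placeholder intent name
  examples: |
    - who is the [pe](requested_info) for [12345](project_number)
    - whos the [pe](requested_info) on project [12345](project_number)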

Also, @fkoerner you have helped me resolve most of the issues I started threads for. I just want to thank you for all the help you have given. I am extremely grateful to you. Have a wonderful day!


@john.mcquaid glad to hear it’s working now! You’re right, it’s hard to say why the others are working. It could be that the model learns to extract them based on the sentence structure, or, like you said, the character length may affect the featurization. But when in doubt, adding more examples generally helps.

And you are very welcome! I’m here to help and we’re just as grateful to have users like you. I hope you have a wonderful day as well, and good luck with your bot :rocket:


Hello @fkoerner, I have one last issue I have been trying to resolve. I was hoping you could help me with it, since I haven’t had any success with the individual trying to assist me. I don’t want to sound mean or rude, but I don’t think he/she understands what I am talking about. I am not the best communicator, so it would make sense that he/she is having difficulties understanding me.

Here is the thread I have open for it: Why is my FollowupAction not working?

Sure, I can give it a shot!

The behaviour you’re seeing does seem weird, but I’m hoping some more information will clear things up.

  • Is the dispatcher uttering the correct message, then the undesired action?
  • What type of action is the undesired action (a custom action, or one of the default ones)?
  • Do the custom action and the undesired action appear in your training data (stories or rules) together?
  • Does the other part of the flow (if {condition A} do {response A}) work correctly?

Also, it might be useful to see the output of rasa interactive for the broken flow, so we can see where the prediction of the undesired action is coming from.

Ah, good questions; I should have thought to specify that. There are no rules created for this. The custom action follows the correct conditional paths, and the dispatcher utters the correct message. The issue I am having is that the story set up for this goes as follows:

- intent: search_for_employee_name
  entities:
  - employee_name: Emilio Chapa
- action: search_for_employee_name
- action: utter_anything_else

utter_anything_else is a standard utterance action, not a custom one. What I need to be able to do is have the custom action search_for_employee_name force the story to change the next action (utter_anything_else) to the default Rasa action action_listen whenever the buttons are created.

Okay, so I think if I am understanding correctly, you want action_listen when you need more information from the user which you plan to gather from them with buttons, and utter_anything_else when you don’t need more information from the user.

Something seems a bit fishy, since the prediction of the followup action should override the prediction of any other action. So now it would be important to know whether action_listen is predicted, and if so, when. I could imagine that the order of events gets messed up somehow, or that the next action utter_anything_else is predicted immediately after, though this also shouldn’t happen. Could you please run rasa interactive for this flow, and share the output?

I think this can be fixed with another story that includes the other flow, and if not, we can still work around this with a slot (I’ll happily get into those details), but I would like to know why this isn’t working.
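To sketch what I mean by another story (select_employee is just a placeholder for whatever intent your button payloads trigger):

stories:
- story: employee search, answer found right away
  steps:
  - intent: search_for_employee_name
    entities:
    - employee_name: Emilio Chapa
  - action: search_for_employee_name
  - action: utter_anything_else

- story: employee search, disambiguation via buttons
  steps:
  - intent: search_for_employee_name
  - action: search_for_employee_name
  - intent: select_employee             # placeholder for the intent the button payloads trigger
  - action: search_for_employee_name
  - action: utter_anything_else

Because a story implicitly listens before each user turn, the second story already waits after the buttons are shown, without needing a forced action_listen.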

rasa shell --debug should also work.

Sorry for the late response; I was out on a business trip yesterday. I just finished running interactive mode. For whatever reason, the bot predicted/allowed me to respond using the buttons this time. Whether that is due to the direct call return [FollowupAction(name="action_listen")] or to the bot simply knowing to stop pre-defined story lines when buttons are initialized, I am not sure. I am also not sure why it suddenly started working, but that’s cool. Anyway, I tried running the same thing in Rasa X and it, too, worked as intended. So now I am completely perplexed; the only thing I have done differently from the countless times I have tried getting this to work is that I simply ran it in interactive mode.
(It may also be worth noting that I did NOT add the “interaction” into the story after interactive mode was complete, but I figured that is self-explanatory given the prior note that nothing else was different.)

Again, thank you so much for all the help you have provided!

Phew, well I’m very glad to hear it’s working now! Sometimes these things happen.

Running in interactive mode by default also retrains the model, so perhaps this has something to do with it. But in any case, the important thing is that it’s working now. You’re welcome, and happy bot-building!
