What do we get when we add entities to a story?

Hi,

So, I can’t seem to wrap my head around the idea of putting entities inside of a story.

Firstly,

“A story is a representation of a conversation between a user and an AI assistant, converted into a specific format where user inputs are expressed as intents (and entities when necessary)…”

When are entities necessary?

Secondly,

“While writing stories, you do not have to deal with the specific contents of the messages that the users send. Instead, you can take advantage of the output from the NLU pipeline, which lets you use just the combination of an intent and entities to refer to all the possible messages the users can send to mean the same thing.”

OK, so I don't have to deal with the specifics while writing a story, but I should annotate the examples in the NLU file with entities.

But then, right after that paragraph:

“It is important to include the entities here as well because the policies learn to predict the next action based on a combination of both the intent and entities”

So now it is important to do what was earlier said to be unnecessary?

I’m really confused after reading that. Can somebody please demystify this?

When should one add entities to the story?


Great question!

First, a clarification: “specific contents of the messages” here means that you don’t need to use the raw unstructured text; instead, you can use the entities & intents that have been detected/extracted by the NLU pipeline. The entity is not raw “unstructured” text; it’s text that’s been extracted & tagged and converted into a structured data format.
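For example, in the NLU training data you mark entities up directly in the example messages (the intent and entity names below are just for illustration):

```yaml
nlu:
- intent: book_trip
  examples: |
    - I want to go from [New York](origin) to [London](destination)
    - book me a trip from [London](origin) to [Paris](destination)
```

At runtime the pipeline turns a raw message like “I want to go from New York to London” into the structured combination of the intent `book_trip` plus the entities `origin` and `destination`, and that structured output is what your stories refer to.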

Now to answer your broader question: why would you want to include entities in stories?

Stories are training data to help your assistant learn what to say next. Entities can help you decide what to say next. For example, if I’m booking a trip between New York and London, it would not be a good idea to offer a rail option. However, if I’m booking a trip between London & Paris, then offering to book a rail trip would be a good idea. In both cases the intent is the same (booking a trip), but the entities affect what a good next turn would be. Conversely, if you’re asking to book a trip, it’s more likely that you’ll be naming places you want to go to than, say, shampoo brands (both of which would be entities).
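To make that concrete, here is a rough sketch of what the stories could look like (the action names are made up); the same intent leads to different next actions depending on whether the cities were provided:

```yaml
stories:
- story: user names both cities up front
  steps:
  - intent: book_trip
    entities:
    - origin: "London"
    - destination: "Paris"
  - action: action_suggest_transport

- story: user has not named any city yet
  steps:
  - intent: book_trip
  - action: utter_ask_origin
```

Note that by default the policies pay attention to which entities were detected (and to any slots you map them to) rather than to their raw text values, so if you want different behaviour for specific city pairs you’d usually read the values from slots or inside a custom action.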

If you’re using the TED policy in your pipeline, it will actually learn to detect entities & predict the next action at the same time using multi-task learning, which takes advantage of the fact that there’s information shared between entities & the next turn.
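Enabling TED is just a matter of listing TEDPolicy in your config.yml; a minimal example (the hyperparameter values here are only illustrative):

```yaml
policies:
- name: TEDPolicy
  max_history: 5
  epochs: 100
- name: RulePolicy  # handles rule-based behaviour like FAQs and forms
```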

Does that help clear it up a bit?


Yes! Thanks for the clarification and examples :slightly_smiling_face:

But wouldn’t we need to check all the possible ways to travel between the provided cities in both cases (with and without entities) using custom actions? With that in mind, wouldn’t we always need to double-check these things in any example where the intents are the same?

Since I’m new to the framework, a code example would completely solve my issue :grin:
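Something along these lines is roughly what I’m picturing (a rough sketch only, reusing the made-up `action_suggest_transport` name from the story above and hard-coding the route data):

```python
# actions/actions.py
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

# Hard-coded lookup of transport modes per route; a real assistant would
# query an API or database instead.
ROUTES = {
    ("london", "paris"): ["rail", "flight"],
    ("new york", "london"): ["flight"],
}


class ActionSuggestTransport(Action):
    """Suggest transport options based on the extracted origin/destination entities."""

    def name(self) -> Text:
        return "action_suggest_transport"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Read the most recently extracted entity values (or use slots if mapped).
        origin = next(tracker.get_latest_entity_values("origin"), None)
        destination = next(tracker.get_latest_entity_values("destination"), None)

        if not origin or not destination:
            dispatcher.utter_message(text="Where are you travelling from and to?")
            return []

        options = ROUTES.get((origin.lower(), destination.lower()), ["flight"])
        dispatcher.utter_message(
            text=f"For {origin} to {destination} I can offer: {', '.join(options)}."
        )
        return []
```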