Nested Intent Classification


I am looking to build a nested intent classification system. Here is what I mean by that:

For example, I have an intent classifier with 3 intents (emotion, information, function). If my classifier classifies the input sentence “I am feeling sad today”, it will pick the emotion intent. I then want to use this sentence to do another level of intent classification over, say, 5 emotions (sad, happy, angry, etc.). Does Rasa have some inbuilt functionality for this, or has anybody done it before?

Just looking for some direction; I would appreciate any help. Thank you!


This question seems interesting. Couldn’t you directly use these 5 emotions as intents? Each emotion would be a different intent.

Also, what is your use case? Maybe if we know that we can provide more help.

@srikar_1996 Yes, those 5 emotions would in turn be intents; think of it as a tree that might keep splitting into deeper intent classifiers. The naive way of doing it would be to have the 5 emotions as top-level intents in the first place, but imagine having 1000 emotions, or multiple other categories besides emotions that I might want to handle. That would certainly hurt classification accuracy, and we would need very carefully chosen training examples in that case.

It’s just for a personal project I am working on: the user input goes through these layers of classifiers so that I can identify it and send an appropriate response. In case it is not an emotion but something else, I might want to direct it to another intent classifier to identify what kind of response I need to generate.
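As a rough sketch of the cascade I have in mind (scikit-learn, with toy data; all labels and example sentences here are made up for illustration):

```python
# Two-level ("nested") intent classification sketch: a top-level classifier
# routes the sentence, and only the matching branch runs a second classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real system would need far more examples per intent.
top_texts = ["I am feeling sad today", "I feel great", "what time is it",
             "tell me a good sushi bar", "turn on the lights", "play some music"]
top_labels = ["emotion", "emotion", "information", "information", "function", "function"]

emotion_texts = ["I am feeling sad today", "this makes me unhappy",
                 "I feel great", "so happy right now",
                 "this makes me furious", "I am really angry"]
emotion_labels = ["sad", "sad", "happy", "happy", "angry", "angry"]

top_clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(top_texts, top_labels)
emotion_clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(emotion_texts, emotion_labels)

def classify(sentence):
    """Route through the top-level classifier first; only descend into the
    emotion sub-classifier when the top level picks the emotion intent."""
    intent = top_clf.predict([sentence])[0]
    if intent == "emotion":
        return intent, emotion_clf.predict([sentence])[0]
    return intent, None

print(classify("I am feeling sad today"))
```

Each node in the tree only ever sees the inputs its parent routed to it, which is the point: the emotion classifier never has to distinguish “Tell me a good sushi bar” from “I am sad”.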

You can create broader categories like:

  1. Happy
  2. Sad
  3. Angry
  4. Other

…and add entity synonyms for these in your training file. For example, joyful, great, and good can all map to the happy emotion, and so on. But then you have to add a lot of synonyms for each emotion.
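For example, in the Rasa Markdown training-data format, a synonym mapping could look like this (the intent and entity names here are just illustrations):

```md
## intent:express_emotion
- I am [joyful](emotion:happy) today
- I feel [great](emotion:happy)
- I am so [sad](emotion) right now

## synonym:happy
- joyful
- great
- good
```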

@Akshit I understand that, but think of it from a scalability perspective: imagine I need to do 10 different things like handling emotions; then I would need to streamline the input at these various steps, i.e. use different classifiers to narrow it down. I want to look for an emotion in the sentence only if I can first identify that the user is trying to express an emotion, because if not, the input sentence might be something like “Tell me a good sushi bar”, which doesn’t need to involve emotions at all.

I hope you understand what I am trying to explain here.

Oh okay, it makes sense. Actually, this would even help my use case. Hope someone will be able to share some thoughts on this.

@akelad @tmbo @juste_petr What are your thoughts on this?

Hey @samarth12! Strangely, I had the same idea a few days back when thinking through a problem here. I’m planning on using three entities as a way to categorize a technical issue. So I ask the user a question and from the answer (here I’m using a catch-all intent, i.e. inform) I extract:

  1. its underlying necessity
  2. its feeling towards the problem
  3. the problem-object

The way I thought of mapping those is by using featurized slots, so that I would write a story like this:

```md
- utter_ask_problem
* inform{"feeling": "order", "necessity": "support", "problem-object": "hardware"}
- utter_hardware_orientation
- utter_ask_specify
* inform{"specific-object": "keyboard"}
- utter_inform_steps
* affirm
```

So for each slot combination I would have a different story, handling each case specifically. I would have to elaborate on these slot definitions so they rightfully influence action selection, but that’s the gist of it.
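The slot definitions could be sketched in the domain file roughly like this (categorical slots are featurized, i.e. their values influence action prediction; the extra values beyond those in the story above are made up):

```yaml
slots:
  feeling:
    type: categorical
    values:
      - order
      - frustration
  necessity:
    type: categorical
    values:
      - support
      - replacement
  problem-object:
    type: categorical
    values:
      - hardware
      - software
```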


@samarth12 I think this is a very interesting (and hard) problem. In general, I don’t think the intent classification model is scalable enough to handle exactly these cases (lots of different intents with only very slight differences between them).

Currently, we are looking more into getting rid of intents altogether instead of improving them, as there is an issue inherent in the way this is set up: the training data for intents is limited, but the differences between intents are small. This combination is a worst-case scenario for a classifier. Avoiding the information reduction to intents will allow us to feed the sentence representation directly into the dialogue model (which will also allow for “contextual intents”). There are some issues with this approach as well, but it seems the better choice than going down the intent rabbit hole.
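As a very rough illustration of what skipping the intent layer means (using TF-IDF as a stand-in for a learned sentence representation, with invented example sentences and action names), one could map sentence vectors straight to responses:

```python
# Sketch: map a sentence representation directly to a response/action,
# with no intermediate intent label. TF-IDF stands in for a learned
# sentence encoder; texts and action names are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

texts = ["I am feeling sad today",
         "tell me a good sushi bar",
         "turn on the lights"]
actions = ["utter_console", "utter_recommend_restaurant", "action_lights_on"]

vectorizer = TfidfVectorizer().fit(texts)
index = NearestNeighbors(n_neighbors=1).fit(vectorizer.transform(texts))

def choose_action(sentence):
    """Pick the action whose training sentence is closest in embedding space."""
    _, idx = index.kneighbors(vectorizer.transform([sentence]))
    return actions[idx[0][0]]

print(choose_action("I am feeling sad today"))
```

No intent label ever exists here; the dialogue side consumes the sentence representation itself, which is the direction described above.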

Would love to hear your thoughts on this!


@tmbo Yes, scalability was my concern while working on my project. There is no way a single intent classification system can handle anything beyond a very basic demo bot. That is what led me to think of a nested classification system, which would somewhat help in this scenario.

I understand your point, and that is very true. I am not sure what you mean by your approach: would those sentences still be tagged, or are you talking about an unsupervised/semi-supervised learning approach here? I agree that handling the context in the sentence representations is a very important aspect moving forward. I have worked with ELMo sentence representations; they do a better job at identifying context than traditional approaches, so they might be helpful if you haven’t already looked at them. I would love to get a better understanding of what you’re proposing instead of intents.

Also, what is the best way of moving forward with a nested intents system? Would you suggest using custom handmade models trained on specific datasets for this?