Using the CRFEntityExtractor with the DIETClassifier

Hello everyone,

I have a bot that uses lookup tables, which is why I need the CRFEntityExtractor to extract my entities. My intents are quite unbalanced, which is why I would like to use batch_strategy: sequence in the DIETClassifier to help with this.

Here is my Pipeline:

language: "fr"
pipeline:
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: "char_wb"
    min_ngram: 1
    max_ngram: 4
  - name: "CRFEntityExtractor"
  - name: DIETClassifier
    batch_strategy: sequence
    epochs: 50
  - name: EntitySynonymMapper

The CRFEntityExtractor detects the entities correctly; however, the DIETClassifier does not detect the correct intent (because it doesn’t know about the entities from the lookup table).

Is there a way to tell the DIETClassifier which entity extractor to use? If not, does anyone have an idea of how to handle unbalanced intents together with lookup tables?
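
For context, a lookup table is typically declared directly in the NLU training data so that the RegexFeaturizer in the pipeline above can pick it up. A minimal sketch, assuming the Rasa 2.x YAML training-data format (the city entity and its values are placeholders, not the real data from this bot):

    nlu:
    - lookup: city          # name of the lookup table; it should correspond to an entity used in the annotated training examples
      examples: |
        - Paris
        - Lyon
        - Marseille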

The CRFEntityExtractor detects the entities correctly; however, the DIETClassifier does not detect the correct intent (because it doesn’t know about the entities from the lookup table).

Just to confirm: are you worried that DIET gets the intents wrong, or the entities? The way DIET handles intents is independent of the entities extracted by the CRFEntityExtractor.

DIET will not extract the entities that are in my lookup table, because only the CRFEntityExtractor does that.

The DIETClassifier fails to recognize the correct intent if it cannot extract the entities associated with that intent itself, even if the CRFEntityExtractor found them.

It is a pity that the DIETClassifier cannot use the information from other entity extractors for its own classification.

How could I use lookup tables and the DIETClassifier together?

@koaning was my reformulation clear?

Hi @PaulB, I’m also curious about your question, so here is what I think (please correct me if I’m wrong). From my understanding, each component in the NLU pipeline hands its result to the next one, so the CRFEntityExtractor passes its results to the DIETClassifier. Shouldn’t you consider setting entity_recognition to false if you don’t want the DIETClassifier influencing the extracted entities?

Hello @pandaxar, thank you for your suggestion. I already tried the pipeline you suggest, but it’s not working: DIET seems to ignore the entities previously extracted by the CRFEntityExtractor.

Perhaps you could disable the entity recognition part of the DIETClassifier?

  - name: DIETClassifier
    batch_strategy: sequence
    entity_recognition: False
    epochs: 50

Here is the exact pipeline I tried based on your suggestions, @n2718281 and @pandaxar:

Using a DIETClassifier with entity recognition disabled and a CRFEntityExtractor earlier in the pipeline:

    language: "fr"
    pipeline:
      - name: WhitespaceTokenizer
      - name: RegexFeaturizer
      - name: LexicalSyntacticFeaturizer
      - name: CountVectorsFeaturizer
      - name: CountVectorsFeaturizer
        analyzer: "char_wb"
        min_ngram: 1
        max_ngram: 4
      - name: "CRFEntityExtractor"
      - name: DIETClassifier
        entity_recognition: False
        batch_strategy: sequence
        epochs: 50
      - name: EntitySynonymMapper
    policies:
    - name: AugmentedMemoizationPolicy
    - name: TEDPolicy
      max_history: 5
      epochs: 100
    - name: FormPolicy
    - name: MappingPolicy
    - name: FallbackPolicy
      nlu_threshold: 0.1
      ambiguity_threshold: 0.04
      core_threshold: 0.3

In any case, DIET is not using the entities extracted by the CRFEntityExtractor, so the information from the lookup table is not being used to help DIET with intent classification.
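
For reference, a minimal sketch of the mechanism as I understand it, assuming a Rasa version that ships the RegexEntityExtractor (2.x): the lookup table never reaches DIET as extracted entities; it only reaches DIET indirectly, as sparse pattern features produced by the RegexFeaturizer, and a RegexEntityExtractor can additionally extract lookup-table entries deterministically. Please correct me if this is wrong:

    language: "fr"
    pipeline:
      - name: WhitespaceTokenizer
      # RegexFeaturizer turns lookup-table matches into sparse "pattern" features
      # that both the CRFEntityExtractor and the DIETClassifier can use
      - name: RegexFeaturizer
      - name: LexicalSyntacticFeaturizer
      - name: CountVectorsFeaturizer
      - name: CountVectorsFeaturizer
        analyzer: "char_wb"
        min_ngram: 1
        max_ngram: 4
      # Optional: extracts lookup-table entries as entities without any training
      - name: RegexEntityExtractor
        use_lookup_tables: True
        use_regexes: True
      - name: "CRFEntityExtractor"
      - name: DIETClassifier
        entity_recognition: False
        batch_strategy: sequence
        epochs: 50
      - name: EntitySynonymMapper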

Hello PaulB, I’m keen to help, as I’ll learn how things work too. Could you tell us what the F1 score for entity extraction of your NLU pipeline looks like? (Remove the core policies for now.)

Hello @pandaxar,

Here are my results for entity extraction with the CRFEntityExtractor:

  "micro avg": {
    "precision": 0.9542097488921714,
    "recall": 0.8765264586160109,
    "f1-score": 0.9137199434229137,
    "support": 737
  },
  "macro avg": {
    "precision": 0.938600900853585,
    "recall": 0.8516666782981712,
    "f1-score": 0.8925078225991568,
    "support": 737
  },
  "weighted avg": {
    "precision": 0.9543690335363008,
    "recall": 0.8765264586160109,
    "f1-score": 0.9131447598564554,
    "support": 737
  } 

The report for the DIET entity extractor is all zeros because it is disabled.

Hm, the scores are pretty high, so how come the entities don’t get extracted? Can you enable entity extraction in DIET and share its scores with us again? Does the pipeline fail to extract the lookup-table entities without the CRFEntityExtractor? If so, does swapping the position of the two components solve the issue? (Edit 1: also, did you include the lookup tables in the training data? It would be hilarious if you hadn’t.) (Edit 2: hm, it seems the CRFEntityExtractor and the DIETClassifier aren’t used together in any of the Rasa repositories.)

There was a post on the forum, probably last week, where someone used the CRFEntityExtractor for entity extraction and the DIETClassifier for intent classification, but I couldn’t find it.

I found it:

@n2718281 so what seems to be the problem for @PaulB?

Hi there! I haven’t found any answer on how to make an entity extractor extract only specific entities. I’ve spent some time looking for this and haven’t found any documentation about it, nor examples that could lead to a solution for version 3.0.

I particularly need to use the CRFEntityExtractor for entity1 (from lookup tables), Duckling for entity2, and the rest of the entities are fine with DIET. I have seen official Rasa 3.x videos mentioning that extractors can be assigned to specific entities, but the implementation is nowhere to be found.
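
I’m not aware of a per-entity switch either; the closest configuration I know of is that the DucklingEntityExtractor can be limited to specific dimensions, while the CRFEntityExtractor and DIET only learn the entities that are actually annotated in your training data. A sketch under those assumptions (the Duckling URL and the time dimension standing in for entity2 are placeholders):

    pipeline:
      - name: WhitespaceTokenizer
      - name: RegexFeaturizer              # picks up the lookup table for entity1
      - name: LexicalSyntacticFeaturizer
      - name: CountVectorsFeaturizer
      - name: DucklingEntityExtractor
        url: "http://localhost:8000"       # assumes a locally running Duckling server
        dimensions: ["time"]               # restrict Duckling to the dimension backing entity2
      - name: CRFEntityExtractor           # learns only the entities annotated in the training data (e.g. entity1)
      - name: DIETClassifier
        epochs: 100                        # placeholder value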

In Rasa, when using both the CRFEntityExtractor and DIETClassifier in your pipeline, the DIETClassifier typically relies on the entity information provided by the CRFEntityExtractor to understand the context of the conversation. However, you’ve mentioned that the DIETClassifier is not detecting the correct intent because it doesn’t know the entities from the lookup table.

As of my last knowledge update in September 2021, specifying which entity extractor to use with a particular classifier within the pipeline was not a built-in feature in Rasa. Instead, Rasa usually relies on the order of components in the pipeline.

Here are some suggestions to handle unbalanced intents and lookup tables:

  1. Data Augmentation: If you have limited training data for unbalanced intents, consider augmenting your training data. You can create more examples for underrepresented intents to help the DIETClassifier learn them better.
  2. Balancing Data: Try to balance your training data by adding more examples for the underrepresented intents. This can help the classifier perform better.
  3. Threshold Adjustment: Adjust the confidence threshold for intent recognition. You can set a lower threshold so that the DIETClassifier is more inclusive of intents (see the sketch after this list). However, be cautious with this approach, as it may increase the likelihood of incorrect intent classifications.
  4. Custom Actions: If the lookup table is critical for your bot’s functionality, consider using custom actions to handle entities from the lookup table separately. You can write a custom action that processes the lookup table entities and sets relevant slots or context.
  5. Slot Filling: Use slot filling to capture important information from user messages and make it available to the DIETClassifier. This information can be used to help the classifier determine the intent.
  6. Redefine Training Data: Carefully review and redefine your training data to ensure that it captures the conversational patterns and entity use cases specific to your bot.
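
A minimal sketch of the threshold idea from point 3, using the FallbackClassifier available in recent Rasa versions (the threshold values are placeholders to tune, not recommendations):

    pipeline:
      # ... tokenizer, featurizers and DIETClassifier as above ...
      - name: FallbackClassifier
        threshold: 0.3             # minimum intent confidence before nlu_fallback is predicted
        ambiguity_threshold: 0.1   # minimum gap required between the two highest-ranked intents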

Please don’t pollute threads with ChatGPT-generated responses…
