LLM mapping as fallback for slot mappings

I'm using Rasa CALM 3.15.6.
I wanted to make my bot's response times faster. The bot asks several yes/no questions, and I wanted their slots to be filled via the from_intent mapping instead of from_llm. I have already declared the needed intents and the model trains fine, but it's not clear to me whether my setup works the way I intend.
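For reference, the intents are declared as standard NLU training data, roughly like this (examples abbreviated; the bot is German):

```yaml
nlu:
- intent: affirm
  examples: |
    - ja
    - genau
    - ja klar
- intent: deny
  examples: |
    - nein
    - nee
    - auf keinen Fall
```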

domain.yml

```yaml
slots:
  my_slot:
    type: bool
    influence_conversation: true
    mappings:
      - type: from_intent
        intent: affirm
        value: true
      - type: from_intent
        intent: deny
        value: false
      - type: from_llm
        allow_nlu_correction: true
```

config.yml

```yaml
recipe: default.v1
language: de
assistant_id: stern-factory
pipeline:
- name: WhitespaceTokenizer
- name: CountVectorsFeaturizer
- name: CountVectorsFeaturizer
  analyzer: char_wb
  min_ngram: 1
  max_ngram: 4
- name: DIETClassifier
  epochs: 100
  ranking_length: 10
  entity_recognition: False
- name: NLUCommandAdapter
- name: CompactLLMCommandGenerator
  llm:
    model_group: rasa_command_generation_model
  flow_retrieval:
    active: false
policies:
- name: FlowPolicy
```

Is there a way to set a confidence threshold, so that if the classified intent's confidence is below it, the from_llm mapping is used instead? In my end-to-end tests the from_intent mapping sometimes sets the wrong value. That's especially the case when the user doesn't answer with a plain yes or no but with something like "yes, I am sober".
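One idea I had (untested, and I'm not sure how FallbackClassifier interacts with the NLUCommandAdapter in a CALM pipeline) was to add a FallbackClassifier after the intent classifier, so that low-confidence predictions get relabeled and no longer match any from_intent mapping:

```yaml
pipeline:
# ... existing components up to and including DIETClassifier ...
- name: FallbackClassifier
  # if the top intent's confidence is below this threshold,
  # the message is classified as nlu_fallback instead
  threshold: 0.7
```

My hope would be that with an nlu_fallback intent, neither from_intent mapping applies and the from_llm mapping takes over. Can anyone confirm this is how the mapping fallback chain behaves?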

I also looked through the whole documentation but couldn't find anything that explains exactly how this interaction between from_intent and from_llm mappings works.

Or is there an even easier way to do this? The problem I originally had was that the from_llm mapping alone didn't set the slot when the user answered with just "yes" or "no".