For my use case, I need different NLU models depending on the current context (because, depending on the context, the same text should be categorized differently).
Thus, I need around 20 different NLU models. In theory, I would only need 20 different DIETClassifiers, NOT 20 instances of the - same - language model, because its weights are not touched by Rasa during training.
Loading each trained NLU model with Agent.load("…") loads the language model dependency again every time. Can the same LM be reused across different NLU pipelines with some kind of singleton logic? Ideal would be a way to point at a specific language model featurizer from within the config.
I am also open to other solutions for the requirement of business-logic-contextual NLU.
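To make the singleton idea concrete, here is a rough sketch of what I mean by "reuse the same LM" (all names are made up for illustration; this is not a Rasa API, just the caching pattern I am after):

```python
from functools import lru_cache

def _load_weights(model_name: str) -> dict:
    # Hypothetical stand-in for the expensive language-model load;
    # in reality this would load the transformer weights once.
    return {"name": model_name, "weights": "..."}

@lru_cache(maxsize=None)
def get_language_model(model_name: str) -> dict:
    # lru_cache turns this into a per-name singleton: the expensive
    # load runs only on the first call, later calls return the
    # cached object, which all 20 pipelines could then share.
    return _load_weights(model_name)
```

The question is whether something like this can hook into the point where each loaded agent resolves its language model featurizer.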
Of course. The intent classification should be used (from voice → ASR) to recognize the chosen answer option for the CURRENT (context) question of a medical questionnaire. The answer options are mostly very similar, yet stand for very different answers:
"Is it easy for you to climb the stairs?"
"always" (e.g. "I have no problems with climbing the stairs" would probably be categorized as "never" by a model that has no context-specific training samples, so I think it would be best to train a different classification head for each case; business logic decides which question is current, so that should not be predicted by the model)
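A rough sketch of the routing I have in mind (all names and the keyword stubs are hypothetical, just to show the shape of it):

```python
# Business logic, not the model, selects the active question; each
# question gets its own context-specific classifier head.

def classify_stairs(text: str) -> str:
    # Stand-in for a head trained only on this question's answer
    # options ("always", "mostly", "never", ...).
    return "always" if "no problems" in text else "never"

def classify_pain(text: str) -> str:
    # Stand-in for a different question's head.
    return "none" if "no pain" in text else "severe"

CLASSIFIERS = {
    "climb_stairs": classify_stairs,
    "pain_level": classify_pain,
}

def classify_answer(current_question: str, text: str) -> str:
    # current_question comes from the questionnaire's business logic.
    return CLASSIFIERS[current_question](text)
```

So "I have no problems with climbing the stairs" is only ever scored against the answer options of the stairs question.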
That sounds like the best way in Rasa, but can I use forms without using Rasa's dialogue management component, i.e. only as a filter for intents in my own code?
import asyncio
from rasa.core.agent import Agent

agent = Agent.load("my_model_path")
# parse_message is a coroutine, so it has to be run in an event loop
message = "I have no problems with climbing the stairs"
result = asyncio.run(agent.parse_message(message))
# how to use forms in this example?
And don't you think the NLU training would suffer from very similar entities like "mostly good", "good", "very good", "quite good"…? Semantically many of them are near-synonyms that differ only slightly in wording, which could confuse the model.