NLU pipeline - Inspecting the Message-object yourself

Hi there!

For Rasa 2.x there was this script from "Intents & Entities: Understanding the Rasa NLU Pipeline":

import pathlib

from rasa.cli.utils import get_validated_path
from rasa.model import get_model, get_model_subdirectories
from rasa.core.interpreter import RasaNLUInterpreter
from rasa.shared.nlu.constants import TEXT
from rasa.shared.nlu.training_data.message import Message

def load_interpreter(model_dir, model):
    path_str = str(pathlib.Path(model_dir) / model)
    model = get_validated_path(path_str, "model")
    model_path = get_model(model)
    _, nlu_model = get_model_subdirectories(model_path)
    return RasaNLUInterpreter(nlu_model)

# Load the model
interpreter = load_interpreter(model_dir, model)
# Parse new text
msg = Message({TEXT: text})
# Run each pipeline component over the message in turn
for p in interpreter.interpreter.pipeline:
    p.process(msg)

With it, we could see each pipeline component's effect on a sentence.

Do you know if there is something like it for 3.x?

Thanks a lot

Hi @nonola ,

You can refer to this thread

Thanks for your quick answer @anoopshrma. How can I use it with an example?

I have this:

from rasa.core.agent import Agent

async def read_item(modelId: str, query: str):
    modelName = f'{modelId}'
    agent_nlu = Agent.load(modelName)
    message = await agent_nlu.parse_message(query)
    # print(message)

    return {"prediction_info": message}

texto = read_item("models/nlu-20230315-120305-aquamarine-midpoint.tar.gz", "Como é que entrego o IRS ações!")


The printed result is: <coroutine object read_item at 0x7f1696929a40>

Did it return the response immediately, or did it take time to return it?

Try running the FastAPI server and then hitting the API endpoint.

Since these are async methods, you need to either use the given FastAPI server or run them with asyncio. I would suggest you simply run the FastAPI server and then try hitting the endpoint.
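To illustrate why the print showed a coroutine object instead of a result, here is a minimal, Rasa-independent sketch. The `read_item` below is a hypothetical stand-in for the async function above (the `await asyncio.sleep(0)` replaces the real `parse_message` call); any async function behaves the same way:

```python
import asyncio

# Hypothetical stand-in for the async read_item endpoint; the sleep
# is a placeholder for the awaited Rasa parse_message call.
async def read_item(model_id: str, query: str) -> dict:
    await asyncio.sleep(0)
    return {"prediction_info": {"text": query}}

# Calling an async function does NOT execute its body;
# it only builds and returns a coroutine object.
coro = read_item("model.tar.gz", "hello")
print(coro)   # <coroutine object read_item at 0x...>
coro.close()  # discard it to avoid a "never awaited" warning

# asyncio.run drives a coroutine to completion and returns its result.
result = asyncio.run(read_item("model.tar.gz", "hello"))
print(result)  # {'prediction_info': {'text': 'hello'}}
```

Inside a running FastAPI server the framework does this awaiting for you, which is why hitting the endpoint returns the actual prediction rather than a coroutine object.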