NLU pipeline - Inspecting the Message-object yourself

Hi there!

For Rasa 2.x there was this script from Intents & Entities: Understanding the Rasa NLU Pipeline:

import pathlib

from rasa.cli.utils import get_validated_path
from rasa.model import get_model, get_model_subdirectories
from rasa.core.interpreter import RasaNLUInterpreter
from rasa.shared.nlu.constants import TEXT
from rasa.shared.nlu.training_data.message import Message


def load_interpreter(model_dir, model):
    # Resolve the model archive path and load only the NLU part of the model
    path_str = str(pathlib.Path(model_dir) / model)
    model = get_validated_path(path_str, "model")
    model_path = get_model(model)
    _, nlu_model = get_model_subdirectories(model_path)
    return RasaNLUInterpreter(nlu_model)


# Loads the model
interpreter = load_interpreter(model_dir, model)
# Parses new text and prints the Message after each pipeline component
msg = Message({TEXT: text})
for p in interpreter.interpreter.pipeline:
    p.process(msg)
    print(msg.as_dict())

With it we could see the effect of each pipeline component on a sentence.
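(In that snippet model_dir, model and text are just placeholders; for illustration, they could be filled in with something like the following hypothetical values.)

# Hypothetical placeholder values for the snippet above
model_dir = "models"                    # folder containing trained model archives
model = "nlu-20210301-123456.tar.gz"    # name of a trained NLU model archive
text = "hello there"                    # any sentence to run through the pipeline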

Do you know if there is something like it for 3.x?

Thanks a lot

Hi @nonola ,

You can refer to this thread https://forum.rasa.com/t/using-rasa-nlu-with-python-code-and-without-api/56949/4?u=anoopshrma

Thanks for your quick answer @anoopshrma. How would I use it in practice?

I have this:

from fastapi import FastAPI
from rasa.core.agent import Agent

app = FastAPI()


@app.get("/predictText")
async def read_item(modelId: str, query: str):
    # Load the trained model and parse the query text with it
    modelName = f'{modelId}'
    agent_nlu = Agent.load(modelName)
    message = await agent_nlu.parse_message(query)
    # print(message)

    return {"prediction_info": message}


texto = read_item("models/nlu-20230315-120305-aquamarine-midpoint.tar.gz", "Como é que entrego o IRS ações!")

print(texto)

The print result is: <coroutine object read_item at 0x7f1696929a40>

Did it return the response immediately, or did it take some time to return it?

Try running the FastAPI server and then hitting the API endpoint.
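For example (a minimal sketch, assuming the app above is saved as main.py and the server was started with "uvicorn main:app", so it listens on the default port 8000):

# Minimal sketch: call the running FastAPI endpoint over HTTP.
# Assumes the app above is saved as main.py and the server was started
# with: uvicorn main:app   (default port 8000)
import requests

resp = requests.get(
    "http://localhost:8000/predictText",
    params={
        "modelId": "models/nlu-20230315-120305-aquamarine-midpoint.tar.gz",
        "query": "Como é que entrego o IRS ações!",
    },
)
print(resp.json()["prediction_info"])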

Since these are async methods, you need to run them either through the FastAPI server or with asyncio. I would suggest you simply run the FastAPI server and then try hitting the endpoint.
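The asyncio route could look roughly like this (a minimal sketch reusing the model path and query from your snippet; the key point is that parse_message has to be awaited, which is why calling read_item directly only printed a coroutine object):

# Minimal sketch: run the async Rasa calls with asyncio instead of FastAPI.
import asyncio

from rasa.core.agent import Agent


async def parse(model_path: str, text: str):
    agent_nlu = Agent.load(model_path)
    # parse_message is a coroutine, so it must be awaited
    return await agent_nlu.parse_message(text)


texto = asyncio.run(
    parse(
        "models/nlu-20230315-120305-aquamarine-midpoint.tar.gz",
        "Como é que entrego o IRS ações!",
    )
)
print(texto)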