import pathlib

from rasa.cli.utils import get_validated_path
from rasa.model import get_model, get_model_subdirectories
from rasa.core.interpreter import RasaNLUInterpreter
from rasa.shared.nlu.constants import TEXT
from rasa.shared.nlu.training_data.message import Message

def load_interpreter(model_dir, model):
    path_str = str(pathlib.Path(model_dir) / model)
    model_path = get_validated_path(path_str, "model")
    unpacked = get_model(model_path)
    _, nlu_model = get_model_subdirectories(unpacked)
    return RasaNLUInterpreter(nlu_model)

# Load the model
interpreter = load_interpreter(model_dir, model)

# Parse new text
msg = Message({TEXT: text})
for component in interpreter.interpreter.pipeline:
    component.process(msg)
print(msg.as_dict())
With this we could see the pipeline's effect on a sentence. Do you know if there is something like it for 3.x?
Did it return the response very quickly, or did it take time to return the response?
Try running the FastAPI server and then hitting the API endpoint.
Since these are async methods, you either use the given FastAPI server or asyncio to run them. I would suggest you simply run the FastAPI server and then try hitting the endpoint.
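If you prefer the asyncio route instead of running the server, the pattern is the sketch below. In Rasa 3.x the usual replacement for `RasaNLUInterpreter` is `rasa.core.agent.Agent` with its async `parse_message` method; here a hypothetical stand-in coroutine takes its place so the snippet runs without a trained model installed:

```python
import asyncio

# Hypothetical stand-in for an async Rasa 3.x call such as
# `await agent.parse_message(text)` after `agent = Agent.load(model_path)`.
# A real agent would run the NLU pipeline; we fake a parse result here.
async def parse_message(text):
    return {"text": text, "intent": {"name": "greet", "confidence": 1.0}}

async def main():
    result = await parse_message("hello there")
    print(result)

# asyncio.run drives the coroutine to completion from synchronous code.
asyncio.run(main())
```

The same `asyncio.run(...)` wrapper works for any of the async Rasa methods when you are not already inside an event loop (e.g. in a plain script rather than a Jupyter notebook).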