Using nlu model with Interpreter.load in Rasa 3.0

In Rasa 2.x, we were able to load a trained NLU model with the following code to parse data:

from rasa.nlu.model import Interpreter

interpreter = Interpreter.load('models/nlu-xyz/nlu')
result = interpreter.parse('hello world', only_output_properties=False)
embeds = result.get("text_sparse_features")

However, I’m getting an error because Interpreter was removed in Rasa 3.0:

ImportError: cannot import name 'Interpreter' from 'rasa.nlu.model'

Is there another approach in Rasa 3.0 that can achieve the same thing?


+1 @nik202 Could you help us?

@isgaal can you share some more information?

Thanks for your help, @nik202.

I only want to use the NLU part of Rasa, so I would like to load an NLU model from a script and read the model’s predictions. In version 2.x I was using the following script:

# Reading a model in Rasa 2.x
from rasa.nlu.model import Metadata, Interpreter

model_directory = "./model_20191202-171621"
sentence = "i want to buy a ticket"

interpreter = Interpreter.load(model_directory)
meta = Metadata.load(model_directory)
output = interpreter.parse(str(sentence))
print(output)

How can I convert this script to Rasa 3.0?

We’re getting this error:

ImportError: cannot import name 'Interpreter' from 'rasa.nlu.model'

@isgaal Using NLU Only

Is it possible to do it without the command line or a server?

@isgaal I never tried a different approach, so I can’t comment.

Okay, thanks a lot for your help

Hi,
Yes, you can do it without the command line or a server, with both Rasa 2 and 3. Try:

from rasa.core.agent import Agent

agent = Agent.load(model_path='/path/to/my/model.tar.gz')

# For Rasa 2 (I tried it with 2.8.8):
# agent.parse_message_using_nlu_interpreter(message_data='Hello there')

# For Rasa 3:
agent.parse_message(message_data='Hello there')

Beware: these methods are coroutines, so you need to await them or run them in an event loop:

# Not a great solution, but it shows the idea
import asyncio
asyncio.run(agent.parse_message_using_nlu_interpreter(message_data='Hello there'))

My output (for Rasa 2, but it should be the same for Rasa 3):

In [5]: asyncio.run(agent.parse_message_using_nlu_interpreter(message_data="Hello there"))                                                                                                    
Out[5]: 
{'text': 'Hello there',
 'intent': {'id': -5370814688626815577,
  'name': 'greet',
  'confidence': 0.9921479821205139},
 'entities': [],
 'intent_ranking': [{'id': -5370814688626815577,
   'name': 'greet',
   'confidence': 0.9921479821205139},
 #... More intent ranking 
],
 'response_selector': {'all_retrieval_intents': [],
  'default': {'response': {'id': None,
    'responses': None,
    'response_templates': None,
    'confidence': 0.0,
    'intent_response_key': None,
    'utter_action': 'utter_None',
    'template_name': 'utter_None'},
   'ranking': []}}}
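Once you have the parsed result, pulling out fields is plain dict access. A small sketch using a hard-coded dict that mirrors the output shape above, so it runs without a trained model:

```python
# Stand-in for the dict returned by parse_message (same shape as above).
result = {
    "text": "Hello there",
    "intent": {"name": "greet", "confidence": 0.9921479821205139},
    "entities": [],
}

# Top intent and its confidence are nested under "intent".
top_intent = result["intent"]["name"]
confidence = result["intent"]["confidence"]
print(top_intent, round(confidence, 2))  # greet 0.99
```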

Hope that helps!


Thanks, @E-dC for the solution.

Ty @E-dC

You were right!

Here is my function:

import asyncio
from rasa.core.agent import Agent
from rasa.shared.utils.io import json_to_string

class Model:

    def __init__(self, model_path: str) -> None:
        self.agent = Agent.load(model_path)
        print("NLU model loaded")


    def message(self, message: str) -> str:
        message = message.strip()
        result = asyncio.run(self.agent.parse_message(message))
        return json_to_string(result)

mdl = Model("./models/nlu.tar.gz")
sentence = "hello"
print(mdl.message(sentence))

The solution was in this Rasa function.
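One caveat with `asyncio.run`: it raises `RuntimeError` if an event loop is already running, which is the case inside a Jupyter notebook; there you would `await agent.parse_message(...)` directly instead. A minimal sketch with a stand-in coroutine (no Rasa dependency), just to show the pattern:

```python
import asyncio

# Stand-in for agent.parse_message: any coroutine returning a dict.
async def fake_parse(message_data: str) -> dict:
    return {"text": message_data, "intent": {"name": "greet"}}

# From plain synchronous code, asyncio.run works fine:
result = asyncio.run(fake_parse("hello"))
print(result["intent"]["name"])  # greet

# Inside Jupyter (where a loop is already running) you would instead write:
#   result = await agent.parse_message("hello")
```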


Hey mate, thanks so much for the solution. With your code, I am able to load models in Rasa 3.3.0 (with Python 3.7.10) that aren’t trained with a LanguageModelFeaturizer. The class is shown below:

import asyncio
from rasa.core.agent import Agent
from rasa.shared.utils.io import json_to_string  # only for testing with the example

class Load_Rasa_NLU:
    def __init__(self, model_path: str) -> None:
        self.agent = Agent.load(model_path)
        print("NLU model loaded")

    def nlu_processing(self, message: str) -> str:
        message = message.strip()
        results = asyncio.run(self.agent.parse_message(message))
        return json_to_string(results)

For loading models:

nlu_model_path = "what_ever_a_path"
nlu_model = Load_Rasa_NLU(nlu_model_path)

But once I try to load a model that was trained with a language model featurizer (such as BERT), I receive this error:

---------------------------------------------------------------------------
HFValidationError                         Traceback (most recent call last)
~/anaconda3/envs/rasa3.2/lib/python3.7/site-packages/rasa/engine/graph.py in _load_component(self, **kwargs)
    394                 execution_context=self._execution_context,
--> 395                 **kwargs,
    396             )

~/anaconda3/envs/rasa3.2/lib/python3.7/site-packages/rasa/engine/graph.py in load(cls, config, model_storage, resource, execution_context, **kwargs)
    216         """
--> 217         return cls.create(config, model_storage, resource, execution_context)
    218 

~/anaconda3/envs/rasa3.2/lib/python3.7/site-packages/rasa/nlu/featurizers/dense_featurizer/lm_featurizer.py in create(cls, config, model_storage, resource, execution_context)
     97         """
---> 98         return cls(config, execution_context)
     99 

~/anaconda3/envs/rasa3.2/lib/python3.7/site-packages/rasa/nlu/featurizers/dense_featurizer/lm_featurizer.py in __init__(self, config, execution_context)
     64         self._load_model_metadata()
---> 65         self._load_model_instance()
     66 

~/anaconda3/envs/rasa3.2/lib/python3.7/site-packages/rasa/nlu/featurizers/dense_featurizer/lm_featurizer.py in _load_model_instance(self)
    150         self.tokenizer = model_tokenizer_dict[self.model_name].from_pretrained(
--> 151             self.model_weights, cache_dir=self.cache_dir
...
--> 405                 ) from e
    406             else:
    407                 logger.error(

GraphComponentException: Error initializing graph component for node run_LanguageModelFeaturizer5.

The language model is cloned directly from its Hugging Face repo and stored in the root directory of the Rasa project (parallel to ./data or ./models).

Could you please help me confirm this problem in Rasa 3, or update the API for evaluating NLU models in a Jupyter notebook?

Cheers :)))

Hi @wyw231, I’m not entirely sure, but I don’t think that Agent can load arbitrary NLU models; they must specifically be NLU models trained with Rasa (see the Agent class: https://github.com/RasaHQ/rasa/blob/main/rasa/core/agent.py#L286).

You write

Have you tried training an NLU model with Rasa, but using your Hugging Face model in the parameters? See: Components, especially the “Configuration” section.
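For what it’s worth, wiring a Hugging Face model into a Rasa pipeline is done in config.yml. A sketch (the model_name / model_weights values here are examples; model_weights can be a Hub name or a local path):

```yaml
# config.yml (excerpt, example values)
pipeline:
  - name: WhitespaceTokenizer
  - name: LanguageModelFeaturizer
    model_name: bert                  # base architecture
    model_weights: bert-base-uncased  # HF Hub name or local path
  - name: DIETClassifier
    epochs: 100
```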

Hope that helps

Hey mate, thanks for your lightning-fast reply. I found another solution by posting requests to the localhost HTTP API started by rasa run --enable-api -m models/nlu-xxxx.tar.gz.

The NLU model I trained added a LanguageModelFeaturizer as the dense featurizer, with the BERT base architecture but weights from GBERT. The model works perfectly from the command line with rasa shell models/xxxx, but cannot be loaded via Agent.load() like the other NLU models without a LanguageModelFeaturizer. I haven’t figured out why, but the new method works really well. Here are the steps:

1. Start a local or remote Rasa service by running the command provided by Rasa:

rasa run --enable-api -m models/nlu-your-models.tar.gz

2. Build the connection between the Rasa service and your Python script:

import json
import requests

host_url = "http://localhost:5005/model/parse"  # to determine the host port, refer to endpoints.yml

def nlu_respond(phrase: str) -> dict:
    nlu_data = json.dumps({'text': f"{phrase}"})
    nlu_resp = requests.post(host_url, data=nlu_data).json()
    return nlu_resp

3. Once this function is created, you can use the NLU model’s response however you like:

nlu_respond("Hello, how's going?")

Thanks again for your help mate, have a great week :))
