Trying to implement a custom component with sentiment (NLTK only)

I have created a custom component sentiment.py that uses only NLTK, and my pipeline is just:

pipeline:

- name: "sentiment.SentimentAnalyzer"

I have implemented a run_nlu function to run the interpreter:

from rasa_nlu.training_data import load_data
from rasa_nlu.model import Trainer
from rasa_nlu import config
from rasa_nlu.model import Interpreter, Metadata

def run_nlu():
    interpreter = Interpreter.load('config.yml')
    print(Interpreter.parse("It's a wonderful application which i am using"))

if __name__ == '__main__':
    run_nlu()

But every time I get errors such as "can't load the components" or:

message = Message(text, self.default_output_attributes(), time=time)
AttributeError: 'str' object has no attribute 'default_output_attributes'

Hi @MohamedLotfyElrefai,

welcome to the community!

  • Which rasa version are you using?
  • I’m sure there is a longer stack trace; can you please share it with us?

Also, can you please try a lowercase interpreter (so interpreter.parse(...) instead of Interpreter.parse(...))? Calling parse on the Interpreter class instead of an instance would pass your text string in as self, which would explain the 'str' object has no attribute 'default_output_attributes' error.
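For instance, a minimal sketch of the lowercase usage (the model path below is only a placeholder for a trained NLU model directory):

from rasa.nlu.model import Interpreter

# hypothetical path to a trained NLU model directory
interpreter = Interpreter.load("./models/nlu/default/model_20191201-120000")
print(interpreter.parse("It's a wonderful application which i am using"))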

Hi @Tobias_Wochinger

rasa.__version__
    '1.4.5'

This is my script for sentiment.py:

from rasa.nlu.components import Component
from rasa.nlu import utils
from rasa.nlu.model import Metadata

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
import os

class SentimentAnalyzer(Component):
    """A pre-trained sentiment component"""

    name = "sentiment"
    provides = ["entities"]
    requires = []
    defaults = {}
    language_list = ["en"]

    def __init__(self, component_config=None):
        super(SentimentAnalyzer, self).__init__(component_config)

    def train(self, training_data, cfg, **kwargs):
        """Not needed, because the model is pre-trained"""
        pass

    def convert_to_rasa(self, value, confidence):
        """Convert model output into the Rasa NLU compatible output format."""

        entity = {"value": value,
                  "confidence": confidence,
                  "entity": "sentiment",
                  "extractor": "sentiment_extractor"}

        return entity

    def process(self, message, **kwargs):
        """Retrieve the text message, pass it to the classifier
        and append the prediction results to the message class."""

        sid = SentimentIntensityAnalyzer()
        print(message.text)
        res = sid.polarity_scores(message.text)
        key, value = max(res.items(), key=lambda x: x[1])
        # convert to output of rasa
        entity = self.convert_to_rasa(key, value)
        # output of the rasa model
        message.set("entities", [entity], add_to_output=True)

    def persist(self, model_dir):
        """Pass because a pre-trained model is already persisted"""

        pass
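For reference, a quick standalone sketch of what the process method computes with VADER, assuming the vader_lexicon NLTK resource is available (or can be downloaded) at runtime:

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the lexicon data

sid = SentimentIntensityAnalyzer()
# polarity_scores returns a dict with 'neg', 'neu', 'pos' and 'compound' scores
res = sid.polarity_scores("It's a wonderful application which i am using")
# the component keeps the highest-scoring key/value pair as the "sentiment" entity
key, value = max(res.items(), key=lambda x: x[1])
print(key, value)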

And to run the NLU, I have made this nlu_model.py script:

from rasa_nlu.training_data import load_data
from rasa_nlu.model import Trainer
from rasa_nlu import config
# from rasa.nlu.config import RasaNLUModelConfig
# from rasa.nlu.model import Trainer
from rasa_nlu.model import Interpreter, Metadata
# from sentiment_pretrained_model import SentimentAnalyzer

def run_nlu():
    interpreter = Interpreter.load('config.yml')
    print(interpreter.parse("It's a wonderful application which i am using"))

if __name__ == '__main__':
    # train_nlu('./data/data.json', 'config_spacy.json', './models/nlu')
    run_nlu()

I think the problem is that I am initializing the Interpreter class without a trained model, since my pipeline contains only the sentiment script and no spaCy pipeline. When I run the above nlu_model.py I get this error:

File "/home/ai/anaconda3/envs/rasa/lib/python3.6/site-packages/rasa/nlu/model.py", line 73, in load
    f"Failed to load model metadata from '{abspath}'. {e}"
rasa_nlu.model.InvalidModelError: Failed to load model metadata from '/home/ai/nlp/chatbots/rasa/using_models/version1.0/config.yml/metadata.json'. [Errno 20] Not a directory: 'config.yml/metadata.json'

The problem is that you never train a model; you just load the config file and then expect a model to parse your messages.

Your code should do the following steps (sketched below):

  1. Train a model
  2. Load this model
  3. Use this model
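A minimal sketch of those three steps with the Rasa 1.x Python API (the data and config paths are placeholders for your own files):

from rasa.nlu.training_data import load_data
from rasa.nlu.model import Trainer, Interpreter
from rasa.nlu import config

def train_nlu(data_path, config_path, model_dir):
    # 1. Train a model with your pipeline (including sentiment.SentimentAnalyzer)
    training_data = load_data(data_path)
    trainer = Trainer(config.load(config_path))
    trainer.train(training_data)
    # persist returns the directory of the trained model
    return trainer.persist(model_dir)

def run_nlu(model_path):
    # 2. Load the trained model (a model directory, not config.yml)
    interpreter = Interpreter.load(model_path)
    # 3. Use it
    print(interpreter.parse("It's a wonderful application which i am using"))

if __name__ == '__main__':
    model_path = train_nlu('./data/nlu.md', 'config.yml', './models/nlu')
    run_nlu(model_path)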

However, you can also skip coding this and just use the shell commands rasa train and rasa run --enable-api, and then use the Rasa HTTP API to get the parsing results. I think this would be the more robust approach.
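For example, with a trained model served via rasa run --enable-api, something like the following should return the parse result (assuming the default port 5005):

import requests

# POST to the model parse endpoint of the running Rasa server
response = requests.post(
    "http://localhost:5005/model/parse",
    json={"text": "It's a wonderful application which i am using"},
)
print(response.json())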

@Tobias_Wochinger What if I already have a trained model and just want to load it rather than train it, or want to reuse a model I implemented before and only interpret with it? As in my example, I am using a pre-trained sentiment model from NLTK; what if I want my pipeline to contain only a spell checker?

@MohamedLotfyElrefai did you ever find a solution? I am trying to extend the Rasa image and install the VADER lexicon through the Dockerfile. However, it still does not seem to find the lexicon…
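One way to narrow this down is to check at runtime where NLTK is actually looking for the lexicon, and download it if it is missing. A minimal sketch (the /app/nltk_data path and the use of NLTK_DATA are assumptions about your container layout):

import os
import nltk

# NLTK searches the directories in nltk.data.path; inside a container this
# usually needs to be a path the rasa user can read, e.g. set via the
# NLTK_DATA environment variable (the path below is only an example).
os.environ.setdefault("NLTK_DATA", "/app/nltk_data")
nltk.data.path.append(os.environ["NLTK_DATA"])

try:
    nltk.data.find("sentiment/vader_lexicon.zip")
except LookupError:
    nltk.download("vader_lexicon", download_dir=os.environ["NLTK_DATA"])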