Implementation of a pretrained argumentative model

Hello everyone :wave::vulcan_salute:

I am a Master's student at the University of St. Gallen and am currently developing a chatbot for argumentative learning. Through your amazing Masterclass I've managed to create the basic chatbot functions and intents, and I also got my bot up and running on my own server with Rasa X. What's missing right now is the integration of a pretrained model as a custom action. This model automatically recognises claims and premises in the user's text and gives feedback for their learning journey. The code for the model is the following:

model.py (652 Bytes)

The code for the student feedback is the following:

app.py (3.7 KB)

Right now I'm looking for help to figure out which parts of this code are relevant for the chatbot I developed with Rasa, i.e. which lines I can use without any changes and which lines I need to adapt or even rewrite completely.

Your help would be highly appreciated. Thank you and greetings from Switzerland.:raised_hands:

Hi @TKueng, I would request you to kindly follow this blog post - The Rasa Masterclass Handbook: Episode 6 - to understand how to implement custom actions. Feel free to ask specific questions after that. Thanks
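
For orientation while reading that episode: a bare-bones custom action has roughly the shape sketched below. This is only an illustrative sketch using the same rasa_sdk imports that appear later in this thread; the action name and the message text are placeholders.

    from typing import Any, Text, Dict, List

    from rasa_sdk import Action, Tracker
    from rasa_sdk.executor import CollectingDispatcher


    class ActionExample(Action):
        """Bare-bones custom action: sends one message and returns no events."""

        def name(self) -> Text:
            # must match the action name listed in domain.yml
            return "action_example"

        def run(self,
                dispatcher: CollectingDispatcher,
                tracker: Tracker,
                domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
            # dispatcher sends messages back to the user; the return value is a list of events
            dispatcher.utter_message("Hello from a custom action!")
            return []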

Hello @dakshvar22

Thank you for your reply. My specific question right now is how I can load the pretrained model in the action file. I tried to integrate the original loading function into the run function of the action file:

    def load_bert_model():
        """Load in the pre-trained model"""
        global model
        model_path = './models.py'
        model = Inferencer.load(model_path)

    def run(self,
            dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        load_bert_model()

Do you have any thoughts on that? Is this the right way? Because right now I'm getting this error:

NameError: name 'load_bert_model' is not defined

I also wanted to know which argument gives me access to the user's text. Is it Text, as I used in the function calls below?

 elements = predict_components(Text) 
 feedback = prepare_feedback(Text, elements)

Thank you very much for your help!

@TKueng Can you share your action file as well?

Hello @dakshvar22 Of course! My current action file looks like this:

from typing import Any, Text, Dict, List

from farm.infer import Inferencer
from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher

class ActionGiveFeedback(Action):

    def __init__(self):
        self.model = Inferencer.load("C:\\Users\\tobia\\OneDrive\\Desktop\\arguebot_test\\Model")

    def name(self) -> Text:
        return "action_give_feedback"

    def predict_components(text: str):
        text_to_analyze = [{'text': '{}'.format(text)}]
        result = self.model.inference_from_dicts(dicts=text_to_analyze)

        annotated_text = [[i['label'], i['start'], i['end']] for i in result[0]['predictions'] if i['probability'] > 0.75]

        count = 0
        count_claim = 0
        count_premise = 0
        elements = []
        for ann in annotated_text:
            if ann[0] != 'O':
                elements.append({
                    'id': count,
                    'label': ann[0].lower(),
                    'start': ann[1],
                    'end': ann[2]
                })
                if ann[0].lower() == 'claim':
                    count_claim += 1
                else:
                    count_premise += 1
            else:
                continue
            count += 1

        return elements, count_claim, count_premise

    def prepare_feedback(text: str, elements: tuple):
        feedback_text = "Hier kommt das Feedback zu Deiner Argumentation, " \
                        "Claims werden *fett* und Premises _kursiv_ dargestellt:\n\n\n"
        before = 0
        for e in elements[0]:
            start = e['start']
            end = e['end']
            marker = '*' if e['label'] == 'claim' else '_'
            feedback_text += text[before:start]
            feedback_text += marker
            feedback_text += text[start:end]
            feedback_text += marker
            before = end
        if before == 0:
            feedback_text += text

        if elements[1] > elements[2] or elements[1] < 2:
            if elements[1] < 2:
                feedback_text += "\n\nIch würde dir empfehlen, deinen Text noch argumentativer zu gestalten. " \
                                 "Versuche mindestens zwei Claims mit relevanten Premises zu stützen\n"
            else:
                feedback_text += "\n\nIch würde dir empfehlen, deinen Text noch argumentativer zu gestalten. " \
                                 "Versuche Deine Claims besser mit relevanten Premises zu stützen\n"
        else:
            feedback_text += "\n\nIch empfinde Deine Argumentation als gelungen! " \
                             "Du hast mehrere Aussagen gemacht und diese mit relevanten Premises gestützt. Weiter so!\n"
        return feedback_text

    def run(self,
            dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:

        elements = predict_components(Text)
        feedback = prepare_feedback(Text, elements)
        dispatcher.utter_message("{}".format(feedback))
        return []

The current error occurs when I try to load the BERT model:

File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 193, in _run_module_as_main
  "__main__", mod_spec)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
  exec(code, run_globals)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\rasa_sdk\__main__.py", line 33, in <module>
  main()
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\rasa_sdk\__main__.py", line 29, in main
  main_from_args(cmdline_args)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\rasa_sdk\__main__.py", line 20, in main_from_args
  args.ssl_password,
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\rasa_sdk\endpoint.py", line 116, in run
  app = create_app(action_package_name, cors_origins=cors_origins)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\rasa_sdk\endpoint.py", line 68, in create_app
  executor.register_package(action_package_name)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\rasa_sdk\executor.py", line 222, in register_package
  self.register_action(action)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\rasa_sdk\executor.py", line 157, in register_action
  action = action()
File "C:\Users\tobia\OneDrive\Desktop\arguebot_test\actions.py", line 15, in __init__
  self.model = Inferencer.load("C:\\Users\\tobia\\OneDrive\\Desktop\\arguebot_test\\Model")
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\farm\infer.py", line 133, in load
  model = AdaptiveModel.load(model_name_or_path, device, strict=strict)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\farm\modeling\adaptive_model.py", line 125, in load
  language_model = LanguageModel.load(load_dir)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\farm\modeling\language_model.py", line 108, in load
  language_model = cls.subclasses[config["name"]].load(pretrained_model_name_or_path)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\farm\modeling\language_model.py", line 323, in load
  bert.model = BertModel.from_pretrained(farm_lm_model, config=bert_config, **kwargs)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\modeling_utils.py", line 414, in from_pretrained
  elif os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\file_utils.py", line 143, in is_remote_url
  parsed = urlparse(url_or_filename)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\urllib\parse.py", line 367, in urlparse
  url, scheme, _coerce_result = _coerce_args(url, scheme)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\urllib\parse.py", line 123, in _coerce_args
  return _decode_args(args) + (_encode_result,)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\urllib\parse.py", line 107, in _decode_args
  return tuple(x.decode(encoding, errors) if x else '' for x in args)
File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\urllib\parse.py", line 107, in <genexpr>
  return tuple(x.decode(encoding, errors) if x else '' for x in args)
AttributeError: 'WindowsPath' object has no attribute 'decode'

I've been pretty stuck on this error for a while now… :sweat:

I see farm.infer is your self-written custom module. Isn’t the error being raised from there when you make the call to Inferencer.load?

File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\rasa_sdk\executor.py", line 157, in register_action
  action = action()
File "C:\Users\tobia\OneDrive\Desktop\arguebot_test\actions.py", line 15, in __init__
  self.model = Inferencer.load("C:\\Users\\tobia\\OneDrive\\Desktop\\arguebot_test\\Model")

You need to double-check your custom code. I would suggest running Inferencer.load outside the Rasa project once.
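
For example, a tiny standalone script along these lines could isolate the loading problem. This is only a sketch: it reuses the Inferencer.load and inference_from_dicts calls that already appear in the code above, and the model path is simply the one mentioned earlier in this thread.

    # test_load.py - run with plain Python, outside the Rasa action server,
    # to check whether the FARM model loads and predicts at all.
    from farm.infer import Inferencer

    model_path = "C:\\Users\\tobia\\OneDrive\\Desktop\\arguebot_test\\Model"
    model = Inferencer.load(model_path)

    # same call that predict_components() makes in the action file
    result = model.inference_from_dicts(dicts=[{"text": "This is a test sentence."}])
    print(result)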


Hello @dakshvar22

So you were right, there was a problem with the model. It could not load the pretrained model because of mismatched Python, PyTorch and FARM versions. So what I did now is retrain the model with compatible versions, and I can actually load the model into my chatbot now. But when I try to trigger the custom action I get this error, which doesn't make a lot of sense to me…

2020-03-23 10:50:23 INFO     rasa_sdk.executor  - Registered function for 'action_give_feedback'.
Exception occurred while handling uri: 'http://localhost:5055/webhook'
Traceback (most recent call last):
  File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\sanic\app.py", line 942, in handle_request
    response = await response
  File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\rasa_sdk\endpoint.py", line 86, in webhook
    result = await executor.run(action_call)
  File "C:\Users\tobia\AppData\Local\Programs\Python\Python36\lib\site-packages\rasa_sdk\executor.py", line 280, in run
    events = action(dispatcher, tracker, domain)
  File "C:\Users\tobia\OneDrive\Desktop\arguebot_test\actions.py", line 83, in run
    elements = predict_components(Text) #Todo text muss noch geholt werden
NameError: name 'predict_components' is not defined

But as you can see in my code above, I did define the predict_components function… Any thoughts on this issue?

predict_components is supposed to be a method of the class ActionGiveFeedback, but I don't see a self parameter in its function signature. After adding it, you need to invoke the method as self.predict_components(text)
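
In other words, something along these lines. This is only a sketch of the changed pieces, assuming the remaining imports and methods stay exactly as in the file posted above; the string passed to predict_components is a placeholder, since getting the real user text is addressed further down in the thread.

    from typing import Any, Text, Dict, List

    from rasa_sdk import Action, Tracker
    from rasa_sdk.executor import CollectingDispatcher


    class ActionGiveFeedback(Action):

        def predict_components(self, text: str):
            # "self" turns this into an instance method, so self.model from __init__ is reachable
            ...

        def run(self,
                dispatcher: CollectingDispatcher,
                tracker: Tracker,
                domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
            # methods defined on the class must be called through self
            elements = self.predict_components("placeholder text")
            return []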

Thanks @dakshvar22, I'm close to my breakthrough. I'm going to quickly post my updated code…

from typing import Any, Text, Dict, List

from farm.infer import Inferencer
from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher 

class ActionGiveFeedback(Action):

    def __init__(self):
        self.model = Inferencer.load("C:\\Users\\tobia\\OneDrive\\Desktop\\arguebot_test\\ModelTob")

    def name(self) -> Text:
        return "action_give_feedback"

    def predict_components(self, text: str):
        text_to_analyze = [{'text': '{}'.format(text)}]
        result = self.model.inference_from_dicts(dicts=text_to_analyze)

        annotated_text = [[i['label'], i['start'], i['end']] for i in result[0]['predictions'] if i['probability'] > 0.75]

        count = 0
        count_claim = 0
        count_premise = 0
        elements = []
        for ann in annotated_text:
            if ann[0] != 'O':
                elements.append({
                    'id': count,
                    'label': ann[0].lower(),
                    'start': ann[1],
                    'end': ann[2]
                })
                if ann[0].lower() == 'claim':
                    count_claim += 1
                else:
                    count_premise += 1
            else:
                continue
            count += 1

        return elements, count_claim, count_premise

    def prepare_feedback(self, text: str, elements: tuple):
        feedback_text = "Hier kommt das Feedback zu Deiner Argumentation, " \
                        "Claims werden *fett* und Premises _kursiv_ dargestellt:\n\n\n"
        before = 0
        for e in elements[0]:
            start = e['start']
            end = e['end']
            marker = '*' if e['label'] == 'claim' else '_'
            feedback_text += text[before:start]
            feedback_text += marker
            feedback_text += text[start:end]
            feedback_text += marker
            before = end
        if before == 0:
            feedback_text += text

        if elements[1] > elements[2] or elements[1] < 2:
            if elements[1] < 2:
                feedback_text += "\n\nIch würde dir empfehlen, deinen Text noch argumentativer zu gestalten. " \
                                 "Versuche mindestens zwei Claims mit relevanten Premises zu stützen\n"
            else:
                feedback_text += "\n\nIch würde dir empfehlen, deinen Text noch argumentativer zu gestalten. " \
                                 "Versuche Deine Claims besser mit relevanten Premises zu stützen\n"
        else:
            feedback_text += "\n\nIch empfinde Deine Argumentation als gelungen! " \
                             "Du hast mehrere Aussagen gemacht und diese mit relevanten Premises gestützt. Weiter so!\n"

        return feedback_text

    def run(self,
            dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:

        elements = self.predict_components(???)
        feedback = self.prepare_feedback(???, elements)
        dispatcher.utter_message(feedback)
        return [feedback]

The million-dollar question for me now is: if I want to pass the user's input text as the argument (the ??? placeholder), what is the correct expression for that? With Text or text the function is not working…

Hi @TKueng,

you can simply use:

last_utterance = tracker.latest_message["text"]

to get the desired text!
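
Put together with the class above, the run method could then look roughly like this. This is only a sketch; it assumes predict_components and prepare_feedback are instance methods exactly as in the updated file, and it slots into that same ActionGiveFeedback class.

    def run(self,
            dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:

        # raw text of the user's latest message, taken from the tracker
        last_utterance = tracker.latest_message["text"]

        elements = self.predict_components(last_utterance)
        feedback = self.prepare_feedback(last_utterance, elements)
        dispatcher.utter_message(feedback)

        # custom actions should return a list of events; nothing to set here
        return []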

Kind regards
Julian

Thanks @JulianGerhard

You really are a true Rasa Superhero. The code is working now! But now I've got a different problem…

When I trigger my custom feedback action, the following happens:

I get a timeout error in the command prompt where I called rasa shell, while the chatbot answer that I wanted shows up in the command prompt where I started the action server.

I ignored this error and hoped that I would still get the response on the Rasa server, but I don't actually get an answer there:

Any suggestions on how I can get the chatbot's answer in the rasa shell command prompt or on my server?

Best regards and many thanks!

Tobias

@JulianGerhard @dakshvar22

Okay, what I've found now is this line in the console.py file, and I set the value to 25:

DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS = 25  

Now I get my answer back in the right console.

But how can I give the bot this extra time to answer in the Rasa X deployment on my server as well? I think this is the last step before my chatbot is fully functional!

Best regards,

Tobias