I really need some help with this. The line
interpreter = Interpreter.load(model_directory1)
takes about 2 minutes to load the model, because it builds the model from scratch every time the Azure Function is called (downloading the BERT vocab and other configuration). Is there a way to save this interpreter object once and simply reuse it inside the Azure Function?
Below are the files, code, and training config I am using. I am using a pre-trained model whose directory is named "nlu_new", and I am able to get predictions.
requirements.txt ------------------------------------------------------------------------>
azure-functions
rasa
rasa[transformers]
__init__.py ------------------------------------------------------------------------------------------->
import logging
import azure.functions as func
from rasa.nlu.model import Interpreter
import json
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    message = req.get_json()
    logging.info(message)
    msg1 = message.get("text")
    model_directory1 = './nlu_new'
    interpreter = Interpreter.load(model_directory1)  # this should be an extracted model
    result = interpreter.parse(msg1, only_output_properties=False)
    new_result = dict(text=result['text'], intent=result['intent'],
                      entities=result['entities'], intent_ranking=result['intent_ranking'])
    logging.info(new_result)
    return func.HttpResponse(
        json.dumps(new_result), mimetype="application/json",
        status_code=200)
Model Trained on ---------------------------------------------------------- >
language: en
pipeline:
  - name: HFTransformersNLP
    model_weights: "bert-base-uncased"
    model_name: "bert"
  - name: LanguageModelTokenizer   # splits the sentence into tokens
  - name: LanguageModelFeaturizer
  - name: DIETClassifier