I made a custom connector, and in my CollectingOutputChannel I post some data to an endpoint. In that endpoint I want to do some processing on the intent and entities extracted by my NLU model.
Example of the data extracted by the NLU model:

{
  "text": "some text",
  "intent": { "name": "some_intent", "confidence": 1 },
  "intent_ranking": [
    { "id": 3545025623098539000, "name": "some_intent", "confidence": 1 },
    { "id": -6116029395920568000, "name": "some_intent_2", "confidence": 0 }
  ],
  "entities": [
    {
      "entity": "entity1",
      "start": 0,
      "end": 3,
      "confidence_entity": 0.9999949932,
      "value": "2",
      "extractor": "DIETClassifier",
      "processors": ["EntitySynonymMapper"]
    },
    {
      "entity": "entity2",
      "start": 11,
      "end": 16,
      "confidence_entity": 0.9997606874,
      "value": "today",
      "extractor": "DIETClassifier"
    }
  ]
}
In my RestInput I get the request from my service in this format:
'{ "sender": "test_user", message: "Hi there!", metadata: {}}'
Between these two operations, Rasa does some processing to extract the entities and intent, which I can see in the logs:
rasa.core.processor - Received user message 'Hi there!' with intent '{'id': 172637494295832716, 'name': 'greet', 'confidence': 0.5666244626045227}' and entities ''
My goal is to take the data extracted by the NLU model (see the example above), send it to my output channel, and have the output channel forward that payload to the endpoint that does the processing.
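For context, Rasa's built-in callback channel follows a similar pattern: it collects bot messages and POSTs each one to a configured endpoint. Below is a minimal sketch of that forwarding mechanism, assuming Rasa 2.x (where _persist_message is a coroutine); the class name is hypothetical, this is not the author's actual channel, and it does not by itself attach the NLU parse data:

from typing import Any, Dict, Text

from rasa.core.channels.channel import CollectingOutputChannel
from rasa.utils.endpoints import EndpointConfig


class ForwardingOutputChannel(CollectingOutputChannel):
    """Hypothetical output channel that forwards every collected message."""

    @classmethod
    def name(cls) -> Text:
        return "forwarding_output"

    def __init__(self, endpoint: EndpointConfig) -> None:
        self.callback_endpoint = endpoint
        super().__init__()

    async def _persist_message(self, message: Dict[Text, Any]) -> None:
        # Keep the message in self.messages, then POST it onwards.
        await super()._persist_message(message)
        await self.callback_endpoint.request(
            "post", content_type="application/json", json=message
        )

Note that the message dicts collected here are bot responses; the NLU parse data shown earlier is not part of them, which is essentially the gap described below.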
The way I'm currently doing this: I load the last trained model into my custom channel like this:
# Imports assume Rasa 2.x module paths.
from typing import Any, Dict, Optional, Text
from rasa.core.channels.channel import InputChannel
from rasa.core.channels.rest import RestInput
from rasa.core.interpreter import RasaNLUInterpreter
from rasa.utils.endpoints import EndpointConfig


class CustomCallbackInput(RestInput):
    """A custom REST http input channel that responds using a callback server.
    Incoming messages are received through a REST interface. Responses
    are sent asynchronously by calling a configured external REST endpoint."""

    @classmethod
    def name(cls) -> Text:
        return "customcallback"

    @classmethod
    def from_credentials(cls, credentials: Optional[Dict[Text, Any]]) -> InputChannel:
        return cls(EndpointConfig.from_dict(credentials))

    def __init__(self, endpoint: EndpointConfig) -> None:
        self.callback_endpoint = endpoint
        # A second NLU model, loaded alongside the one rasa.core already uses.
        self.model = RasaNLUInterpreter('./models/current_nlu')
The problem with this approach is that when I run the channel, I have two models loaded: the one from rasa.core and this one from my custom channel. I only want to use the model from rasa.core and send its predictions to the output channel, but I don't know how to extract those predictions.