Providing conversation context to the NLU using microservices

Well, you have roughly 60,000 TCP ports to work with, so I think even one IP/URL should be enough to support all of them (then again, Microsoft thought in the '80s that no one would ever need more than 640K of RAM in a PC). But yes, if you run out of TCP ports, you would need to create another URL for the next set of microservices. It's not really a limitation, though.

I really like this line of thinking, but I'm curious, @lgrinberg, how would your initial setup (firstname, lastname, etc.) cope if the person decided to give all of their information at once? (As users are wont to do sometimes, ignoring prompts, etc.) I suppose one would need to:

a) ignore/not deal with that possibility,
b) have an even higher-level NLU interpreter (a meta NLU) to determine the context rather than assume it from the Core setup, or
c) have your action measure the sentence length and decide accordingly.

What do you think?

In my initial setup, meaning the one I used for the demo, I did not account for this possibility. So let's think about this. The whole reason we're using microservices and relying on context for entity and intent identification is that we have ambiguous responses. If the user wants to respond to the "What is your name" question by giving us his full name, e.g. 'my name is John Smith', it should be easier for us to understand his response, and we could identify it as 'inform_name_lastname' with the name and last_name entities, instead of the generic inform intent. That would be the first way to deal with it, the one you identified in a).

The other way to do it would be to include this possibility in the microservice itself, i.e. the 'first_name' microservice NLU model would be smart enough to also understand 'my name is John Smith' and to extract 'John' as the first name and 'Smith' as the last name. That would be similar to your approach b).
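To make that concrete, here is a minimal sketch of what calling such a 'first_name' microservice might look like if its model is also trained on full-name utterances. The URL, payload shape, and helper name are placeholders for illustration, not the actual setup from the demo.

```python
import requests

# Hypothetical endpoint for the context-specific 'first_name' NLU microservice.
FIRST_NAME_NLU_URL = "http://localhost:5001/model/parse"


def parse_with_first_name_nlu(text: str) -> dict:
    """Send the raw user message to the 'first_name' microservice and return its parse result."""
    response = requests.post(FIRST_NAME_NLU_URL, json={"text": text})
    response.raise_for_status()
    return response.json()


# If the microservice's model is also trained on full-name utterances, then both
#   parse_with_first_name_nlu("John")
#   parse_with_first_name_nlu("my name is John Smith")
# should come back with a usable first_name entity (and, for the second one,
# a last_name entity as well).
```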

The third way would be for the ActionSNLU invoked after the generic 'inform' intent NOT to send the user's message straight to the appropriate NLU microservice determined by the context. Instead, you'd run some logic, either on sentence length as you suggested, or on the presence of multiple entities, or something else, to figure out whether this is a simple one-word answer or something you could send to some other NLU service. That would be like approach c).
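As a rough sketch of what that routing logic could look like inside a custom action (not the actual ActionSNLU from the demo): the microservice URLs, the `requested_slot` lookup, the entity-count/word-count heuristic, and the assumption that slot names match entity names are all placeholders here.

```python
from typing import Any, Dict, List, Text

import requests

from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher

# Hypothetical mapping from the current context to the matching NLU microservice.
CONTEXT_NLU_URLS = {
    "first_name": "http://localhost:5001/model/parse",
    "last_name": "http://localhost:5002/model/parse",
}


class ActionRouteNLU(Action):
    """Decide whether to trust the general NLU result or to re-parse the
    message with a context-specific microservice."""

    def name(self) -> Text:
        return "action_route_nlu"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        text = tracker.latest_message.get("text", "")
        entities = tracker.latest_message.get("entities", [])
        # Assumed to hold the current context (e.g. which piece of info we asked for).
        context = tracker.get_slot("requested_slot")

        # Heuristic from the discussion: a longer message or multiple extracted
        # entities suggests the general model already understood it.
        if len(text.split()) > 2 or len(entities) > 1:
            # Assumes slot names match entity names.
            return [SlotSet(e["entity"], e["value"]) for e in entities]

        # Otherwise, re-parse the short, ambiguous answer with the
        # context-specific microservice.
        url = CONTEXT_NLU_URLS.get(context)
        if url is None:
            return []
        parsed = requests.post(url, json={"text": text}).json()
        return [SlotSet(e["entity"], e["value"]) for e in parsed.get("entities", [])]
```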

I think that, depending on your situation, any one of those could be appropriate. In general, though, it seems to me that since the whole reason to use context-based microservices is to help the NLU model disambiguate, then where the response is not that ambiguous, your general NLU model should be able to classify the intent properly and extract the correct entities (option a).

I hope that makes sense.

I'm super new to Rasa, but would it be feasible to insert a context symbol into the training data and then create a custom NLU component at the beginning of the pipeline that inserts the same symbol based on context?

e.g., training data looks like:

NAME_TOKEN Georgia(name)
PLACE_TOKEN Georgia(place)

Then, at runtime, you'd create a custom component that inserts the magic token at the beginning of the text based on the current intent?
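Something along these lines might work, though I haven't tested it. This sketch uses the Rasa 1.x custom-component API; the token mapping is hypothetical, and how the current context actually reaches the NLU (shown here as message metadata) is the open question, since the component has no access to the tracker on its own.

```python
from typing import Any

from rasa.nlu.components import Component
from rasa.nlu.training_data import Message


class ContextTokenInserter(Component):
    """Prepend a context token (e.g. NAME_TOKEN) to the message text so the
    downstream featurizers and extractors see the same symbol that was used
    in the training data. Must sit at the start of the pipeline, before the
    tokenizer, to have any effect."""

    # Hypothetical mapping from a context label to the magic token used in training data.
    CONTEXT_TOKENS = {"name": "NAME_TOKEN", "place": "PLACE_TOKEN"}

    def process(self, message: Message, **kwargs: Any) -> None:
        # Assumes the channel (or some prior step) supplies the current context,
        # e.g. as message metadata; without that, the component has nothing to go on.
        metadata = message.get("metadata") or {}
        token = self.CONTEXT_TOKENS.get(metadata.get("context"))
        if token:
            message.text = f"{token} {message.text}"
```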

Or maybe there is a less intrusive way, by subclassing an NLU component like CRFEntityExtractor? You could define the legal entity types for each bot context (manually, or just by processing stories.md). Then, as the entity extraction component retrieves the predicted entities from spaCy, check whether the top prediction is a legal entity for the context. If not, check the second prediction, and if that one is legal, use it instead. It wouldn't solve 100% of cases, but probably the vast majority. (Again, apologies for not having dug into it in detail yet.)
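A simpler variant of that idea (filtering the extractor's output against a per-context allowlist, rather than falling back to the CRF's second-best prediction, which would need access to the tagger's marginals) might look something like this. The context-to-entity mapping and the metadata lookup are assumptions, and the imports follow the Rasa 1.x layout.

```python
from typing import Any

from rasa.nlu.extractors.crf_entity_extractor import CRFEntityExtractor
from rasa.nlu.training_data import Message


class ContextFilteredCRFEntityExtractor(CRFEntityExtractor):
    """Run the normal CRF extraction, then drop entity types that are not
    legal in the current bot context."""

    # Hypothetical per-context allowlist; could also be built by processing stories.md.
    LEGAL_ENTITIES = {
        "ask_name": {"name"},
        "ask_place": {"place"},
    }

    def process(self, message: Message, **kwargs: Any) -> None:
        super().process(message, **kwargs)

        # Assumes the current context is supplied to the NLU, e.g. as message metadata.
        context = (message.get("metadata") or {}).get("context")
        allowed = self.LEGAL_ENTITIES.get(context)
        if allowed is None:
            return

        entities = message.get("entities", [])
        filtered = [e for e in entities if e.get("entity") in allowed]
        message.set("entities", filtered, add_to_output=True)
```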

Hi, did you happen to complete it? If so, I'd like to take a look at it, please. I need to be able to access the tracker in one of my custom NLU components so I can determine which sender sent the message (sender_id).

No, we never did, but there was a PR in Rasa: https://github.com/RasaHQ/rasa/pull/3681. I see you already found it. I'm not sure which version of Rasa it landed in, but considering it was merged almost a year and a half ago, it should definitely be in one of the more recent versions.

Thanks! I figured it out.

Hi Alex,

I am also currently working on contextual NLU and want to use tracker information as input to the NLU, but I'm not sure how you managed to pass the tracker as a parameter to the parse() method. Could you tell me which function you called to get the tracker information: something from rasa_sdk, or from rasa.shared.core? Thanks a lot!