Poor prediction with forms: please advise

I have a form whose questions all expect answers in a similar format. The answers are all numbers that look alike, and this is really confusing intent classification (this is the main problem). They are usually single digits, so regexes do not help, and they get extracted as entity=number. However, Core is not identifying that one intent is different from another.

If I make them the same intent (i.e. inform_number), how can I write a single action for that one intent, even though I have multiple slots and questions?

How can I tell my form that an inform_X intent is way more likely when the requested_slot is asking for X, as opposed to Y, even though X and Y intents have similar training data (numbers)?

For example, two intents that trigger fulfillment of different slots are:

## intent:inform_partyVehics
- One
- 2 because 1 had to leave early
- 3
- 1 motorcycle
- 2
- 1 mini van

## intent:inform_vehicleCount

Rasa Core is predicting inform_vehicleCount after inform_peopleCount, even though in my training data inform_vehicleCount never follows inform_peopleCount; inform_peopleCount is only ever followed by inform_partyVehics. inform_partyVehics should be predicted with higher confidence after the vehicleCount slot has already been filled.

Here is an example of the training data in core to show you what I mean:

* hello
    - utter_hello
    - count_episode
    - form{"name": "count_episode"}
    - slot{"requested_slot": "seenBefore"}
* form: affirm
    - form: count_episode
    - slot{"seenBefore": true}
    - slot{"requested_slot": "seenPastDay"}
* form: deny
    - form: count_episode
    - slot{"seenPastDay": false}
    - slot{"requested_slot": "vehicleCount"}
* form: inform_vehicleCount{"number": 5}
    - form: count_episode
    - slot{"vehicleCount": ["5", 5]}
    - slot{"requested_slot": "countTime"}
* form: inform_countTime{"timeAbs": "4 pm"}
    - form: count_episode
    - slot{"countTime": "4 pm"}
    - slot{"requested_slot": "peopleCount"}
* form: inform_peopleCount{"number": 2}
    - form: count_episode
    - slot{"peopleCount": ["2", 2]}
    - slot{"requested_slot": "partyVehics"}
* form: inform_partyVehics{"number": 1}
    - form: count_episode
    - slot{"partyVehics": ["1", 1]}

policies:
  - name: FormPolicy
  - name: MemoizationPolicy
    max_history: 3
  - name: KerasPolicy
    epochs: 50
    max_history: 5
  - name: FallbackPolicy
    nlu_threshold: 0.4
    core_threshold: 0.3

The NLU and Core are, for simplicity, completely disjoint. In fact, they are so disjoint that you can run the NLU as a server while building your bot, so that Core and NLU only communicate via HTTP requests (Core sends a user message to the NLU, and the NLU sends back a JSON with intents and entities together with their confidences). You cannot change the behavior of the NLU based on the history of the conversation; the only thing you have is the message.

So if the only thing your user will say to express the intent “inform_X” or “inform_Y” is a number, I would create an intent called “inform_number” and put the numbers in there. What you can do, however, is have one entity called “number” but two slots called “number_X” and “number_Y” (or whatever names make sense for you, of course). Then, you can look at the tracker to see what the latest question the bot asked the user was, and set the extracted entity value to the proper slot. Something like

from rasa_core_sdk.events import SlotSet

# "number" is the value of the extracted entity; route it to the right slot
# based on the last question the bot asked.
if latest_bot_utterance == "ask_for_X":
    slot_set = SlotSet("number_X", number)
elif latest_bot_utterance == "ask_for_Y":
    slot_set = SlotSet("number_Y", number)
else:
    slot_set = None
    # put some debugging logic here in case the script fails

return [slot_set] if slot_set else []
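
For reference, here is a minimal sketch of how the same routing can be done entirely inside the form, assuming you are using rasa_core_sdk's FormAction and a single merged inform_number intent. The slot mappings below are assumptions reconstructed from the story above, not a definitive implementation:

from rasa_core_sdk.forms import FormAction


class CountEpisodeForm(FormAction):
    """Sketch: map one "number" entity onto whichever slot is currently requested."""

    def name(self):
        # must match the form name used in the stories and domain
        return "count_episode"

    @staticmethod
    def required_slots(tracker):
        return ["seenBefore", "seenPastDay", "vehicleCount",
                "countTime", "peopleCount", "partyVehics"]

    def slot_mappings(self):
        # the same "number" entity fills whichever slot the form is asking for,
        # so a single inform_number intent is enough on the NLU side
        return {
            "seenBefore": [self.from_intent(intent="affirm", value=True),
                           self.from_intent(intent="deny", value=False)],
            "seenPastDay": [self.from_intent(intent="affirm", value=True),
                            self.from_intent(intent="deny", value=False)],
            "vehicleCount": self.from_entity(entity="number", intent="inform_number"),
            "countTime": self.from_entity(entity="timeAbs"),
            "peopleCount": self.from_entity(entity="number", intent="inform_number"),
            "partyVehics": self.from_entity(entity="number", intent="inform_number"),
        }

    def submit(self, dispatcher, tracker, domain):
        # all required slots are filled; nothing else to do in this sketch
        return []

Because the form only fills the slot it is currently requesting, the ambiguity between the different inform_X intents disappears on the Core side.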

Hi Patrick, thanks for responding and for confirming what I was afraid was the answer. I had already started recreating the NLU in the way you describe by the time I came to that conclusion myself. Although I imagine it could be beneficial for some use cases, I think it’s rather unfortunate that Core and NLU are so disjoint. Thanks

Thanks for the reminder, for me and everyone else. I have created the NLU in the way you said. Once again, thanks so much.

I saw that you edited your question, and I would like to answer it, but I am still reading about forms at the moment, so I would rather not speak right away. But I had the same reaction at the beginning (why should they be disjoint?), and essentially I think it should stay like that until Natural Language Generation becomes easier to use (right now it is faster to type things by hand or to write mechanical scripts like Chatito than to use anything AI-related to generate text data… at least for the use cases I considered). The more complex your AI model is, the harder it is to train and the more data you need. If you start with a big conversational database like Google’s, everything is possible. But if you start with nothing and type data by hand, or almost by hand, keeping those things separate minimizes the amount of data you need to train and to get something up and running.
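
As a concrete illustration of the “mechanical script” idea, here is a toy sketch that generates Rasa NLU markdown examples for a merged inform_number intent. The intent name, the phrasings, and the entity annotation are assumptions made up for illustration:

import random

# Toy generator for number-based NLU training examples in Rasa markdown format.
# The templates are invented for illustration; drop the [..](number) annotation
# if the "number" entity comes from duckling instead of ner_crf.
TEMPLATES = [
    "{n}",
    "{n} of us",
    "about {n}",
    "{n} vehicles",
    "i counted {n}",
]


def generate_examples(num_examples=20, max_value=10):
    lines = ["## intent:inform_number"]
    for _ in range(num_examples):
        value = "[{}](number)".format(random.randint(1, max_value))
        lines.append("- " + random.choice(TEMPLATES).format(n=value))
    return "\n".join(lines)


if __name__ == "__main__":
    print(generate_examples())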