Is the dependency parse from spaCy currently used in Rasa to do better entity extraction?
At the moment, no, we don’t use it. What makes you think it would increase the accuracy of entity extraction?
I don’t, obviously. What I meant is that, in general, the idea of interpretation in a conversational setting is to perform some sort of semantic parsing of the sentence: extract all the relations and their related entities into a machine-friendly logical form, like lambda calculus or something of that sort.
Rasa’s interpreter falls short of that and simplifies the problem into predicting an intent and extracting entities. However, even in the DataCamp course taught by Alan Nichol, there is a discussion and an exercise on using the dependency parse from spaCy to better inform how to fill the correct slots with the entities. A good example would be “I want to fly from NY to SF”. To get the from and to entities right in Rasa today, you need to train a custom NER for “from_city” and “to_city”, or apply some heuristics on top of regular entity extraction, in order to correctly extract the entities and put them in the right slots. Instead, you could imagine reading the dependency relations between “NY”, “SF”, and their governing prepositions off the sentence’s dependency parse and using those to fill the slots.
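To make the heuristic concrete, here is a minimal sketch. The parse below is a hand-written stand-in for what spaCy’s dependency parser would return for “I want to fly from NY to SF” (its `token.dep_` / `token.head` attributes), and the slot names `from_city` / `to_city` and the mapping are my own illustrative choices, not anything Rasa provides:

```python
from dataclasses import dataclass

@dataclass
class Token:
    text: str
    dep: str   # dependency label, e.g. "pobj" = object of a preposition
    head: str  # text of the token's syntactic head

# Hand-built approximation of spaCy's parse for
# "I want to fly from NY to SF"
parse = [
    Token("I", "nsubj", "want"),
    Token("want", "ROOT", "want"),
    Token("to", "aux", "fly"),
    Token("fly", "xcomp", "want"),
    Token("from", "prep", "fly"),
    Token("NY", "pobj", "from"),
    Token("to", "prep", "fly"),
    Token("SF", "pobj", "to"),
]

# Hypothetical mapping from the governing preposition to a slot name
PREP_TO_SLOT = {"from": "from_city", "to": "to_city"}

def fill_slots(tokens):
    """Assign each prepositional object to the slot named by its head."""
    slots = {}
    for tok in tokens:
        if tok.dep == "pobj" and tok.head in PREP_TO_SLOT:
            slots[PREP_TO_SLOT[tok.head]] = tok.text
    return slots

print(fill_slots(parse))  # {'from_city': 'NY', 'to_city': 'SF'}
```

With real spaCy you would iterate over `doc` and use `token.dep_` and `token.head.text` the same way, so a single generic `city` entity can be routed to the right slot without training separate NER labels.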
So basically I am wondering whether there are plans to move towards real parsing of sentences, in addition to the intent classification + entity extraction strategy, to better capture the information in an utterance.