RASA consists of Rasa NLU + Rasa Core. I have experimented with it and understand parts of it. I tried it in a sample project, and it works well.
I plan to take it to the next level: I want to build an FAQ system on the RASA stack with the help of the “tensorflow” backend.
I have over 1200 pairs of questions and answers. First, NLU would take the role of understanding and classifying the intent, along with entity extraction. Second, it would pass the JSON response to Rasa Core, where the answer is mapped and returned to the user. It sounds simple, but when I dug into RASA, it works differently: normally, Rasa Core responds to the user based on pre-defined stories together with “utter_” templates. Pre-defined stories are fine, but only for a small dataset, since we have to write them manually.
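To make the flow concrete, here is a minimal sketch (plain Python, not Rasa itself) of the second step: taking the kind of JSON an NLU parse returns and mapping the intent name to a canned answer, which is essentially what hand-written stories plus “utter_” templates do. The intent names and answer strings are hypothetical examples.

```python
# Sketch of mapping an NLU parse result to an answer.
# The intent names and answers below are made-up examples.
import json

# Simplified example of the JSON an NLU parse step might return.
nlu_response = json.loads("""
{
  "intent": {"name": "ask_refund_policy", "confidence": 0.92},
  "entities": []
}
""")

# Hand-written intent -> answer mapping, i.e. what pre-defined stories
# and "utter_" templates encode for a small FAQ.
answers = {
    "ask_refund_policy": "You can request a refund within 30 days.",
    "ask_opening_hours": "We are open 9am-6pm on weekdays.",
}

def respond(parse: dict) -> str:
    """Look up the answer for the classified intent, with a fallback."""
    return answers.get(parse["intent"]["name"], "Sorry, I don't know that one.")
```

This works, but every new Q/A pair means another intent, another template, and another story entry, which is exactly the manual-mapping problem described below.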
How do you deal with a dataset or knowledge base that grows larger, say 1000+ or 5000+ pairs? We cannot map them manually. I have looked around but have not found a proper way to handle this yet.
Previously, I used a retrieval model: scikit-learn's TfidfVectorizer (bag of words) together with cosine similarity to find the index of the most similar question, and then selected the answer by that index. But this kind of solution is not effective, since the meaning gets lost, among other problems.
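For reference, the retrieval approach I mean can be sketched in pure Python as follows: raw bag-of-words counts plus cosine similarity (scikit-learn's TfidfVectorizer applies the same idea with TF-IDF weighting). The Q/A pairs are made-up examples; the sketch also shows the weakness, because it only matches on surface word overlap.

```python
# Bag-of-words + cosine similarity retrieval, as described above.
# The FAQ pairs are hypothetical examples.
import math
from collections import Counter

faq = [
    ("how do I reset my password", "Click 'Forgot password' on the login page."),
    ("what are your opening hours", "We are open 9am-6pm on weekdays."),
]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the answer of the most similar stored question."""
    qv = Counter(query.lower().split())
    scores = [cosine(qv, Counter(q.lower().split())) for q, _ in faq]
    best = max(range(len(faq)), key=lambda i: scores[i])
    return faq[best][1]
```

A paraphrased query with no shared words (e.g. "when do you open") scores zero against every stored question here, which is the meaning-loss problem I ran into.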
Does anyone have a good solution for this?