Extending FAQ scenarios?

The response retrieval model allows simple QA scenarios. From what I've tested so far, the problem seems to be twofold.

First, the annoying one :wink: … you have to keep the edits to 4 different files in sync, and it is really hard to keep track. It would probably be better to have a single file (with a different extension, perhaps) that holds all the information. That way there would also be no need to repeat declarations.

Second … it seems that it supports one-to-one OR many-to-one relations, i.e.

Question ==> Answer
Questions ==> Answer

but not:

Question <== Answers
Questions <== Answers

i.e. training on all permutations of all Q <=> A pairs would probably cover it:

q1,q2 <=> a1,a2

q1,a1
q2,a1
q1,a2
q2,a2

What I'm saying is that the current structure supports MAPPING from multiple INTENTS/QUESTIONS to a single ACTION/RESPONSE, but not from multiple ACTIONS/RESPONSES to a single INTENT/QUESTION.
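As a rough illustration (plain Python, not tied to any Rasa API), generating the permutations above is just the cross product of the grouped questions and answers:

```python
from itertools import product

# Hypothetical grouped Q&A item: two questions mapped to two answers.
questions = ["q1", "q2"]
answers = ["a1", "a2"]

# product() yields every (question, answer) combination,
# i.e. exactly the q1/a1, q2/a1, q1/a2, q2/a2 pairs listed above.
for q, a in product(questions, answers):
    print(q, "<=>", a)
```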

I wrote this quick script to generate the files from a single file: qa2md.py (1.0 KB)

Example item:

```
====
# variables, data_types
* what are the standard data types ?
* give me list of Python supported types
- numbers, string, list, tuple, dictionary
```
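Roughly, the script parses items like the one above (`====` separator, `#` header, `*` questions, `-` answers) and spits out the corresponding markdown. A simplified sketch of that idea; the output layout here (the `faq/` intent prefix and the markdown shape) is illustrative, the real script's output may differ:

```python
#!/usr/bin/env python3
"""Sketch of a parser for the single-file Q&A format shown above."""
import sys


def parse_items(path):
    """Yield (header, questions, answers) for each ==== separated item."""
    header, questions, answers = None, [], []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("===="):
                if header:
                    yield header, questions, answers
                header, questions, answers = None, [], []
            elif line.startswith("#"):
                header = line.lstrip("# ").strip()
            elif line.startswith("*"):
                questions.append(line.lstrip("* ").strip())
            elif line.startswith("-"):
                answers.append(line.lstrip("- ").strip())
    if header:
        yield header, questions, answers


if __name__ == "__main__":
    for header, questions, answers in parse_items(sys.argv[1]):
        # Emit an NLU-style intent block followed by the response text;
        # the exact target files for the response selector may differ.
        print(f"## intent: faq/{header.replace(', ', '_')}")
        for q in questions:
            print(f"- {q}")
        print()
        for a in answers:
            print(f"  {a}")
        print()
```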

Thanks for your feedback on the response selector! I'll take it back to the team, as it's still an in-development feature :slight_smile:

@sten This sounds great. I'm just now investigating the response selector for FAQ-type things. I downloaded your script and I'll give it a whirl.