Thanks @fkoerner for your thoughts and examples!
You helped me clarify some points, and you rightly remind me of some challenges I hadn't thought about.
(1) Splitting a story into small turn-taking sequences
If I understand correctly, you are suggesting splitting stories into small chunks/sequences of turn-taking, mixing non-setting-specific sequences (reusable in different scenes) with setting-specific (= scene-specific) ones. Ok.
I see the advantages of splitting, but I have a question here: by breaking a story into separate sets of sequences, don't we reduce the Rasa conversation engine's ability to predict the next step in a story?
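For reference, here is a minimal sketch of how such a split could look using Rasa checkpoints (the story names, intents and responses below are hypothetical, assuming Rasa 2.x YAML stories):

```yaml
version: "2.0"
stories:

# non-setting-specific opening, reusable in any scene
- story: generic greeting
  steps:
  - intent: greet
  - action: utter_greet
  - checkpoint: greeted        # marks where scene stories can attach

# setting-specific continuation for the post office scene
- story: post office request
  steps:
  - checkpoint: greeted        # resumes from the shared sequence
  - intent: send_parcel
  - action: utter_ask_destination
```

On the prediction question: at training time Rasa recombines checkpoint-connected fragments into full stories, but the official docs do warn against overusing checkpoints, since they multiply the generated combinations and can slow training and make behavior harder to reason about.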
(2) Using forms
For sure, forms are the way to go in a real goal-oriented situation where the bot has to collect data and process it for a contextual action. But I'm still perplexed about using forms, because the dialog authors are non-programmers (I'll possibly mentor language teachers, PhD students in linguistics, people focused on language teaching without any dev skills), so I would want to minimize coded (form) actions, which could obfuscate the dialog behavior for non-coders.
I would maybe prefer that any "slot" info collection be explicit in the turn-taking, rather than encapsulated in Python action code.
That is because the goal of the project is not just to supply a set of "conversation exercises" obtained by any (programming) means. Instead, I would like to create a growing open-data set of conversation examples (stories in a broad sense), to be enriched, updated and maintained by teachers (i.e. non-coder conversation authors).
But this is just an initial feeling.
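For what it's worth, in recent Rasa 2.x versions a simple form can be declared almost entirely in YAML, with no Python at all, which might ease the non-coder concern. A sketch, where the slot, intent and response names are made up for illustration:

```yaml
# domain.yml (sketch)
version: "2.0"
slots:
  destination:
    type: text
    influence_conversation: false
forms:
  parcel_form:
    required_slots:
      destination:
      - type: from_text        # take the user's whole message as the slot value
responses:
  utter_ask_destination:       # asked automatically while the slot is empty
  - text: "Dove vuoi spedire il pacco?"

# rules.yml (sketch): activating the form needs no custom action either
rules:
- rule: activate parcel form
  steps:
  - intent: send_parcel
  - action: parcel_form
  - active_loop: parcel_form
```

Only validation or post-processing of the collected values would still require a Python action; the slot-filling flow itself stays visible in the YAML.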
(3) Dialog interference / out-of-context user requests
Thanks for pointing out this issue; it complicates things. At first I was inclined to ignore the problem, but you are right: the user/learner could ask, for example in the post office scene, something about shopping at the grocery, and the bot should manage that gracefully in that "setting".
One could set up rules using slots, ok, but I'm perplexed because this way conversations become less and less readable.
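Just to make the readability concern concrete, a slot-conditioned rule might look like this (the `current_scene` slot and the intent/response names are hypothetical, assuming Rasa 2.x rules):

```yaml
slots:
  current_scene:
    type: categorical          # categorical, so rules can condition on its value
    values:
    - post_office
    - grocery

rules:
- rule: deflect grocery requests inside the post office scene
  condition:
  - slot_was_set:
    - current_scene: post_office
  steps:
  - intent: ask_about_grocery
  - action: utter_suggest_grocery_scene
```

It works, but a non-coder author now has to read slot definitions, conditions and rules together to understand one conversational behavior.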
I believe that the application I want to build is in fact a multi-bot scenario, where each scene is, all in all, a quite different situation, a different bot (last but not least, with a different bot persona).
Maybe I have to rethink the dialog simulator as a set of fully separated bots joined together? I confess I'm a bit confused now.
(4) A metabot conversation design approach?
The application in question is for language learners who have to study Italian at a very basic level (L2/A1). This increases the cognitive complexity too. From my previous experience working on CPIabot, a research project for the Italian public adult schools for foreigners, I know that the A1 / Pre-A1 level is very challenging, because users are really confused not only by the target language learning but also by cultural common sense, cognitive abilities, etc.
To "guide" the user through a scene, I'm thinking about making the student aware of the conversational agent as an assistant (with its own personality) that drives/helps the learner in the scenes, as if it were a real-life friend who wants to help in each situation, suggesting step by step how to respond to each interlocutor's request (the post office employee, the grocery merchant, etc.).
So one idea that came to my mind is to follow (each/some) bot utterance/request with a sort of assistant suggestion. From the UI interaction perspective (say on Telegram), what is still not clear to me is how to clearly separate the actor utterances from the assistant suggestions/tips. With an initial icon?
scene 2: At the post office
[post office employee] Good morning! How can I help you?
[assistant tip] You can: send a parcel, pick up a package, etc. Example:
[learner] I want to send a parcel
Does this make sense? Any suggestions to improve the UI/UX?
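On the icon idea: one cheap option on Telegram is to prefix each response text with an emoji marking the speaker, since Telegram renders plain-text emoji fine. A sketch, where the response names and icons are just an assumption:

```yaml
responses:
  utter_po_greet:
  - text: "🧑‍💼 Good morning! How can I help you?"
  utter_po_tip:
  - text: "💡 Tip: you can send a parcel, pick up a package, etc. For example: 'I want to send a parcel'"

stories:
- story: post office opening with tip
  steps:
  - intent: greet
  - action: utter_po_greet
  - action: utter_po_tip      # the tip simply follows the actor utterance
```

Sending the tip as a separate message (a separate `utter_` action, as above) also keeps the two voices visually apart in the chat.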
BTW, suggestions from anyone are absolutely welcome!