Oh yes this would be an awesome idea. This will save so much time.
You can do that now: `rasa data validate`
```yaml
stories:
- story: check_confirmation path
  steps:
  - checkpoint: check_asked_question
  - intent: greet
  - action: utter_greet
  - intent: mood_great
  - action: utter_happy
  - action: action_check_confirmation
  - intent: check_confirmation
```
I guess it would be time for a deeper integration of the action server into Rasa X. Most integrations are done via REST calls, so why not offer a generic REST call with parameters to hand over, like it is done in Postman, for example? With a mapping table for the JSON object and, obviously, some debugging and test features.
The TED policy is brilliant, and the video explaining it, Rasa Algorithm Whiteboard - TED Policy, is also exceptional. It is so enjoyable that I have watched it twice.
At minute 10:20, there is a statement that looking forward in dialogue history is a kind of cheating. I think it is reasonable for the choice of an action to consider the few next possible actions (so-called "short planning"). So I think we would see a significant benefit if the TED algorithm could be improved to look forward cleverly and effectively.
Idea for a Rasa Stack feature. I don't know if this already exists, but it would be very useful if previous conversations with a bot were stored and could be analysed to improve future dialogues.
Rasa as a Multi-Paradigm CAI Framework
Speaking of moving beyond intents: this is a ridiculously big ask, I know, but here I go anyway. I would love to see Rasa eventually able to incorporate very different, very promising architectures for conversational AI, such as Semantic Machines' dataflow approach and open-sourced models such as Soloist (also from Microsoft).
You will notice there are a lot of companies out there who give Rasa a try and then end up building something from scratch. We need to find a way to make Rasa flexible enough to handle more than one paradigm of conversational AI in the same open source project. Otherwise, competitors will arise that use a new paradigm like dataflow synthesis and become a serious problem for Rasa.
Task-Oriented Dialogue as Dataflow Synthesis
Some info about Semantic Machines’ approach:
- Blog: https://www.microsoft.com/en-us/research/blog/dialogue-as-dataflow-a-new-approach-to-conversational-ai/
- MS Semantic Machines group and video: Semantic Machines - Microsoft Research
- 2020 Paper: Task-Oriented Dialogue as Dataflow Synthesis | Transactions of the Association for Computational Linguistics | MIT Press
- Dataset: SMCalFlow
- GitHub - microsoft/task_oriented_dialogue_as_dataflow_synthesis: Code to reproduce experiments in the paper "Task-Oriented Dialogue as Dataflow Synthesis" (TACL 2020).
- Short monolog explanation: Developer Tech Minutes: Semantic Machines - YouTube
- Pub videos: Task-Oriented Dialogue as Dataflow Synthesis - Microsoft Research
- Short paper video: Task-Oriented Dialogue as Dataflow Synthesis - YouTube
- Long paper video: Task-Oriented Dialogue as Dataflow Synthesis: A Deeper Dive - YouTube
- SM Principles of CAI: Frontiers in Machine Learning: Machine Learning Conversations - YouTube
Goal Stack
Here’s a suggestion for building in some higher-level intelligence into a conversational rasa agent in a simple way.
I just read the suggested strategy for handling context switching, which I think makes sense given the purely simple-ML-based dialog management Rasa is currently designed to support.
But it seems this approach requires us to write enough stories that, in the worst-case scenario, we must create examples for all possible pairs of intents if the chatbot is supposed to handle even a single level of nesting / context switching. What if there are many possible intents and you want to allow more than a single nested context switch? The size of this example space grows exponentially.
There is a better way to structure the dialog behind the scenes that does not create an explosion of context complexity or required training examples.
I think you will find that we need to move beyond a purely ML-based dialog management strategy sooner or later. It will help to remember some techniques from other areas of AI and computer science. In agent-based AI, such as the good old cognitive modeling architectures of the 1990s like SOAR and ACT-R, intelligent decision-making was built on top of simple CS concepts like goals and search. I will save the topic of search for another day. For now, consider goals, which I think are still a very useful construct to incorporate into a dialog agent. An intent really just indicates a goal state that the user and chatbot are trying to reach in a series of steps.
A simple place to start applying goals as a first-class concept would be context switching. Simply maintain a goal stack in Rasa. If the current intent is interrupted, push the current goal (which corresponds to that intent) onto the goal stack. Once the nested goal is handled, return to the goal on top of the stack. If the nested goal is itself interrupted by a third intent before it is completed, push the second goal onto the stack and handle the third. Continue popping goals off the stack until they are all fulfilled, and so on. Now we only need to provide training examples for each goal/intent individually, not in combination with all the other intents.
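A minimal sketch of the idea in plain Python (the class and method names here are hypothetical, not part of Rasa's API; this only illustrates the push/pop bookkeeping):

```python
class GoalStack:
    """Toy goal stack for handling nested context switches.

    When a new intent interrupts the active goal, the active goal is
    pushed onto the stack; once the interrupting goal completes, the
    most recently suspended goal is resumed from the top of the stack.
    """

    def __init__(self):
        self._stack = []       # suspended goals, most recent last
        self.current = None    # goal being actively pursued

    def start(self, goal):
        """Begin a goal when no goal is active."""
        self.current = goal

    def interrupt(self, new_goal):
        """Suspend the active goal and switch to the interrupting one."""
        if self.current is not None:
            self._stack.append(self.current)
        self.current = new_goal

    def complete(self):
        """Finish the active goal and resume the last suspended one."""
        finished = self.current
        self.current = self._stack.pop() if self._stack else None
        return finished


# Example: "book_flight" interrupted by "check_weather", then "greet".
gs = GoalStack()
gs.start("book_flight")
gs.interrupt("check_weather")
gs.interrupt("greet")
assert gs.complete() == "greet"        # innermost goal finishes first
assert gs.current == "check_weather"   # second goal resumes
assert gs.complete() == "check_weather"
assert gs.current == "book_flight"     # original goal resumes last
```

The point is that each goal only needs its own training stories; the stack handles arbitrary nesting depth without combinatorial examples.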
A good example of the current requirement to supply more training examples than should be necessary is in this TED in Practice whiteboard: Rasa Algorithm Whiteboard - TED in Practice - YouTube
You mean like Tracker Store?
Unchanging test set for comparing metrics
I've noticed that as I add training data, such as intents in the nlu.yml file, my test metrics change – not necessarily because my model performs any better or worse, but because the "support" also changes. (In fact, the story support seemed to decrease when I only changed the number of epochs used to train the DIET classifier…?) It seems that training data is automatically being used as test data, and perhaps sampled randomly each time I run an experiment. While I can see value in this (a larger test set means more accurate test metrics), it also defeats one of the purposes of keeping training and test sets separate; for example, it makes it hard to do the machine-learning counterpart of true "test-driven development".
In addition to seeing the metrics on the growing test set (derived from the training set), I would also like to see metrics on a fixed test set that I define, so I can compare apples to apples – with no randomization. When I change anything about my chatbot, including the training data, I want to see what effect that has on its accuracy compared to the previous version. I cannot get a clear reading of any change in accuracy if the test set is also changing.
I see in the docs that we can change the NLU file source during testing, but I’m not sure if it’s possible to remove it…? Even if I set it to a test part of a train-test split and keep that test set fixed, many of the metrics I care about change, as well as their support.
Please increase the font size for the confusion matrices created by rasa test. The text is almost unreadable.
Actually, setting a random seed in config.yml for all the ML components seems to have fixed this problem for me.
Except when I change a hyperparameter of DIET, like the number of epochs – then the support changes again… This is driving me nuts.
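For reference, here is a sketch of what pinning the seeds might look like in a Rasa 2.x-style config.yml; `random_seed` is a real parameter of DIETClassifier and TEDPolicy, but the surrounding pipeline entries and the seed value are just illustrative:

```yaml
pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer
  - name: DIETClassifier
    epochs: 100
    random_seed: 42   # fix the seed so repeated runs are comparable

policies:
  - name: TEDPolicy
    epochs: 100
    random_seed: 42
```

With seeds fixed, metric changes between runs should reflect actual changes to the data or hyperparameters rather than random initialization.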
It would be great to have something for building number-based or letter-based menus!
Not sure if anyone at Rasa reads this thread anymore, but: a public roadmap.
A public roadmap of the project is a minimum necessity for a developer community to decide how to contribute and how to/if to adopt a project in their work.
Some insight into the state of Rasa's development:
The project appears to have slowed down significantly for about a year now, and with no public roadmap for us developers, early (but small) adopters, and contributors, this adds layers of doubt that work against building an open source community. There have been no videos on your YouTube channel for 7 months now either. A check-up on this would be a great step forward: Rasa Roadmap - #7 by HermanH
Karen, it's great that you've mentioned this apparent slowing down.
I also wonder what has become of Rasa's ambition, because for some years there were a lot of great Rasa initiatives, like the videos, the blogs, and, not least, the L3 conferences. I've also noticed that some posts on the forums remain unanswered.
Of course, when it only concerns Rasa Open Source, the community has its own responsibility, not only the Rasa company. But when it comes to major things, like Rasa Enterprise, the five levels, and end-to-end, I hope the Rasa folks will still be active. Not only for existing members and customers, but surely also for those still investigating and hesitating.
Hello @Juste and all Rasa users. May I suggest that you improve the Instagram integration, please? I have been trying to set it up for three days, and it is really, really a pain. I have to follow unofficial tutorials, and it is not working. It is not as simple as Messenger, Telegram, Slack, etc. So please, can you make it easier for us and add a tutorial on this to your docs? Thanks a lot for your great work.
I second the previous opinions. Rasa Inc. clearly focuses on features for paying parties and lets the Open Source version fall behind – without even explaining whether a Rasa Open Source 4.0 is planned or what will come in future releases. For now Rasa is still my choice, but given the opaque pricing policy and the questionable future of the source code, I am already keeping my eyes open for alternatives.
How can I make one Rasa chatbot talk to another?
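One way is to bridge the two bots through the REST channel: Rasa Open Source exposes an endpoint at `/webhooks/rest/webhook` that accepts a JSON payload with `sender` and `message` and returns the bot's replies. Here is a hedged sketch of the relay loop in Python, with the actual HTTP calls abstracted into injected send functions (the URL and port in the comment are assumptions about your deployment, and `relay` is a made-up helper, not a Rasa API):

```python
def relay(send_a, send_b, opening, max_turns=6):
    """Let two bots converse by feeding each bot's reply to the other.

    send_a / send_b: callables taking a user message (str) and
    returning that bot's reply (str). In a real deployment each would
    POST to its bot's REST channel, roughly:
        requests.post("http://localhost:5005/webhooks/rest/webhook",
                      json={"sender": "bridge", "message": text})
    (URL and port are assumptions; adjust them to your setup.)
    """
    transcript = [("user", opening)]
    message = opening
    for turn in range(max_turns):
        # Alternate speakers: bot A answers on even turns, bot B on odd.
        speaker, send = (("bot_a", send_a) if turn % 2 == 0
                         else ("bot_b", send_b))
        message = send(message)
        transcript.append((speaker, message))
        if not message:  # a bot had nothing to say; end the exchange
            break
    return transcript


# Demo with two stub bots standing in for real REST calls:
bot_a = lambda msg: f"A heard '{msg}'"
bot_b = lambda msg: f"B heard '{msg}'"
log = relay(bot_a, bot_b, "hello", max_turns=2)
assert log == [("user", "hello"),
               ("bot_a", "A heard 'hello'"),
               ("bot_b", "B heard 'A heard 'hello''")]
```

Each bot just sees an ordinary user message, so no changes to either bot are needed; only the bridge script knows there are two of them.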