Share your ideas for projects or improvements for Rasa Stack!
Do you have an interesting idea for a chatbot, a new Rasa Stack feature, or the Conversational AI field as a whole? This is the best place to share and discuss them with other creative minds in the Rasa Community!
Not sure if this topic is the right place to ask, but I was interested in discussing / seeking advice about handling user spelling mistakes in text input to Rasa NLU.
It’s something I looked at a few months back but ended up putting to one side.
On one level, you could ignore spelling mistakes and simply include the misspellings in your training data. But I suspect that becomes unsustainable fairly quickly. Also, whilst it's okay for intents, it doesn't cope where you then need to use the entities for things (i.e. subsequent lookups).
I did have a basic spell corrector that I ran text through before sending the corrected text to Rasa. It was okay but had limitations (longer sentences, and in particular longer words, took dramatically longer to process).
Due to those limitations, I’d also wondered about only correcting entities once returned by Rasa, but then you’ve typically lost useful context. I didn’t get around to adding neighbouring words back, but that might be a fair compromise.
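To make the pre-correction idea concrete, here is a minimal sketch of that kind of step, assuming the third-party pyspellchecker package (any other corrector could be dropped in the same way; this is not necessarily what I used):

from spellchecker import SpellChecker

spell = SpellChecker()

def correct_text(text):
    """Replace each unknown word with its most likely correction."""
    words = text.split()
    unknown = spell.unknown(words)
    return " ".join(spell.correction(w) if w.lower() in unknown else w for w in words)

# The corrected string is then sent to the NLU endpoint / interpreter as usual,
# e.g. interpreter.parse(correct_text(user_message))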
Idea for a Rasa Stack feature: I don't know if this exists, but it would be very useful if previous conversations with a bot were stored and could be analysed to improve future dialogues.
That is, go through what the user said, which intents and entities were extracted and how the bot responded.
Then approve or mark errors and use that to train the system.
I’m only aware of the current online training functionality where I manually have to write everything from the start.
It seems complicated to manually go through every logged message and paste it into the nlu.md file.
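Roughly what I have in mind, as a sketch: assume the conversations have been exported from the tracker store as a JSON list of events (the file names and layout here are just illustrative), then group the user messages by predicted intent and write them out in the nlu.md format so a human can approve or correct them:

import json
from collections import defaultdict

# Group logged user messages by the intent NLU assigned to them.
with open("tracker_events.json") as f:
    events = json.load(f)

examples = defaultdict(list)
for event in events:
    if event.get("event") == "user":
        intent = event.get("parse_data", {}).get("intent", {}).get("name") or "unknown"
        examples[intent].append(event.get("text", ""))

# Write nlu.md-style Markdown so a human can approve or correct each example.
with open("nlu_candidates.md", "w") as f:
    for intent, texts in examples.items():
        f.write("## intent:{}\n".format(intent))
        for text in texts:
            f.write("- {}\n".format(text))
        f.write("\n")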
Cheers!
Rasa needs a solid and more stable fallback handler, more like a "Global Fallback Intent", not only for totally out-of-scope messages but also for inputs that are slightly similar to the existing training phrases yet still out of scope; my bot misses a lot at this point. I know tracking the confidence threshold is a solution, but it is not that stable when using spaCy, where the intent confidence scores are very erratic, and on those slightly similar but out-of-scope messages the bot usually misses a lot.
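For comparison, the current mechanism is the FallbackPolicy (nlu_threshold / core_threshold / fallback_action_name in the policy config) plus a custom fallback action. A hedged sketch of such an action against the old rasa_core_sdk API, where the utter_default template name is an assumption:

from rasa_core_sdk import Action
from rasa_core_sdk.events import UserUtteranceReverted

class ActionDefaultFallback(Action):
    """Custom fallback: ask the user to rephrase, then revert their last message."""

    def name(self):
        return "action_default_fallback"

    def run(self, dispatcher, tracker, domain):
        # utter_default is assumed to be defined in the domain's templates
        dispatcher.utter_template("utter_default", tracker)
        return [UserUtteranceReverted()]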
It would be nice to have a way to be able to deal with a subset or a single member of a list-type Slot.
So if you have a list of options and you want to go through each one or have the user pick a subset, you wouldn’t require a custom action to form a response or to set a different slot. Rather you could just do this:
in the domain file:
utter_general_answer:
- "The answer is to present {list_slot}"
And control how much of the list is presented in the stories:
slot{"number": "3"}
utter_general_answer{"number": "3"}
So that would require a new integer slot type, and the ability for Rasa Core to interpret slots passed to the utter_actions in the stories as integers corresponding to the lists used in that utter action.
So far I have instead been coding a lot of custom actions to do this (roughly like the sketch below). I'm going to use a lot of slots/utter_actions instead, but that's not very satisfying either.
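For illustration, one of those custom actions looks roughly like this (old rasa_core_sdk API; the slot names list_slot and number are placeholders for this sketch):

from rasa_core_sdk import Action

class ActionPresentListSubset(Action):
    """Utter only the first `number` items of a list-type slot."""

    def name(self):
        return "action_present_list_subset"

    def run(self, dispatcher, tracker, domain):
        items = tracker.get_slot("list_slot") or []
        number = tracker.get_slot("number")
        count = int(number) if number is not None else len(items)
        dispatcher.utter_message("The answer is to present: " + ", ".join(items[:count]))
        return []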
Simple request: can you provide a switch, namely platform, that allows the code to cope with different environments, e.g.
Linux
Windows
Raspberry Pi
etc
The reason is that I experience code incompatibility using rasa_core on Windows 10. It is a trivial path character (backslash) issue specific to Windows, but annoying enough that I need to go into the source code to fix it.
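To illustrate the kind of handling I mean (a generic sketch, not the actual rasa_core code): build paths with os.path or pathlib rather than hard-coded separators, and branch on the detected platform only where behaviour genuinely differs.

import platform
from pathlib import Path

# A path built this way works unchanged on Linux, Windows and Raspberry Pi.
model_dir = Path("models") / "current" / "dialogue"

if platform.system() == "Windows":
    # put any genuinely Windows-specific behaviour behind a check like this
    pass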
I was wondering whether we could remove most templates from Rasa domain files and have them in a separate file. This would remove the need to re-train the model just to fix a misspelling or a bad non-UTF-8 character in a slot template.
I know the featurizer uses the names of utterances as training data, but for all other actions/utterances and templates it would be a good idea to separate the files. A quick edit to domain.yml (the trained copy) after training should allow modification of templates.
Hi!
I’m going through the same problem of handling spelling mistakes in conversations. Can you please share how to best approach this problem or what solutions you came up with? I would be very grateful to know how to solve this.
It would be nice if copying and pasting messages into the rasa shell were possible (at least on Windows 10 it's not); also, pressing the up arrow to recall previous messages would make testing much easier.
It really bothers me that I need to define intent components in so many places. I have the training language in one file and the bot's response (frequently a one-off) in another file, instead of being able to define it all in one place: the intent. And then, instead of assuming I want those intents automatically, I have to register the intents in another place. I have to define an entity, its lookup values, register them both as entities and slots, and define which ones I care about for an intent (instead of gathering that from the training language inside the intent), all in different places with no real cross-dependency evaluation. This is a bit of a maintenance nightmare for complex systems and could be greatly simplified and streamlined.
One feature could be integrating a question-answering model so that the bot can answer any question asked from within a given reference document.
Kind of like a traditional question-answering system.
Also, Elasticsearch-based support for longer documents, like a book, would be awesome.
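A hedged sketch of how that could plug in as a custom action, assuming a local Elasticsearch instance with an index of document passages (the index and field names here are made up for illustration):

from elasticsearch import Elasticsearch
from rasa_core_sdk import Action

es = Elasticsearch()  # assumes a local Elasticsearch instance

class ActionAnswerFromDocs(Action):
    """Look the user's question up in an index of document passages."""

    def name(self):
        return "action_answer_from_docs"

    def run(self, dispatcher, tracker, domain):
        query = tracker.latest_message.get("text", "")
        result = es.search(index="reference_docs",
                           body={"query": {"match": {"content": query}}, "size": 1})
        hits = result["hits"]["hits"]
        if hits:
            dispatcher.utter_message(hits[0]["_source"]["content"])
        else:
            dispatcher.utter_message("Sorry, I couldn't find anything on that.")
        return []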
From a developer-productivity perspective, it would be better if all YAML syntax checks, rules/stories consistency checks, etc. were performed before the NLU and Core processing runs. Simple errors could then be caught in a more timely manner.
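In the meantime, a small pre-flight script along these lines catches the simplest class of errors before training starts (the file list is illustrative, and it only checks YAML syntax, not stories consistency):

import sys
import yaml

# Files to validate before kicking off training; adjust to your project layout.
FILES = ["domain.yml", "config.yml"]

errors = 0
for path in FILES:
    try:
        with open(path) as f:
            yaml.safe_load(f)
    except yaml.YAMLError as exc:
        print("YAML syntax error in {}: {}".format(path, exc))
        errors += 1

sys.exit(1 if errors else 0)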