I’m getting close to deploying a bot to production, and during final debugging I am wondering whether there are any tools, libraries, or techniques people would be willing to share that make it more transparent which story the model used to make a prediction.
The problem I am having is that as the number of stories and the amount of functionality in a bot grow, it becomes more time-consuming to track down which story caused a misfire.
I have used interactive mode and created around eight stories, covering both happy and unhappy paths, and I have carefully looked for collisions between the stories and curated them out. This leads to better, but still imperfect, results. I know I will need to spend some time tuning the pipeline settings, but before I invest a lot of time in that, I would like to know whether anyone in the community has general advice for approaching this area (e.g., troubleshooting backwards from an incorrect response to the specific story that caused the prediction).
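To make the question concrete, this is roughly the level of visibility I know how to get from the standard CLI today. I am assuming a fairly recent Rasa Open Source release, so exact commands and flags may differ on other versions, and the `tests/` path is just a placeholder for wherever your test stories live:

```bash
# Interactive learning: step through a conversation and correct action predictions
# as you go (this is how the stories mentioned above were created).
rasa interactive

# Check the training stories for structural conflicts (same conversation history,
# different next action). The "stories" subcommand exists in more recent releases.
rasa data validate stories --max-history 5

# Talk to the bot with debug logging enabled; the policy debug output shows which
# action was predicted and with what confidence, which is the main clue I have
# for tracing a misfire back to a story.
rasa shell --debug

# Evaluate the dialogue model against test stories; failed stories are written to
# the output directory so the point of divergence is visible.
rasa test core --stories tests/ --out results
```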
An ancillary question is whether this goes beyond stories and into the model file that Rasa produces, and whether there are transparency tools for inspecting model files and understanding why a prediction was made.
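The only handle I have found on that so far is that the trained model is packaged as an ordinary gzipped tarball, so it can at least be unpacked and inspected by hand (the model filename below is a placeholder for whatever your training run produced), but that does not explain why a particular prediction was made:

```bash
# List what is inside the packaged model (dialogue policy data, NLU components, etc.).
tar -tzf models/<your-model>.tar.gz

# Extract it into a directory for closer inspection.
mkdir -p unpacked_model
tar -xzf models/<your-model>.tar.gz -C unpacked_model
```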
Any advice would be much appreciated.
Thank you, Patrick