Is there a technique to visualise the outputs of a classifier, to see how the input examples and entities have been mapped? I came across whatlies, which seems to serve the purpose, but I did not find sufficient documentation on how to use it with Rasa training examples.
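For reference, whatlies embeds words with a backend language model and plots sets of them; the first step is getting plain example texts out of the Rasa training data. Below is a rough stdlib-only sketch of pulling example strings per intent out of a Rasa 2.x `nlu.yml` file, assuming the standard `- intent:` / `examples: |` block layout with `[word](entity)` annotations. It is an illustration, not a full YAML reader:

```python
import re

# Rough parser for the Rasa 2.x nlu.yml layout (assumption:
# "- intent: name" followed by "examples: |" and "- ..." bullet
# lines; entity annotations look like [coffee](drink)).
# Not a full YAML reader.

SAMPLE = """\
nlu:
- intent: ask_for_drink
  examples: |
    - can I have a [coffee](drink)
    - I would like some [tea](drink) please
- intent: pickup_object
  examples: |
    - please pick up the [book](object)
"""

ENTITY = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def parse_nlu(text):
    """Map intent name -> list of plain example strings."""
    intents, current = {}, None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("- intent:"):
            current = stripped.split(":", 1)[1].strip()
            intents[current] = []
        elif current and stripped.startswith("- "):
            # drop the bullet and flatten [word](entity) to word
            intents[current].append(ENTITY.sub(r"\1", stripped[2:]))
    return intents

examples = parse_nlu(SAMPLE)
print(examples["ask_for_drink"][0])  # can I have a coffee
```

The resulting strings can then be handed to whatever embedding tool you prefer (whatlies, spaCy, or the projector export).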
@Sweta Can you please give an example of your current use case?
@nik202 I am trying to build a chatbot to test intents and dialogue flow for a robot in a care-home scenario. The use cases would be:
- ‘ask for a drink’ (example entities: coffee, tea, water, milk …)
- ‘pick up an object’ lying around (example entities: book, remote, glass, etc.)
I tried visualising the generic spaCy vectors through the TensorFlow Projector, where the words around ‘drink’ were similar to the entities used in training (coffee, tea, water …); however, the words around ‘object’ were very different from the training examples. So I want to visualise the word embeddings for the training examples to understand how they are placed.
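The Projector workflow above can be sketched as follows: it loads a tab-separated vectors file plus a matching metadata file of labels. Here is a minimal stdlib example of writing those two files; the three-dimensional vectors are toy placeholders, since in practice they would come from spaCy (e.g. `nlp.vocab["coffee"].vector`) or from the trained model:

```python
import csv

# Export word vectors to the two TSV files the TensorFlow
# Embedding Projector (projector.tensorflow.org) expects:
# vectors.tsv (tab-separated floats) and metadata.tsv (labels).
# The vectors below are toy placeholders, not real embeddings.
toy_vectors = {
    "coffee": [0.9, 0.1, 0.3],
    "tea":    [0.8, 0.2, 0.4],
    "book":   [0.1, 0.9, 0.7],
}

with open("vectors.tsv", "w", newline="") as vf, \
     open("metadata.tsv", "w", newline="") as mf:
    vec_writer = csv.writer(vf, delimiter="\t")
    meta_writer = csv.writer(mf, delimiter="\t")
    for word, vec in toy_vectors.items():
        vec_writer.writerow(vec)
        meta_writer.writerow([word])
```

Loading both files through the Projector's "Load" dialog then lets you inspect how the training entities cluster.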
@Sweta Sorry for the late reply. Are you aware of rasa interactive?
@Sweta Please check this short video: RASA Interactive Learning and Conversation Visualization - YouTube
Or maybe I still don't get you, my bad. Are you creating a text- or voice-based bot?
Yes, this is possible if you run test stories: you will get the confusion matrix and see how and what was predicted. There are many other ways too. One more I would suggest is Rasalit; you can check how to use it at the given link. Let me know if you have any doubts, and community, correct me if I am wrong.
@Horizon733 Thank you for sharing this! The Rasalit link shared looks roughly like what I was looking for. I will go through it and get back if I have any queries!
@nik202 I’m trying to build a text-based bot. I have checked rasa interactive, and the visualisation there is about the stories. I was looking for something more in-depth, like the Rasalit mentioned.
Hello Sweta, right, your use case is different, and you can achieve it using rasalit, which was mentioned by Dishant @Horizon733. He even shared the GitHub link; you just need to clone it and read the detailed documentation in the repo. If you have any issues, we are here to help you further. You can even use the trained model with Python code in a Jupyter notebook to achieve your goal. Can I ask why you need to visualise if there are in-built features available with Rasa? Ref link: Testing Your Assistant. Thanks.
@Sweta Hey Sweta! Were you able to solve this issue?
@nik202 Sorry for the late reply. I tried rasalit and was able to visualise what I was looking for. Testing Your Assistant provides similar outputs for intents, and the confusion matrix is useful, but I wanted features like nlu-clusters, live-nlu, and the attention charts shown in Rasalit, in order to see what was happening to the example data inside the model.
Thank you @nik202 and Dishant for helping out with this!