TensorFlow Embedding / Confusion matrix

I am using the TensorFlow embedding pipeline for training my model (NLU v0.13.0a2). I have a fairly decent dataset with about 400 intents and approximately 120,000 training examples. I am encountering a strange problem. Usually, after training, I run an evaluation to check for and fix errors. Yesterday I got a fairly decent confusion matrix, with 4 intents being wrongly classified. I tried to fix this by adding just a couple of entity synonyms to increase the training examples. After retraining, my scores got really bad. I loaded the old model back and trained again, and I got a different set of results. Each time I delete the model and retrain, I get different confusion matrix results. Has anyone noticed such behaviour?

Do you get the same training accuracy on each run?

No. Even the training accuracy keeps changing.

Please check out this issue: Different confidence score on same NLU models for the same user query. · Issue #1620 · RasaHQ/rasa_nlu · GitHub
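The short version of that issue: the embedding classifier initializes its weights and shuffles the training data randomly on every run, so without a fixed seed each training produces a slightly different model. Pinning a seed in the pipeline makes runs reproducible. A minimal sketch, assuming an NLU version where `intent_classifier_tensorflow_embedding` accepts the `random_seed` option discussed in that issue (paths are illustrative):

```python
# Sketch only: train the tensorflow_embedding pipeline with a fixed seed
# so repeated runs on the same data are comparable. Assumes a rasa_nlu
# version where the embedding classifier supports `random_seed`.
from rasa_nlu.training_data import load_data
from rasa_nlu.config import RasaNLUModelConfig
from rasa_nlu.model import Trainer

nlu_config = RasaNLUModelConfig({
    "language": "en",
    "pipeline": [
        {"name": "tokenizer_whitespace"},
        {"name": "intent_featurizer_count_vectors"},
        {"name": "intent_classifier_tensorflow_embedding",
         "random_seed": 42},  # fixed seed -> repeatable weight init and shuffling
    ],
})

training_data = load_data("data/nlu.md")  # illustrative path to your NLU data
trainer = Trainer(nlu_config)
trainer.train(training_data)
model_directory = trainer.persist("./models")  # evaluate this model as usual
```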

Thank you! Will test.

Seems to work, thanks a lot. What is the impact of changing the seed value? Any guidelines as to what is recommended? I tried 5.
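For what it's worth, the seed value itself has no intrinsic meaning: it only pins down the randomness so a run is repeatable. Differences you see between seeds are a measure of run-to-run variance, not of one seed being "better". A hedged sketch for checking that, reusing the hypothetical config from above:

```python
# Sketch only: train the same data with a few seeds. With seeding in
# place, each seed reproduces its own run exactly, and the spread of
# evaluation scores across seeds shows your run-to-run noise.
from rasa_nlu.training_data import load_data
from rasa_nlu.config import RasaNLUModelConfig
from rasa_nlu.model import Trainer

training_data = load_data("data/nlu.md")  # illustrative path

for seed in (5, 42, 666):
    nlu_config = RasaNLUModelConfig({
        "language": "en",
        "pipeline": [
            {"name": "tokenizer_whitespace"},
            {"name": "intent_featurizer_count_vectors"},
            {"name": "intent_classifier_tensorflow_embedding",
             "random_seed": seed},
        ],
    })
    trainer = Trainer(nlu_config)
    trainer.train(training_data)
    # persist one model per seed, then run your usual evaluation on each
    trainer.persist("./models", fixed_model_name="nlu_seed_{}".format(seed))
```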

42 is The Answer


Thank you 🙂 Will try.

666 is also a good one

I tried various seed values. I can't really figure out a correlation between seed values and accuracy. But this number 666 is definitely interesting 😉 I need to observe it carefully. They say the devil is in the details.