Impact of training loss on Rasa Core chatbot performance

Hey.

My bot started making wrong predictions after an increase in the number of stories, to about 100 stories now. The stories have multiple paths, and a categorical slot checked via `slot_was_set` decides which path to take. After running `rasa run` with debug logging, I realized the path determined by `slot_was_set` was being predicted by the TEDPolicy. After increasing the TEDPolicy epochs from 100 to 200, the prediction seems to work fine. I also observed a decrease in `t_loss` in the core model, although accuracy remained the same. What is the impact of `t_loss` on prediction? Does the TEDPolicy require more epochs as the number of stories increases? Although the prediction has improved, the training time has increased significantly.
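For context, the setup looks roughly like the sketch below. The slot, intent, and action names are made up for illustration (Rasa 2.x-style YAML), not my actual domain; it just shows a categorical slot with stories that branch on `slot_was_set`:

```yaml
# domain.yml (illustrative) - a categorical slot that drives the branching
slots:
  account_type:
    type: categorical
    values:
      - premium
      - basic
```

```yaml
# stories.yml (illustrative) - two paths that diverge on slot_was_set
stories:
- story: premium support path
  steps:
  - intent: request_support
  - action: action_check_account   # custom action that sets the slot
  - slot_was_set:
    - account_type: premium
  - action: utter_premium_support

- story: basic support path
  steps:
  - intent: request_support
  - action: action_check_account
  - slot_was_set:
    - account_type: basic
  - action: utter_basic_support
```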

| Epochs | t_loss | Accuracy | Training time | Notes |
| --- | --- | --- | --- | --- |
| 100 | 0.523 | 1.000 | 6 minutes | prediction of the path to take was wrong |
| 150 | 0.469 | 0.999 | 10 minutes | |
| 200 | 0.424 | 0.999 | 13 minutes | |

My config.yml file:
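The relevant part is the TEDPolicy entry in the policies section, which looks roughly like this (a minimal sketch, not my exact file; other policies and settings omitted):

```yaml
policies:
  # other policies omitted; only the TEDPolicy entry is relevant here
  - name: TEDPolicy
    epochs: 200   # raised from 100 in the experiments above
```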

@akelad, could you please help out with this question?

Hi @Ian! Do you get any story inconsistencies when you run `rasa data validate`?

Hey @amn41, no inconsistencies at all.