Training Keras hyperparameters

Hello there,

I’m running into a weird bug right now with my Rasa bot, and it seems to be linked to the training of the Keras policy, which is what we currently use (even though it will soon be deprecated). The hyperparameters for the training are listed below (see the config.yml sketch after the list):

  • epochs: 150
  • batch_size: 32
  • max_history: 5
  • evaluate_on_number_of_examples: 0
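
For reference, here is roughly what that looks like in the `policies` section of `config.yml` (a minimal sketch; I’m assuming the names above map to the usual snake_case keys for my Rasa version):

```yaml
policies:
  - name: KerasPolicy
    # key names assumed from the values listed above; check the docs for your Rasa version
    epochs: 150
    batch_size: 32
    max_history: 5
    evaluate_on_number_of_examples: 0
```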

I’ve tried almost every combination:

  • epochs from 100 to 700
  • batch_size of 32 and 64
  • max_history from 0 to 100

Additional information: almost every time we get an accuracy of 99+%, which should be great, but the bot just doesn’t follow the different stories that we have…

If anyone has an idea or an explanation, please feel free to respond!

Technically the Keras policy is already deprecated in favor of the TED policy.
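
If you want to try the replacement, switching is roughly a one-line change in `config.yml`. A minimal sketch, assuming a Rasa version in which TEDPolicy is available (older 1.x releases ship it as EmbeddingPolicy):

```yaml
policies:
  - name: TEDPolicy   # replaces KerasPolicy; keep your existing epochs/max_history as a starting point
    epochs: 150
    max_history: 5
```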

That said, it’s not clear to me what the bug actually is. I understand that you’ve been trying different combinations of settings, but what exactly is the issue? Did something go wrong with the stories?