Training Rasa NLU model on AWS EC2 p2.xlarge Instance

Hi, it takes 2 days to train the NLU model on my local computer. Can I use p2.xlarge instances to reduce the training time? Does Rasa NLU support p2.xlarge? Are there any limitations?

My first question is: how much data do you have that it takes two days to train? My guess is that you synthetically generated this data with a script or a tool like Chatito. That's a bad idea; it's much cleaner to use a lookup table if you have a large number of predefined values.
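For example, in the markdown training data a lookup table can just point to a file with one value per line (the entity name and file path here are placeholders):

```md
## lookup:city
data/lookup_tables/cities.txt
```

You can also list the values inline under the `## lookup:` header instead of referencing a file.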

Hi @amn41, thanks for your reply.

The Rasa NLU dataset I use contains 411 intents with around 70k utterances. This data was manually generated.

Wow, ok, that's a lot of annotated data. I would recommend the supervised embeddings pipeline, which uses TensorFlow and should scale better to data of this size. I'm not sure how many epochs you have set, but with that much data I suspect you can reduce it to a very small number without losing performance.
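As a sketch, a `config.yml` that spells out the supervised embeddings pipeline so the epochs can be overridden might look like this (the value 50 is only an illustration, not a tuned recommendation; lower it and compare evaluation results):

```yaml
language: en

pipeline:
- name: "WhitespaceTokenizer"
- name: "RegexFeaturizer"
- name: "CRFEntityExtractor"
- name: "EntitySynonymMapper"
- name: "CountVectorsFeaturizer"
- name: "CountVectorsFeaturizer"
  analyzer: "char_wb"      # character n-gram featurizer
  min_ngram: 1
  max_ngram: 4
- name: "EmbeddingIntentClassifier"
  epochs: 50               # default is 300; fewer epochs train much faster
```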

Yes, I use supervised embeddings, currently with the default number of epochs. I will try lowering it and see. Thanks a lot for replying :slight_smile:

Hi @amn41, can you please confirm whether training the NLU model on GPU instances would reduce the training time?
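For what it's worth, since the supervised embeddings pipeline runs on TensorFlow, my understanding is that installing the GPU build of TensorFlow should let training use the GPU without any Rasa-specific changes. Here is what I plan to try on the p2.xlarge, assuming CUDA/cuDNN are already set up (e.g. via an AWS Deep Learning AMI); the TensorFlow version below is a guess and must match what your installed Rasa release pins:

```sh
# Swap the CPU TensorFlow for the GPU build (1.15.0 is an assumption;
# check the version your installed Rasa release requires)
pip uninstall -y tensorflow
pip install tensorflow-gpu==1.15.0

# Train as usual; TensorFlow should pick up the GPU automatically
rasa train nlu --config config.yml --nlu data/nlu.md
```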

Hi Bharat, I am facing the same issue. Can you please share any findings you have on reducing the training time? Thanks, Prashant