Running out of memory with a large number of stories


I’ve been attempting to train a rasa_core model with about 150k stories, but unfortunately, I’m running out of memory. I’ve tried to use a machine with 400GB of RAM and it still runs out of memory.

My question is: shouldn’t the batch_size determine how much data is loaded into RAM? Why am I having this problem even with a small batch_size? It seems that rasa_core was implemented such that all data is loaded into memory at once, but hasn’t anyone tried to train on large story datasets yet?

Thanks :slight_smile:

Hey @akari. By default Rasa does data augmentation - it takes the stories in your training data file and creates more training examples from them. Do you have the augmentation parameter set in your policy configuration? If not, can you try training the bot without augmentation by setting the flag --augmentation 0? Let me know if the issue persists with augmentation 0.
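For reference, with the rasa_core-era CLI the flag can be passed like this (treat this as a sketch - the entry point and flag names may differ between versions, so check `python -m rasa_core.train --help` for yours):

```shell
# Train the dialogue model without data augmentation.
# Paths (domain.yml, data/stories.md, models/dialogue) are placeholders.
python -m rasa_core.train \
  -d domain.yml \
  -s data/stories.md \
  -o models/dialogue \
  --augmentation 0
```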

Hi @Juste, thanks for answering. Yes, I forgot to mention it, but I’m already using augmentation 0.

But in any case, even if I were using a huge augmentation factor, shouldn’t rasa_core be able to handle the generated data by loading it into memory in batches?

Thanks :slight_smile:

Thanks for your fast reply @Juste

I’m from @akari’s team and we found our issue. We have a custom training script that passes augmentation_factor=0, but we had been passing this parameter to Agent.train() instead of Agent.load_data().

We didn’t get any warning message because train() receives its parameters via **kwargs :sweat_smile:
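To illustrate why this went unnoticed, here’s a minimal, self-contained sketch (toy classes, not the actual rasa_core API): a method whose signature accepts **kwargs silently swallows any misplaced keyword argument instead of raising a TypeError, so the parameter that actually controls augmentation is never changed.

```python
# Toy illustration of the pitfall: a **kwargs signature silently
# accepts and ignores misplaced keyword arguments.

class ToyAgent:
    def __init__(self):
        self.augmentation_factor = None

    def load_data(self, resource_name, augmentation_factor=20):
        # The parameter is only honored here.
        self.augmentation_factor = augmentation_factor
        return resource_name

    def train(self, data, **kwargs):
        # Unknown keyword arguments land in kwargs and are ignored,
        # so no error is raised for a misplaced parameter.
        return "trained"


agent = ToyAgent()
agent.load_data("stories.md")               # default augmentation applies
agent.train("data", augmentation_factor=0)  # silently ignored!
print(agent.augmentation_factor)            # -> 20, not 0
```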

What do you think about adding a warning message for unused parameters (to avoid similar issues in the future)?
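One possible way to implement such a warning (a sketch of the general technique, not the existing rasa_core code) is to pop the keyword arguments the function knows about and warn on whatever is left over:

```python
import warnings

def train(data, **kwargs):
    """Toy trainer that warns about unrecognized keyword arguments."""
    # Consume the parameters this function actually supports.
    epochs = kwargs.pop("epochs", 10)
    # Anything still in kwargs was not recognized - warn instead of
    # silently discarding it.
    if kwargs:
        warnings.warn(
            f"Ignoring unknown parameters: {sorted(kwargs)}", stacklevel=2
        )
    return f"training on {data} for {epochs} epochs"
```

With this pattern, a call like `train("stories", augmentation_factor=0)` would emit a UserWarning naming `augmentation_factor`, making the kind of mistake we hit much easier to spot.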

We can send a PR if you think it’s a good idea :]