RASA Training Memory Issues

Hi,

Facing memory issues on the Rasa Training API.

We are building an API that accepts training data as JSON, runs training via the Rasa Train API, and produces a model.

After invoking the API several times, we observe that memory fills up and eventually leads to an ‘Out of Memory’ error.

Initial Memory Usage

function                                 27859
tuple                                    15545
dict                                     14018
list                                     8071
weakref                                  6499
cell                                     4596
getset_descriptor                        3650
type                                     3538
wrapper_descriptor                       2973
method_descriptor                        2710

Final Memory Usage

tuple                                    163248
function                                 98467
dict                                     75886
list                                     40968
weakref                                  29497
cell                                     18163
set                                      12815
type                                     10658
getset_descriptor                        9039
property                                 8374
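For reference, per-type object counts like the tables above can be gathered with the stdlib `gc` module. This is a minimal sketch; it is an assumption that the original numbers came from a tool such as `objgraph.show_most_common_types()`, which produces similar output:

```python
import gc
from collections import Counter

def most_common_types(limit=10):
    """Count live, GC-tracked objects by type name, most common first."""
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    return counts.most_common(limit)

# Print a table similar to the ones above.
for name, count in most_common_types():
    print(f"{name:<40} {count}")
```

Calling this before and after each training run makes the growth per invocation easy to see.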

I have not created a single tuple in my code, so where are these tuples coming from? And when the API is invoked a second time, it starts from the same final memory and has doubled again by the end. It keeps on increasing.
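As an aside on where tuples can come from: CPython and library code create them implicitly all the time, so a growing tuple count usually points at objects being held alive inside library code rather than tuple literals in your own. A small illustration (no tuple literal appears anywhere in it):

```python
# Tuples created without writing a tuple literal:
pairs = list({"intent": "greet", "text": "hi"}.items())  # dict items are tuples
zipped = list(zip([1, 2], ["a", "b"]))                   # zip() yields tuples

def f(*args):
    # Argument packing builds a tuple on every call.
    return args

packed = f(1, 2, 3)
```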

Just wanted to know whether Rasa takes any measures to clear unused memory after training (even though Python does automatic garbage collection), or whether anything needs to be handled on the application side.
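On the application side, one common workaround (a sketch, not Rasa-specific advice) is to run each training job in a short-lived child process: whatever the training code allocates is returned to the OS when the child exits, so repeated API calls cannot accumulate memory in the long-running server process. The body of `run_training` below is a placeholder standing in for the real training call:

```python
import multiprocessing as mp

def run_training(result_queue):
    # Placeholder for the actual training call (e.g. the Rasa Train API).
    # Simulate the large allocations a training run would make:
    _scratch = [tuple(range(100)) for _ in range(10_000)]
    # Send back only the small result, not the training state.
    result_queue.put("models/model.tar.gz")  # hypothetical model path

def train_in_subprocess():
    # Memory allocated in the child is freed when the child exits,
    # regardless of what the training code keeps referenced internally.
    queue = mp.Queue()
    proc = mp.Process(target=run_training, args=(queue,))
    proc.start()
    result = queue.get()  # read before join() to avoid a full-pipe deadlock
    proc.join()
    return result
```

On Python 3.11+ the same effect can be had with `concurrent.futures.ProcessPoolExecutor(max_tasks_per_child=1)`.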

Any suggestions would be really helpful.

Regards

Which version of Rasa are you using? We recently released 1.6.0 with NLU memory consumption improvements; give that version a try.

Hi @stephens

The current version we are using is rasa==1.4.1

I think the issue is not NLU memory consumption but memory disposal after training. To clarify: each time a user triggers the training process, memory consumption doubles, until I have to manually restart the service to release all the held-up memory.

Anyway, I will give 1.6.0 a try.

Thanks in advance