Rasa 3 memory issue while training multiple models

We are using Rasa 3 to train models for multiple domains programmatically (i.e. we are not using the standard Rasa 3 training workflow, because it only supports training a single domain and does not support training multiple domains).

By "programmatically" we mean that we have written a Python program that trains the domains one by one: we use Rasa 3 as a library and call its training function for each domain/model.

Code snippet:

from rasa.model_training import train_nlu

# To train different models, we call this function multiple times (once per domain).

nlu_model_path = train_nlu(config=config, nlu_data=training_file_path, output=agent_name, fixed_model_name=model_name, domain=domain_file)
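
For context, a simplified sketch of how we drive this in a loop. The domain names and file paths below are placeholders, not our real values:

# Illustrative loop; domain names and paths are placeholders.
from rasa.model_training import train_nlu

domains = ["domain_1", "domain_2", "domain_3", "domain_4", "domain_5"]

for domain_name in domains:
    nlu_model_path = train_nlu(
        config="config.yml",
        nlu_data=f"data/{domain_name}/nlu.yml",
        output=f"agents/{domain_name}",
        fixed_model_name=f"{domain_name}_model",
        domain=f"data/{domain_name}/domain.yml",
    )
    # Memory used by this call stays allocated before the next iteration starts.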

Issue:

We train 5 domains one by one, calling train_nlu once per domain. The problem is that a single training run consumes roughly 800-900 MB of memory, and after training finishes that memory is not released. Is there any way to release this memory so it can be reused for the next model or for retraining? A sketch of the kind of workaround we are asking about is shown below.
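
To illustrate what we mean by "release the memory": a minimal sketch of the kind of isolation we are asking about, where each call to train_nlu runs in a short-lived child process so the operating system reclaims its memory when the process exits. The helper names and paths here are hypothetical, and we have not confirmed this approach against Rasa internals:

import multiprocessing as mp

from rasa.model_training import train_nlu

def train_one(config, nlu_data, output, model_name, domain):
    # Runs inside the child process; memory it allocates is freed when the process exits.
    train_nlu(
        config=config,
        nlu_data=nlu_data,
        output=output,
        fixed_model_name=model_name,
        domain=domain,
    )

def train_in_subprocess(*args):
    # "spawn" gives each training run a fresh interpreter and a clean heap.
    ctx = mp.get_context("spawn")
    proc = ctx.Process(target=train_one, args=args)
    proc.start()
    proc.join()

if __name__ == "__main__":
    # Hypothetical paths for a single domain.
    train_in_subprocess(
        "config.yml",
        "data/domain_1/nlu.yml",
        "agents/domain_1",
        "domain_1_model",
        "data/domain_1/domain.yml",
    )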

config.yml


pipeline:
  - name: SpacyNLP
    model: en_custom_spacy_model
    case_sensitive: True
  - name: SpacyTokenizer
  - name: SpacyFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: char_wb
    min_ngram: 2
    max_ngram: 4
  - name: SpacyEntityExtractor
    dimensions: ['PERSON', 'ORG', 'GPE', 'LOC']
  - name: DIETClassifier
    epochs: 100
    entity_recognition: False
    constrain_similarities: True
    use_masked_language_model: True
    number_of_transformer_layers: 4

Other details:

Rasa version - 3.5

Python version - 3.8