My simple Rasa chatbot is taking a lot of memory. How can I reduce it?
Can you please provide us with a few specifics?
- How much memory does it take?
- What is your config (policies, NLU pipeline)?
- Which version are you using?
- Does this happen during training or inference?
- What machine do you have?
- Which language model do you use?
When I serve the bot it takes 1.1 GB of memory on my system. My config is:

```yaml
pipeline:
- name: "nlp_spacy"
- name: "tokenizer_whitespace"
- name: "intent_entity_featurizer_regex"
- name: "ner_crf"
- name: "ner_synonyms"
- name: "intent_featurizer_count_vectors"
- name: "intent_classifier_tensorflow_embedding"
  intent_tokenization_flag: true
  intent_split_symbol: "+"
```

I am using Ubuntu 18, rasa-nlu = 0.14.6, rasa-core = 0.13.0a3, and the spaCy English language model.
spaCy language models are huge (check the sizes here). With nlp_spacy, the language model always has to be loaded into memory. However, I don't think you even need the nlp_spacy component in your pipeline. Depending on your vocabulary, intent_classifier_tensorflow_embedding can also take a lot of memory; you can lower its requirements by specifying smaller vocabularies / neural nets in Components (note that this will probably also reduce the accuracy of the model).
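For reference, a spaCy-free version of your pipeline might look like the sketch below. The component names match your existing config; the `hidden_layers_sizes_*` values are illustrative assumptions (smaller than the defaults) that you would tune against your own data:

```yaml
pipeline:
- name: "tokenizer_whitespace"
- name: "intent_entity_featurizer_regex"
- name: "ner_crf"
- name: "ner_synonyms"
- name: "intent_featurizer_count_vectors"
- name: "intent_classifier_tensorflow_embedding"
  intent_tokenization_flag: true
  intent_split_symbol: "+"
  # Illustrative, smaller-than-default network sizes to trim memory;
  # expect some loss of accuracy. Tune on your own data.
  hidden_layers_sizes_a: [128]
  hidden_layers_sizes_b: []
```

Dropping nlp_spacy means the spaCy model never gets loaded, which should remove most of that 1.1 GB footprint; retrain and compare your evaluation numbers before and after to see what accuracy you give up.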
In general, 1.1 GiB does not sound too bad to me, but that probably depends on the use case.
thanks a lot…