I’m using a custom Arabic language model, and it’s relatively large compared to the standard spaCy models, so during training or when starting the server, memory usage increases rapidly. Is there any way to improve memory utilization in this case, especially for as long as the NLU server is up?
@akelad, your help is greatly appreciated. Thanks in advance.
Hey @A7medBahgat,
We store the language model in memory before parsing, so that usage is a necessity. What do you mean by “the memory usage increases rapidly” though? It should be fairly static after loading the model into memory.
@MetcalfeTom Thanks for the quick reply
I mean, it takes up a lot of memory because the model is large, so I was wondering if there’s some way to use the model while it stays on the hard disk, or something similar, to save some memory.
No problem!
Unfortunately that might lead to problems when running in production - the model would be constantly shifting between loaded and unloaded states (and besides, inference time would slow down too).
The `supervised_embeddings` pipeline is usually more lightweight, though. If you want to lower memory usage, I’d invite you to compare that pipeline against your current Arabic language model and see how it fares in terms of accuracy vs. cost (I’d also love to see the graph from your results if you do so).
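For reference, the `supervised_embeddings` shortcut expands to roughly the following components (a sketch based on the Rasa 1.x defaults; check the docs for your exact version). It builds features from your own training data rather than loading a pretrained language model, which is why it’s lighter on memory:

```yaml
# Sketch of the supervised_embeddings template (Rasa 1.x style);
# exact defaults may differ between versions.
language: "ar"
pipeline:
- name: "WhitespaceTokenizer"
- name: "RegexFeaturizer"
- name: "CRFEntityExtractor"
- name: "EntitySynonymMapper"
- name: "CountVectorsFeaturizer"
- name: "CountVectorsFeaturizer"   # second featurizer over character n-grams
  analyzer: "char_wb"
  min_ngram: 1
  max_ngram: 4
- name: "EmbeddingIntentClassifier"
```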
I’m really sorry for replying late; I don’t know what’s wrong, but suddenly I stopped receiving email notifications! Anyway, thanks for your reply. Here are the results of the comparison, but keep in mind that both pipelines were trained on very little data, so these results may be misleading.
And these are the pipeline configurations:
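The spaCy-based one is along these lines (a sketch, since the exact components may differ; `ar_custom_model` is a placeholder for the custom Arabic model’s name), and the second pipeline is the `supervised_embeddings` template shown above:

```yaml
# Hypothetical pretrained_embeddings_spacy-style pipeline;
# "ar_custom_model" is a placeholder, not a real package name.
language: "ar"
pipeline:
- name: "SpacyNLP"
  model: "ar_custom_model"   # custom Arabic spaCy model, loaded fully into memory
- name: "SpacyTokenizer"
- name: "SpacyFeaturizer"
- name: "RegexFeaturizer"
- name: "CRFEntityExtractor"
- name: "EntitySynonymMapper"
- name: "SklearnIntentClassifier"
```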