I have been trying to load a rasa_nlu model trained on one machine onto another. However, I get a
Segmentation fault (core dumped). Why would that be? Is there a way I can load a pre-trained model? The whole thing works perfectly if I freshly train the model on the second machine, but I want to be economical and avoid training the model on every machine. Is that possible?
Thanks in advance.
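One low-effort sanity check when copying a trained model between machines (not mentioned in the thread, just a suggestion): compare checksums of the model archive on both machines to rule out a corrupted transfer. A minimal sketch in Python; the model path is hypothetical:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Run on both machines and compare the printed digests; a mismatch
# means the archive was corrupted in transfer, not a Rasa problem.
# Example: print(sha256_of("models/nlu-model.tar.gz"))  # hypothetical path
```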
Yes, that should work without any issues. Can you provide the whole log file, please? Maybe your machine is running out of memory? What pipeline are you using?
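To quickly answer the out-of-memory question before a training run, physical memory can be checked from Python itself. A Linux-only sketch using `os.sysconf` (this is a generic check, not a Rasa API):

```python
import os

def mem_gib():
    """Rough total and currently available physical RAM in GiB (Linux)."""
    page = os.sysconf("SC_PAGE_SIZE")
    total = page * os.sysconf("SC_PHYS_PAGES") / 2**30
    avail = page * os.sysconf("SC_AVPHYS_PAGES") / 2**30
    return total, avail

# If `avail` is small before training even starts, an OOM-triggered
# crash during EmbeddingIntentClassifier training becomes plausible.
```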
I am facing a similar issue with only around 5000 utterances. I am using a custom pipeline. The segmentation fault occurs after training of the last component, EmbeddingIntentClassifier, in my Docker container.
Hey @pankti23 and @labeebee, any updates on this issue?
I’m getting the same error.
A segmentation fault is a C/C++-level error that occurs when a process accesses memory it is not allowed to, for example by trying to modify read-only memory. In my case, the error came up while training the core model on my Ubuntu 16.04 server with the currently latest Rasa version, 1.7.1. Since it has something to do with memory access, I assumed it could be sorted out by switching to superuser. So I first ran
“sudo su”, then tried to train the model again, and it worked!
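Rather than guessing at the cause, Python's built-in faulthandler module can dump a traceback at the moment of the crash, showing which native extension (TensorFlow, gym, etc.) actually segfaulted. A small sketch, assuming training is launched from a Python entry point:

```python
import faulthandler
import sys

# Dump the Python traceback of every thread to stderr if the
# interpreter receives SIGSEGV, SIGFPE, SIGABRT, or SIGBUS.
faulthandler.enable(file=sys.stderr, all_threads=True)

# ...then run the training as usual from here.
```

For a CLI invocation, setting the environment variable `PYTHONFAULTHANDLER=1` before running the training command has the same effect without touching any code.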
Hey Pranjal, I got the same error with the same configuration as yours while training the model.
When I used sudo su, the model got trained.
But then when I used rasa shell, it gave the same segmentation fault (core dumped) error.
Could you please help me with this?
The superuser solution doesn't seem to work in every case. It seems to be an issue with gym. After uninstalling gym and reinstalling a previous version, the segmentation fault is resolved:
pip uninstall gym
pip install gym==0.15.4
But then it started giving a memory error.
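To confirm the downgrade actually took effect in the environment Rasa runs in, the installed version can be checked at runtime. A sketch using the standard library (`importlib.metadata` needs Python 3.8+; on older interpreters `pkg_resources.get_distribution` works similarly, and the helper name here is made up):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Example check against the pin suggested above:
# assert installed_version("gym") == "0.15.4"
```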
This worked for me. I didn’t get any memory error.
This works for me too! It seems like this is the solution. However, after gym was uninstalled, I was asked to reinstall pip. So if anyone has the same problem, please follow this instruction to reinstall pip.