Rasa NLU with spaCy large default model - en_core_web_lg

(Erik) #1

Hi everyone,

I’ve got a speed issue when using Rasa NLU with the large spaCy default model. It takes about 11 seconds to respond when running through the pipeline. When I use the small spaCy model it’s only about 300-500ms. I believe this slowness is coming from loading the spaCy model every time a call is made to Rasa NLU.

I have an endpoint in a Django project that calls:

```python
from rasa_nlu.model import Interpreter

interpreter = Interpreter.load(INTENT_RECOGNITION_MODEL_DIRECTORY)
return interpreter.parse(text)
```

My NLU pipeline is as follows:

```yaml
pipeline:
  - name: "SpacyNLP"
    model: "en_core_web_lg"
  - name: "SpacyTokenizer"
  - name: "RegexFeaturizer"
  - name: "SpacyFeaturizer"
  - name: "SpacyEntityExtractor"
  - name: "custom.component"
  - name: "SklearnIntentClassifier"
```

Is there any way to speed up the load time? If not, is there a way to cache the spaCy model in memory and reuse it, rather than performing a `spacy.load()` on every call?

Thanks!

(Ella Rohm-Ensing) #2

Can I ask why you’re using the large model instead of the medium one? And what does your custom component do? The server doesn’t load the model on every response; it loads it once at startup. Do you call `spacy.load()` in your custom component code?

(Erik) #3

Sorry for the late reply. I’m using the large model because it performs better at named entity recognition of people.

The issue I was having is that I was running spacy.load() in the function call rather than at server startup. Once I changed that, everything started working great.
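For anyone hitting the same symptom, here is a minimal, runnable sketch of the load-once pattern Erik describes. The expensive loader is a stand-in (`time.sleep`); in the real endpoint the body of `get_interpreter()` would be `Interpreter.load(INTENT_RECOGNITION_MODEL_DIRECTORY)` from `rasa_nlu.model`, and the function names here are illustrative, not part of Rasa's API:

```python
import functools
import time

@functools.lru_cache(maxsize=1)
def get_interpreter():
    # Stand-in for the expensive load. In the real Django project this
    # would be:
    #   from rasa_nlu.model import Interpreter
    #   return Interpreter.load(INTENT_RECOGNITION_MODEL_DIRECTORY)
    # Interpreter.load() internally runs spacy.load(), which is the
    # slow step for en_core_web_lg.
    time.sleep(0.01)  # simulate the slow model load
    return object()   # placeholder for the loaded Interpreter

def parse_intent(text):
    # Every request reuses the cached interpreter; only the very first
    # call pays the load cost.
    interpreter = get_interpreter()
    return interpreter  # real code would: return interpreter.parse(text)

# The first call loads; the second returns the same cached object.
first = parse_intent("hello")
second = parse_intent("hi")
assert first is second
```

Alternatively, loading the interpreter at module level (so it happens once at server startup, as Ella suggests) achieves the same effect without the cache decorator.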