Core model training only uses one CPU core

My Rasa bot training takes a long time, and I noticed that only one CPU core is used during Core model training.

Training the bot takes a long time (about three hours), but it does not use all of the CPU capacity during Core model training. During NLU model training all CPU cores are used, but Core model training uses only one core. I tried running the training with the --num-threads 8 flag, but the behaviour is the same; the command I ran is sketched below.
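
For reference, this is roughly how I invoke training (a minimal sketch; I run it from the project root, and the flag value matches my 8 cores):

```sh
# Full training run (NLU + Core); --num-threads is the Rasa 2.x CLI flag
# for the maximum number of threads to use during model training.
rasa train --num-threads 8
```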

What can I do to use all CPU cores during Core model training? Can someone help me? Thank you in advance.

I have a virtual server with the following specs:

  • CPU: Intel(R) Xeon(R) Silver 4214R CPU @ 2.40GHz (8 cores)
  • RAM: 32GB

```
Rasa Version              : 2.6.3
Minimum Compatible Version: 2.6.0
Rasa SDK Version          : 2.6.0
Rasa X Version            : 0.40.1
Python Version            : 3.7.3
Operating System          : Linux-4.19.0-14-amd64-x86_64-with-debian-10.8
```

My Rasa bot config:

```yaml
language: en
pipeline:
  - name: SpacyNLP
    model: en_core_web_lg
  - name: SpacyTokenizer
  - name: RegexFeaturizer
  - name: SpacyFeaturizer
  - name: SpacyEntityExtractor
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: char_wb
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier
    epochs: 100
  - name: EntitySynonymMapper
  - name: ResponseSelector
    epochs: 100
  - name: FallbackClassifier
    threshold: 0.4
    ambiguity_threshold: 0.1
policies:
  - name: MemoizationPolicy
    max_history: 3
  - name: TEDPolicy
    max_history: 5
    epochs: 10
  - name: RulePolicy
    core_fallback_threshold: 0.4
    core_fallback_action_name: action_default_fallback
```

I'm following this issue.

@hams Try running the same training locally. And share the config you use locally!

Thanks for your help. I tried running the training locally on a laptop with Ubuntu, but the behaviour is the same: during Core model training only one core is used.