Training NLU on GPU

Hi,

While training NLU, it is consuming 100% of the CPU on a machine with the following specs.

8 GB RAM, Intel i5 (7th gen), 2 GB Nvidia 940MX

I need help with setting up the GPU for training.

It would help us debug if you gave us detailed information about the steps that you took as well as your operating system.

One thing to point out, though: while DIET is able to use the GPU via TensorFlow, the other components in your NLU pipeline cannot. The CountVectorsFeaturizer cannot run faster when there’s a GPU around and will only use the CPU. On top of that, the CountVectorizer object underneath is built with scikit-learn, so it can only use one CPU core at a time. So I’m curious what else might be causing it.
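If you want to double-check whether TensorFlow (and therefore DIET) can see your GPU at all, a quick sanity check is something like the snippet below. This assumes TensorFlow 2.x, which is what recent Rasa versions use:

```python
import tensorflow as tf

# An empty list here means TensorFlow will fall back to the CPU,
# so DIET training would not use the GPU either.
print(tf.config.list_physical_devices("GPU"))
```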

Hey, thanks for the response, but I am not trying to set up the GPU anymore.

Hi… from reading this, I understand the CountVectorsFeaturizer is not viable for GPU training. Let me know whether the components in this pipeline config can use the GPU to the fullest…

```yaml
language: en
pipeline:
  - name: HFTransformersNLP
    model_weights: "bert-base-uncased"
    model_name: "bert"
  - name: LanguageModelTokenizer
  - name: LanguageModelFeaturizer
  - name: DIETClassifier
    epochs: 20
    number_of_transformer_layers: 4
    transformer_size: 256
    use_masked_language_model: True
    drop_rate: 0.25
    weight_sparsity: 0.7
    batch_size: [64, 256]
    embedding_dimension: 30
    hidden_layers_sizes:
      text: [512, 128]
```

Will LanguageModelTokenizer and LanguageModelFeaturizer work using the GPU? Thanks in advance.

What version of Rasa are you using? The HFTransformersNLP component is technically deprecated.

```
Rasa Version               : 2.7.1
Minimum Compatible Version : 2.6.0
Rasa SDK Version           : 2.7.0
Rasa X Version             : None
Python Version             : 3.8.5
Operating System           : macOS-10.15.7-x86_64-i386-64bit
```

What should I use in place of HFTransformersNLP?

As explained in the docs, you should just be able to use the LanguageModelFeaturizer directly.
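As a rough sketch of what that looks like on Rasa 2.x (the model_name/model_weights parameters move onto the featurizer; the choice of WhitespaceTokenizer is just one option, any regular tokenizer should do):

```yaml
language: en
pipeline:
  # LanguageModelTokenizer is also deprecated; use a regular tokenizer instead.
  - name: WhitespaceTokenizer
  # The featurizer now loads the transformer model itself.
  - name: LanguageModelFeaturizer
    model_name: "bert"
    model_weights: "bert-base-uncased"
  - name: DIETClassifier
    epochs: 20
```

Your existing DIET settings can stay as they are; only the HFTransformersNLP / LanguageModelTokenizer entries need to go.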