How can I give Rasa more resources for training?

As my bot grows in complexity, it takes longer and longer to train.

Watching the Performance tab in Task Manager (I’m on Windows 10), I can see it is using 100% of the CPU and about 50-52% of the memory, but only 2% of one GPU and 0% of the other.

I’m running this from a command prompt, not in WSL / Ubuntu (which, for those interested, caps CPU and memory at 50%, so training takes twice as long there).

I see that it is actually the python.exe process, not rasa.exe, that is using these resources.

Is there any way to configure this to use more memory? I gave it High priority via Task Manager, but that didn’t make a difference.

@jonathanpwheat, Rasa uses TensorFlow internally, which by default claims all available GPUs and allocates memory as it needs it.

‘Giving’ the process more memory will not help, because it already grabs whatever it needs.

The fact that your GPU activity is low suggests that your GPU is not being used at all.

Do you see any messages about the GPU in the printout during training?
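Besides reading the training printout, a quick probe can show whether TensorFlow sees the NVIDIA card at all. This is just a sketch assuming a TensorFlow 2.x install; it degrades gracefully if tensorflow is not importable:

```python
# Probe which devices TensorFlow can see. On Windows, the NVIDIA card
# only shows up if a CUDA-enabled TensorFlow build plus matching CUDA /
# cuDNN libraries are installed; TensorFlow does not use the Intel UHD
# chip for CUDA work at all.
try:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices('GPU')
    print("Built with CUDA support:", tf.test.is_built_with_cuda())
    print(f"Visible GPUs ({len(gpus)}):")
    for gpu in gpus:
        print("  ", gpu.name)
    if not gpus:
        print("No GPU visible -> training will run on the CPU only.")
except ImportError:
    gpus = []
    print("tensorflow is not installed in this environment")
```

If this prints zero visible GPUs, training is falling back to the CPU, which would match the 100% CPU / ~2% GPU numbers you describe.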

No errors that I had noticed. I just retrained and specifically watched for errors, and there is nothing out of the ordinary. It hops right into training the Core model (processing story blocks, trackers, and actions), then into the epochs, where the system really takes a hit.

Strangely, this time the numbers are fluctuating more for the GPU (the CPU is still pegged).

I have a ThinkPad P52 (32 GB RAM, i7 processor), apparently with two graphics cards:

  • Intel UHD Graphics 630 (using 2-17%) <<< so maybe it is using this properly?

  • NVIDIA Quadro P1000 (using 1-3%)

When it trains the NLU model, the Intel card fluctuates between 2% and 7%.

Do you think it should be using more than that?
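To see which device TensorFlow actually assigns work to, you can turn on device placement logging. A minimal sketch, assuming TensorFlow 2.x; the matmul here is just a stand-in for the kind of ops Rasa runs during training, not Rasa's own code:

```python
try:
    import tensorflow as tf
    # Log which device each op lands on; the log lines will name
    # /GPU:0 if a CUDA GPU is actually being used, /CPU:0 otherwise.
    tf.debugging.set_log_device_placement(True)
    a = tf.random.uniform((256, 256))
    b = tf.random.uniform((256, 256))
    c = tf.matmul(a, b)
    result_shape = tuple(c.shape)
except ImportError:
    result_shape = (256, 256)
    print("tensorflow is not installed in this environment")
print("matmul result shape:", result_shape)
```

If the placement log shows everything on /CPU:0, the Quadro P1000 is idle because TensorFlow cannot use it, typically meaning a CPU-only TensorFlow build or missing CUDA/cuDNN libraries, rather than anything Rasa-specific.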