Training: How to use multiple GPUs for distributed training?

Python version: 3.9
Rasa version: 3.1.0

I have multiple GPUs and want Rasa to run distributed training across them. I set the environment variable TF_GPU_MEMORY_ALLOC="0:5120,1:5120", but the model trains only on GPU 0; GPU 1's memory is reserved but it does not participate in the actual training. How can I perform distributed training on multiple GPUs? Thanks!
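For context, here is a minimal sketch in plain TensorFlow (not Rasa's API) of the data-parallel behavior I expected; MirroredStrategy and the toy Keras model are illustrative assumptions of my own, not anything I know Rasa to run internally:

```python
# Sketch in plain TensorFlow of the multi-GPU behavior I expected.
# Assumption: Rasa does not expose this; MirroredStrategy and the toy
# model below are only for illustration.
import tensorflow as tf

# Both GPUs should be visible, matching TF_GPU_MEMORY_ALLOC="0:5120,1:5120".
print(tf.config.list_physical_devices("GPU"))

# MirroredStrategy replicates the model on every visible GPU and splits
# each batch across the replicas.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) would now train on GPU 0 and GPU 1 in parallel.
```

In my runs, Rasa's training never shows this kind of parallel utilization: GPU 1 only has its memory allocated.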
