Can Rasa do distributed training?

Can Rasa do distributed training? For example, when the deployment runs on Kubernetes (k8s)?

Yep, I think so

Any hints on how to do that? So far I haven't managed to distribute my NLU training over multiple GPUs.

Distributed training is not supported at the moment.