Can Rasa do distributed training? For example, when deployed on Kubernetes (k8s).
Yep, I think so
Any hints on that process? So far I didn’t manage to distribute my NLU training over multiple GPUs.
Distributed training is not supported at the moment.
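Since a training run stays in a single process, about the most you can control is which GPU it lands on. A minimal sketch, assuming a TensorFlow-based NLU pipeline (which respects `CUDA_VISIBLE_DEVICES`) and placeholder paths:

```python
# Sketch: pin an NLU training run to a single GPU by restricting which
# devices TensorFlow can see. GPU index and file paths are placeholders.
import os
import subprocess

env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")  # expose only GPU 0 to this run
subprocess.run(
    ["rasa", "train", "nlu", "--config", "config.yml", "--nlu", "data/nlu.yml"],
    env=env,
    check=True,
)
```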
@Ghostvv Is it possible to do distributed training in Rasa version 2.8.6, or is it supported in any other version now?
I would also like to know that. My idea is to use a Ray cluster for multi-node distributed training, with each node running on separate hardware and hosting its own Rasa chatbot.
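Since Rasa does not split a single training run across machines, a setup like that would really be parallelizing independent trainings, one per node/bot. A rough sketch under that assumption: each Ray task shells out to the regular `rasa train nlu` CLI; the cluster address, `num_gpus=1` resource request, and all paths are placeholders for illustration.

```python
# Sketch: run one independent `rasa train nlu` job per bot/node via Ray.
# Assumes a running Ray cluster and that Rasa is installed on every worker.
import subprocess

import ray

ray.init(address="auto")  # connect to the existing Ray cluster


@ray.remote(num_gpus=1)  # reserve one GPU per training job
def train_bot(config_path: str, nlu_data_path: str, out_dir: str) -> int:
    """Train a single bot's NLU model with the standard Rasa CLI."""
    result = subprocess.run(
        ["rasa", "train", "nlu",
         "--config", config_path,
         "--nlu", nlu_data_path,
         "--out", out_dir],
        check=False,
    )
    return result.returncode


# One entry per bot/node; each training is completely independent.
jobs = [
    train_bot.remote("bots/bot_a/config.yml", "bots/bot_a/data/nlu.yml", "bots/bot_a/models"),
    train_bot.remote("bots/bot_b/config.yml", "bots/bot_b/data/nlu.yml", "bots/bot_b/models"),
]
print(ray.get(jobs))  # wait for all trainings and print their exit codes
```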