Good day! At the moment our architecture looks like this: we have a GitLab server with two pipelines, one for building a Docker image and one for deploying it to the server. However, on every push the Rasa training step takes about half an hour. What best practices have you found with Rasa — for example, do you run training separately on a GPU server, or something similar?
Hi @emil.alasgarov, one option for performing GPU training in a CI/CD pipeline would be to set up a GPU instance that accepts requests from your pipeline, trains a model, and uploads it to a cloud storage bucket for the pipeline to pick up. AWS SageMaker, for example, lets you create a training job via an API call: Train a Model with Amazon SageMaker
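To make the idea concrete, here is a rough sketch of what the request payload for SageMaker's `CreateTrainingJob` API could look like, built as a plain dict and submitted with boto3. All specifics here — the job name, container image URI, IAM role ARN, bucket name, and instance type — are placeholder assumptions, not values from this thread; the container would be one you build yourself to run `rasa train` and write the model to `/opt/ml/model`.

```python
import json

def build_training_job_request(job_name, image_uri, role_arn, bucket):
    """Build a request payload for SageMaker's CreateTrainingJob API.

    All argument values are hypothetical placeholders -- substitute
    your own job name, ECR image, IAM role, and S3 bucket.
    """
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            # A custom container that runs `rasa train` and writes the
            # resulting model to /opt/ml/model.
            "TrainingImage": image_uri,
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "training",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                # Pipeline uploads the Rasa training data here before the call.
                "S3Uri": f"s3://{bucket}/rasa/data/",
                "S3DataDistributionType": "FullyReplicated",
            }},
        }],
        # SageMaker packages /opt/ml/model as model.tar.gz and uploads it
        # here, where the deploy pipeline can pick it up.
        "OutputDataConfig": {"S3OutputPath": f"s3://{bucket}/rasa/models/"},
        "ResourceConfig": {
            "InstanceType": "ml.p3.2xlarge",  # single-GPU instance
            "InstanceCount": 1,
            "VolumeSizeInGB": 30,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

request = build_training_job_request(
    "rasa-train-example",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/rasa-train:latest",
    "arn:aws:iam::123456789012:role/SageMakerTrainingRole",
    "my-rasa-artifacts",
)
print(json.dumps(request["OutputDataConfig"]))
# In the CI job you would then submit it, e.g.:
#   boto3.client("sagemaker").create_training_job(**request)
```

The nice property of this setup is that the GitLab pipeline only fires the API call and polls for completion, so the GitLab runner itself never needs a GPU.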