I am running an instance of Rasa on my machine, and it is using all available GPUs. Is there a way to tell Rasa to use only the CPU and not the GPU? Also, is there a way to tell Rasa which of the available GPUs it should use?
When I try to run Rasa, it gives me the error below because GPU 1 is already in use by another Rasa instance, even though GPU 2 is free.
Error:
tensorflow/stream_executor/cuda/cuda_driver.cc:175] Check failed: err == cudaSuccess || err == cudaErrorInvalidValue Unexpected CUDA error: out of memory
Makefile:24: recipe for target 'run-core' failed
make: *** [run-core] Aborted (core dumped)
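For context, since Rasa runs on TensorFlow, I assume the standard `CUDA_VISIBLE_DEVICES` environment variable (which TensorFlow honors) should control this. A sketch of what I have in mind (the `rasa run` invocation is illustrative):

```shell
# Hide all GPUs from TensorFlow, forcing CPU-only execution
export CUDA_VISIBLE_DEVICES=""

# Alternatively, expose only the second GPU (CUDA indices start at 0),
# so this process leaves GPU 0 untouched:
# export CUDA_VISIBLE_DEVICES="1"

# Then start Rasa in the same shell, e.g.:
# rasa run
echo "CUDA_VISIBLE_DEVICES='${CUDA_VISIBLE_DEVICES}'"
```

Is this the right way to do it with Rasa, or does Rasa provide its own configuration option for device selection?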