Can Rasa be implemented in an embedded system?

Hi there! Can anyone help me answer the following questions?

1. Can Rasa be implemented in an embedded system (such as an ARM-based embedded system)?
2. How heavy is the resource usage? We are actually planning to port Rasa Core onto an embedded system such as an ARM-based Linux system.
3. How can I get a good setup for GPU performance? What I mean is: besides the dataset training part, which parts of Rasa can be supported on the GPU? (i.e. the NLU part? the inference part? …?)

Thanks so much.

Only the TensorFlow parts are GPU-compatible, for both training and inference. The GPU should be used automatically if you install tensorflow-gpu.
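
If you want to double-check that the GPU build is actually being picked up, a quick sanity check like this should work (a minimal sketch; the exact API may differ slightly depending on your TensorFlow version):

```python
import tensorflow as tf

# Confirm this build was compiled with CUDA and that a GPU device is visible.
# On a CPU-only installation both checks report False / no devices.
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU available:", tf.test.is_gpu_available())
```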

Yes, you are right. Thanks a lot. But I need to port Rasa Core onto an embedded system, so I have to take performance into consideration. We may use a well-known embedded board such as the Nvidia Jetson Nano, but I worry that its GPU performance may not be enough for Rasa. As you said, only the TensorFlow parts are GPU-compatible for both training and inference, and the GPU should be used automatically if you install tensorflow-gpu. So, how can I get a good setup for GPU performance? What I mean is: besides the dataset training part and the inference part, which parts of Rasa can be supported on the GPU? (i.e. the pipeline settings? the component settings? or …?) Thanks so much.

What do you mean by "which part"? Only the TensorFlow parts; everything else runs on the CPU. Moreover, unless you have a huge amount of data, you won't get much of a performance gain from training these algorithms on a GPU.
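
If you do try the Jetson Nano, one setting worth looking at is GPU memory growth, since the Nano shares its RAM between CPU and GPU. A minimal sketch, assuming TensorFlow 2.x:

```python
import tensorflow as tf

# The Jetson Nano shares one pool of memory between CPU and GPU, so let
# TensorFlow allocate GPU memory incrementally instead of grabbing it all upfront.
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Optional: log device placement to see which ops actually run on the GPU;
# non-TensorFlow components (tokenizers, featurizers, etc.) stay on the CPU.
tf.debugging.set_log_device_placement(True)
```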

Thanks, Ghostvv. Actually, my dataset is not large enough yet, so the GPU benefit may not be obvious. In the future our dataset will grow, so it may be worth trying. Thank you so much.