Model Persist/Load from URL

Hi, I opened this as a feature request with Rasa, but I'm also posting here in case anyone knows a workaround for saving and loading a model directly from a URL.

Description of Problem: Currently the code offers only two ways to save and load a model: purely locally, or locally first and then uploaded to cloud storage. What I need is an additional method (or embedded code) that creates a temporary .tar file in memory for the RasaNLUInterpreter and uploads it directly to HDFS (Hadoop Distributed File System). Likewise, loading should work straight from the HDFS URL / .tar file.
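
For the saving side, this is a rough sketch of the kind of workaround I have in mind (the local model directory, NameNode host, and HDFS path below are placeholders for illustration): the archive is built entirely in memory with tarfile and pushed to HDFS through the standard two-step WebHDFS CREATE handshake, so nothing temporary is written to local disk.

import io
import tarfile

import requests

# Placeholder paths/hosts for illustration; adjust to your cluster.
LOCAL_MODEL_DIR = "models/nlu"                      # directory produced by training
NAMENODE = "http://192.168.0.61:50070"              # WebHDFS endpoint on the NameNode
HDFS_PATH = "/user/root/NLPEngine/rasa_model.tar.gz"

# Build the .tar.gz entirely in memory instead of writing a temp file to disk.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    tar.add(LOCAL_MODEL_DIR, arcname=".")
buf.seek(0)

# WebHDFS CREATE is a two-step PUT: the NameNode answers with a 307 redirect
# to a DataNode, and the file body is sent to that second URL.
create_url = f"{NAMENODE}/webhdfs/v1{HDFS_PATH}?op=CREATE&overwrite=true"
redirect = requests.put(create_url, allow_redirects=False)
datanode_url = redirect.headers["Location"]
response = requests.put(datanode_url, data=buf.getvalue())
response.raise_for_status()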

Overview of the Solution: TensorFlow Hub's implementation of model saving and loading is a good reference - it works with .tar.gz archives that can be saved to or loaded from any URL, even though the model consists of multiple files stored as individual components.

Examples (if relevant):

import tensorflow_hub as hub

# Loads a compressed SavedModel archive straight from a WebHDFS OPEN URL
model_url = 'http://192.168.0.61:50070/webhdfs/v1/user/root/NLPEngine/universal_sentence_encoder_large_5.tar.gz?op=OPEN'
model = hub.load(model_url)

Blockers (if relevant): Because every component in the RasaNLUInterpreter's pipeline expects the model directory passed to persist to be a local path, it is becoming too difficult to manually override every component's persist method just to save Rasa models to HDFS.
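
As a stop-gap on the loading side, the only thing I've found is to avoid touching the components at all: download the packed archive from the WebHDFS OPEN URL, unpack it into a temporary local directory, and hand that local path to the interpreter. A minimal sketch under those assumptions (the URL and directory names are placeholders; the commented-out call assumes the Rasa 1.x RasaNLUInterpreter constructor that takes a model_directory):

import io
import tarfile
import tempfile

import requests

# Placeholder WebHDFS OPEN URL for illustration.
MODEL_URL = ("http://192.168.0.61:50070/webhdfs/v1"
             "/user/root/NLPEngine/rasa_model.tar.gz?op=OPEN")

# Fetch the packed model and unpack it into a temporary local directory,
# so every component in the pipeline still sees an ordinary local path.
workdir = tempfile.mkdtemp(prefix="rasa_model_")
archive = requests.get(MODEL_URL)  # requests follows the DataNode redirect
archive.raise_for_status()
with tarfile.open(fileobj=io.BytesIO(archive.content), mode="r:gz") as tar:
    tar.extractall(workdir)

# The unpacked directory can now be passed to the interpreter as a normal
# local model directory, e.g. (Rasa 1.x style, adjust to your version):
# from rasa.core.interpreter import RasaNLUInterpreter
# interpreter = RasaNLUInterpreter(model_directory=workdir)

This works, but it still leaves the persist side unsolved without overriding each component, which is exactly what I'd like to avoid.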