Python API: Use existing dict as training data, rather than load_data from json file

I’m working on using the Python API in a server, and when it comes time to train the NLU model I already have the training data as a Python dictionary. It doesn’t seem like I can use that directly; instead I have to save it as JSON and then load it back. This seems… sub-optimal. Am I missing something? The HTTP API seems fine just taking some JSON data, but the Python API seems to require a file, and my few minutes perusing the source didn’t yield anything.

Any advice? My plan is probably something like a `BytesIO` file-like object or a `TemporaryFile`, but I don’t like either of those solutions. I’m really hoping I’m missing something.

Hi @matthew.cavener, as you already noticed, the Python API requires a file. Our HTTP API just dumps the data it receives into a temporary file before starting the training, so internally we are working with temporary files as well. I’d say that is also the best option for you: dump the training data into a temporary file and pass that file’s path when calling the train method.
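A minimal sketch of that pattern, using only the standard library: the dict is dumped into a named temporary file, and the resulting path is what you would hand to the training call. The Rasa NLU calls themselves (`load_data`, `Trainer`) are left commented out as assumptions about the API, since the exact imports depend on your version.

```python
import json
import tempfile

# Training data already held in memory as a dict
# (toy structure, not real Rasa training data).
training_data = {"rasa_nlu_data": {"common_examples": []}}

# Dump the dict into a named temporary file so it can be passed
# to any API that expects a file path. delete=False keeps the
# file around after the context manager closes it.
with tempfile.NamedTemporaryFile(
    mode="w", suffix=".json", delete=False
) as tmp:
    json.dump(training_data, tmp)
    tmp_path = tmp.name

# Hypothetical Rasa NLU usage (assumed API, not verified here):
# from rasa_nlu.training_data import load_data
# from rasa_nlu.model import Trainer
# from rasa_nlu import config
# trainer = Trainer(config.load("nlu_config.yml"))
# trainer.train(load_data(tmp_path))

# Round-trip check: the file contains the same data we started with.
with open(tmp_path) as f:
    assert json.load(f) == training_data
```

Remember to remove the temporary file (e.g. with `os.unlink(tmp_path)`) once training has finished, since `delete=False` leaves cleanup to you.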


Thanks for getting back to me. I suppose it isn’t really a problem to use a temp file after all.