I am currently training models and storing them in buckets, but I would like to persist both the NLU and Core model metrics (accuracy, F1 score, etc.) to a database for future use. I was able to evaluate the NLU part with the nlu.md and config files, but for Core I need the model and stories.md paths to get the accuracy, F1 score, etc.
So is it possible to evaluate Core as well before training the model? @Tanja
I’m not sure if I understand you correctly. You cannot evaluate a non-existing model. You always need to train the model first, before you can evaluate it.
For testing the NLU model via rasa test nlu you need to specify at least the model file and the test data. Same for testing the Core model via rasa test core.
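For reference, the invocations look roughly like this (the model and data paths below are placeholders, not from the original post):

rasa test nlu --model models/nlu-20200101.tar.gz --nlu data/nlu.md
rasa test core --model models/core-20200101.tar.gz --stories data/stories.md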
Can you maybe explain again, what you want to achieve? Thanks.
Okay. I am currently building a service which calls the model/train API to train the current data. Before training the model, I would like to store the cross-validation scores for my training data, which I was able to achieve. Now my next task is to test the Core accuracy, for which we need the model path. (Sorry, I made the silly mistake of asking the above question.)
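For reference, NLU cross-validation can be run without a trained model; with the Rasa 1.x CLI it looks roughly like this (file paths are placeholders):

rasa test nlu --nlu data/nlu.md --config config.yml --cross-validation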
I have used the gzip library to write the result returned by the API:
import gzip
import os
import tempfile

temp_dir = tempfile.mkdtemp()
model_path = os.path.join(temp_dir, result_obj.headers['filename'])

# result_obj is the response returned by the model/train API
with gzip.open(model_path, 'wb') as _zip:
    _zip.write(result_obj.body)
I don't know if the above implementation is correct, since I am not able to load this tar.gz file or test the stories using it. It gives me the error below:
raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully
So my question is: how can I convert the byte object returned by the train API back to a tar.gz file?
An example implementation of how to convert the bytes to a .tar.gz file would be really helpful.
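For anyone hitting the same error: the train endpoint already returns the model as a gzip-compressed tar archive, so passing the body through gzip.open() compresses it a second time, and tarfile can no longer read the result. A minimal sketch of the fix, assuming (as in the snippet above) that result_obj is the raw HTTP response with the model bytes in result_obj.body and the model name in the filename header:

import os
import tarfile
import tempfile

temp_dir = tempfile.mkdtemp()
model_path = os.path.join(temp_dir, result_obj.headers['filename'])

# The response body is already a .tar.gz archive; write it as-is
# in plain binary mode, without re-compressing it.
with open(model_path, 'wb') as f:
    f.write(result_obj.body)

# Sanity check: the archive should now open cleanly.
with tarfile.open(model_path, 'r:gz') as archive:
    print(archive.getnames())

The resulting model_path can then be passed to rasa test core together with the stories file to get the Core accuracy and F1 scores mentioned above.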