Hello everyone,
I saw this GitHub - PolyAI-LDN/polyai-models, which says that ConveRT won't be available anymore. I just thought someone else might wonder why they are getting an HTTP Error 404.
Hi @magda
Thanks for pointing this out. We're aware of it and recommend using the supervised embeddings pipeline with word + char CountVectorsFeaturizer (CVF) for now.
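For reference, the explicit form of that pipeline looks roughly like the sketch below (the standard Rasa 1.x supervised_embeddings components with word- and char-level CountVectorsFeaturizer; adjust component names to your Rasa version):

language: "en"
pipeline:
- name: "WhitespaceTokenizer"
- name: "RegexFeaturizer"
- name: "CRFEntityExtractor"
- name: "EntitySynonymMapper"
- name: "CountVectorsFeaturizer"
- name: "CountVectorsFeaturizer"
  analyzer: "char_wb"
  min_ngram: 1
  max_ngram: 4
- name: "EmbeddingIntentClassifier"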
Hi again,
I also posted this on GitHub, but I'm having trouble loading the model from the URL provided in this GitHub issue: PolyAI Models - model.tar.gz - No Longer Available? · Issue #6806 · RasaHQ/rasa · GitHub. I tried changing the configuration file and the URL in convert_tokenizer.py, but no luck. Then I tried to load it as a TensorFlow Hub module, but I also get an error. Any help?
import tensorflow_hub as tfhub

# First attempt: load the model straight from the GitHub release asset URL
model_url = "https://github.com/connorbrinton/polyai-models/releases/download/v1.0/model.tar.gz"
t = tfhub.load(model_url)
Results in
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'WordpieceTokenizeWithOffsets' in binary running on iti-722. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.)
tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
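In case it helps, that NotFoundError usually means the custom tokenization ops the ConveRT graph depends on were never registered in the Python process. They live in the tensorflow_text package, so a likely fix (a sketch, assuming tensorflow_text is installed in a version matching your TensorFlow and that the release asset is still reachable) is to import it before calling tfhub.load:

import tensorflow_hub as tfhub
import tensorflow_text  # noqa: F401 -- importing this registers WordpieceTokenizeWithOffsets and the other custom ops

model_url = "https://github.com/connorbrinton/polyai-models/releases/download/v1.0/model.tar.gz"
t = tfhub.load(model_url)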
import tensorflow_hub as tfhub

# Second attempt: point tfhub.load at the GitHub release page instead
model_url = "https://github.com/connorbrinton/polyai-models/releases/tag/v1.0"
tfhub.load(model_url)
Results in
OSError: https://github.com/connorbrinton/polyai-models/releases/tag/v1.0 does not appear to be a valid module.
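The second error is expected: a release-tag URL points at an HTML page, not a module archive, so tensorflow_hub rejects it. A possible workaround (a sketch, assuming the .tar.gz asset from the first attempt is still hosted) is to download the archive, extract it, and load the resulting SavedModel directory:

import os
import tarfile
import urllib.request

import tensorflow_hub as tfhub
import tensorflow_text  # noqa: F401 -- registers the custom tokenization ops

archive_url = "https://github.com/connorbrinton/polyai-models/releases/download/v1.0/model.tar.gz"
local_dir = "convert_model"

# Download and unpack the archive once, then load the extracted SavedModel
if not os.path.isdir(local_dir):
    archive_path, _ = urllib.request.urlretrieve(archive_url)
    with tarfile.open(archive_path) as tar:
        tar.extractall(local_dir)

model = tfhub.load(local_dir)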
Hi @magda, we are still trying to get full information on the license of the ConveRT model and whether it is redistributable. The URL provided in the issue was posted by a community member, and we haven't validated whether it works. Hopefully, once we have more clarity on the model's licensing, we can verify the model URL or host it ourselves if we're allowed to. Until then, we recommend using the supervised embeddings pipeline with the count vectors featurizer. Thanks for your patience on this.
ok thanks