What does your compose file look like?
I misread your first question.
This thread isn’t about Rasa X at all, but I believe you should still be able to get detailed logs if you include the --debug flag in your compose. Could you post this question in a new topic so that someone more familiar with Rasa X can help you there?
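For reference, a compose service along these lines should surface the verbose logs. This is only a rough sketch; the service name, image tag, ports, and volume mapping are assumptions, not your actual setup - the important bit is appending --debug to the rasa command:
# Sketch of a compose service with debug logging enabled.
# Service name, image tag, ports, and volumes are placeholders.
version: "3.4"
services:
  rasa:
    image: rasa/rasa:1.10.14-full
    ports:
      - "5005:5005"
    volumes:
      - ./:/app
    command: >
      run
      --enable-api
      --debug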
Got it. Thanks @ganeshv.
@dakshvar22, @ridhimagarg - is it still working on 1.10.14? I’m getting a new set of errors, as mentioned in this post - Is anyone else having problems trying to download the poly-ai model?
Yes, getting some new errors.
@ganeshv What version were you using before?
I think you have to train your model again. token_pattern was moved from CountVectorsFeaturizer to the tokenizers some time back - Move token_pattern to tokenizers by tabergma · Pull Request #6073 · RasaHQ/rasa · GitHub
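If you had token_pattern configured, it now belongs on the tokenizer rather than the featurizer. Roughly like this - just a sketch, and the regex shown is only an illustrative default, not necessarily what you had:
# Sketch only: token_pattern is now set on the tokenizer, not on
# CountVectorsFeaturizer; the pattern below is just an example value.
pipeline:
  - name: WhitespaceTokenizer
    token_pattern: '(?u)\b\w\w+\b'
  - name: CountVectorsFeaturizer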
I was using rasa version 2.0.0a1-full before when I first reported the titular issue in this thread. I had also tried 1.10.13-full.
After your fix, I tried 1.10.14-full and got past the missing URL issue, but immediately ran into the token_pattern issue and that’s when the rasa container exited.
How do I train the bot in this case, given that I’m unable to get the rasa server working?
That was a different issue; I was able to fix it.
I ran into an issue as well, but was able to work around it with: pip install tensorflow_text==2.1.1
Though in my case we need it working in Rasa 2.0.
@dakshvar22 I removed the trained model and it started running the rasa server.
However, I ran into a related problem as I was unable to train the NLU model because the URL to the model failed again.
Training NLU model...
2020-09-28 08:51:08 INFO absl - Using /tmp/tfhub_modules to cache modules.
2020-09-28 08:51:08 INFO absl - Downloading TF-Hub Module 'https://github.com/PolyAI-LDN/polyai-models/releases/download/v1.0/model.tar.gz'.
Traceback (most recent call last):
  File "/opt/venv/lib/python3.7/site-packages/rasa/utils/train_utils.py", line 169, in load_tf_hub_model
    return tfhub.load(model_url)
  File "/opt/venv/lib/python3.7/site-packages/tensorflow_hub/module_v2.py", line 97, in load
    module_path = resolve(handle)
  File "/opt/venv/lib/python3.7/site-packages/tensorflow_hub/module_v2.py", line 53, in resolve
    return registry.resolver(handle)
  File "/opt/venv/lib/python3.7/site-packages/tensorflow_hub/registry.py", line 42, in __call__
    return impl(*args, **kwargs)
  File "/opt/venv/lib/python3.7/site-packages/tensorflow_hub/compressed_module_resolver.py", line 88, in __call__
    self._lock_file_timeout_sec())
  File "/opt/venv/lib/python3.7/site-packages/tensorflow_hub/resolver.py", line 415, in atomic_download
    download_fn(handle, tmp_dir)
  File "/opt/venv/lib/python3.7/site-packages/tensorflow_hub/compressed_module_resolver.py", line 83, in download
    response = self._call_urlopen(request)
  File "/opt/venv/lib/python3.7/site-packages/tensorflow_hub/compressed_module_resolver.py", line 96, in _call_urlopen
    return url.urlopen(request)
  File "/usr/local/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/local/lib/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/usr/local/lib/python3.7/urllib/request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/local/lib/python3.7/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/usr/local/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/usr/local/lib/python3.7/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
I think they moved the URL again - https://github.com/PolyAI-LDN/polyai-models/releases/download/v1.0/model.tar.gz. Could you please advise?
The model cannot be found. Will the model be hosted somewhere else?
Doesn’t look like it. Poly-AI took the model offline. I’m currently using spaCy as a workaround.
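In case it helps anyone else, this is roughly what my spaCy-based pipeline looks like. It’s only a sketch and assumes the en_core_web_md spaCy model is installed - swap in whichever model you actually use and tune the parameters for your data:
# Rough sketch of a spaCy-based pipeline in place of ConveRT; the spaCy model
# name (en_core_web_md) is an assumption - use whichever model you have installed.
language: en
pipeline:
  - name: SpacyNLP
    model: en_core_web_md
  - name: SpacyTokenizer
  - name: SpacyFeaturizer
  - name: RegexFeaturizer
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: DIETClassifier
    epochs: 100
  - name: EntitySynonymMapper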
Unfortunately, the ConveRT model was taken offline. We are working on a long-term solution and will keep you updated. In the meantime, we recommend removing ConveRT from your pipeline and using supervised embeddings instead, such as the CountVectorsFeaturizer.
For example, you could change the default config.yml created by rasa init to the following to train your model.
language: en
pipeline:
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: char_wb
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier
    epochs: 100
  - name: EntitySynonymMapper
  - name: ResponseSelector
    epochs: 100
policies:
# # No configuration for policies was provided. The following default policies were used to train your model.
# # If you'd like to customize them, uncomment and adjust the policies.
# # See https://rasa.com/docs/rasa/policies for more information.
# - name: MemoizationPolicy
# - name: TEDPolicy
#   max_history: 5
#   epochs: 100
# - name: RulePolicy
We are sorry for any inconvenience!
Please update to version 1.10.14; this bug is fixed there.