What could be causing this error? When my Rasa server starts, it fails to load the model with "cannot reshape array of size 170529788 into shape (684830,300)" (full log below). Has anyone encountered this issue before? I have deployed my endpoint in a Docker container on Kubernetes.
My Dockerfile and config.yml are as follows.
Dockerfile:
FROM rasa/rasa:2.5.0-full
LABEL maintainer="Ben Jenis"
COPY ./*.yml /app/
COPY ./data /app/data/
#COPY ./tests /app/tests/
COPY ./actions /app/actions/
COPY ./requirements.txt /app/requirements.txt
COPY ./models /app/models
WORKDIR /app
USER root
RUN mkdir cache
RUN chmod -R 777 ./cache
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5005
USER 1001
CMD ["run", "--enable-api"]
config.yml:
language: en
pipeline:
  - name: SpacyNLP
    model: "en_core_web_lg"
    cache_dir: app/cache
  - name: SpacyTokenizer
  - name: SpacyFeaturizer
  - name: RegexFeaturizer
  - name: LexicalSyntacticFeaturizer
  - name: RegexEntityExtractor
  - name: CRFEntityExtractor
  - name: EntitySynonymMapper
  - name: DIETClassifier
    epochs: 100
    entity_recognition: False
    constrain_similarities: True  # should help generalize better to real-world test sets
#  - name: EntitySynonymMapper
#  - name: ResponseSelector
#    epochs: 100
#    constrain_similarities: true
  - name: FallbackClassifier
    threshold: 0.5
    ambiguity_threshold: 0.4

policies:
# No configuration for policies was provided. The following default policies were used to train your model.
# If you'd like to customize them, uncomment and adjust the policies.
  - name: MemoizationPolicy
  - name: TEDPolicy
    max_history: 5
    epochs: 100
    constrain_similarities: true
  - name: RulePolicy
    core_fallback_threshold: 0.4
    core_fallback_action_name: "action_default_fallback"
    enable_fallback_prediction: True
More detailed output of the error:
/model/parse POST parse
/conversations/<conversation_id:path>/predict POST predict
/conversations/<conversation_id:path>/tracker/events PUT replace_events
/conversations/<conversation_id:path>/story GET retrieve_story
/conversations/<conversation_id:path>/tracker GET retrieve_tracker
/status GET status
/model/predict POST tracker_predict
/model/train POST train
/conversations/<conversation_id:path>/trigger_intent POST trigger_intent
/model DELETE unload_model
/version GET version
2021-06-29 06:31:01 INFO root - Starting Rasa server on http://localhost:5005
2021-06-29 06:31:01 DEBUG rasa.core.utils - Using the default number of Sanic workers (1).
2021-06-29 06:31:01 INFO root - Enabling coroutine debugging. Loop id 92476912.
2021-06-29 06:31:01 INFO rasa.model - Loading model models/20210628-225258.tar.gz...
2021-06-29 06:31:01 DEBUG rasa.model - Extracted model to '/tmp/tmpmeay06bs'.
2021-06-29 06:31:03 DEBUG root - Could not load interpreter from 'models'.
2021-06-29 06:31:03 DEBUG rasa.core.tracker_store - Connected to InMemoryTrackerStore.
2021-06-29 06:31:03 DEBUG rasa.core.lock_store - Connected to lock store 'InMemoryLockStore'.
2021-06-29 06:31:03 DEBUG rasa.model - Extracted model to '/tmp/tmpdpwnqvqz'.
2021-06-29 06:31:04 ERROR rasa.core.agent - Could not load model due to cannot reshape array of size 170529788 into shape (684830,300).
/opt/venv/lib/python3.8/site-packages/rasa/shared/utils/io.py:97: UserWarning: The model at 'models' could not be loaded. Error: <class 'ValueError'>: cannot reshape array of size 170529788 into shape (684830,300)
/opt/venv/lib/python3.8/site-packages/rasa/shared/utils/io.py:97: UserWarning: Agent could not be loaded with the provided configuration. Load default agent without any model.
2021-06-29 06:31:04 DEBUG rasa.core.nlg.generator - Instantiated NLG to 'TemplatedNaturalLanguageGenerator'.
2021-06-29 06:31:04 INFO root - Rasa server is up and running.
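One thing I noticed while looking at the numbers in the error: the size of the array that was actually read does not correspond to a whole number of 300-dimensional vectors, which makes me suspect the cached spaCy vector data is truncated or corrupted rather than simply the wrong model. A quick sanity check on the figures from the log (the variable names here are mine, just for illustration):

```python
# Numbers taken directly from the error message above.
expected_rows, dims = 684830, 300   # shape the loader expects for the vectors
actual_size = 170529788             # number of values actually read from disk

expected_size = expected_rows * dims
print(expected_size)        # 205449000 -> far more than what was read
print(actual_size % dims)   # 188 -> not even a whole number of 300-d rows

# A non-zero remainder means the file on disk cannot be reshaped into
# (N, 300) for any N - consistent with a truncated/corrupted cache entry
# (e.g. an interrupted model download), not just a version mismatch.
```

If that reading is right, clearing the cache directory and re-downloading en_core_web_lg might be worth trying, but I'd like to understand what actually happened.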