Hi all, I have built a new model for a small chat application using Rasa NLU. When I serve the model for intent classification (i.e. rasa run -p 8085 --enable-api), the first request seems to take a very long time. Can anyone tell me what the problem is here? Is there any way I can overcome it?
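For reference, this is roughly how I start the server and time the parse endpoint (the example text is just a placeholder):

# start the server with the HTTP API enabled
rasa run -p 8085 --enable-api

# in another terminal: time the first request against the NLU parse endpoint
time curl -s -XPOST http://localhost:8085/model/parse \
  -H "Content-Type: application/json" \
  -d '{"text": "hello"}'

# run the same curl a second time to compare against the first request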
Can you paste your pipeline here? Have you reviewed the NLU logs at start time and when you make your first request?
@stephens thanks for your reply
I am using the config file below:
language: "en"

pipeline:
  - name: "WhitespaceTokenizer"
  - name: "RegexFeaturizer"
  - name: "deepPavlov.DeepPavlov"
  - name: "CRFEntityExtractor"
    features: [
      ["low", "title", "upper"],
      ["bias", "low", "prefix5", "prefix2", "suffix5", "suffix3", "suffix2", "upper", "title", "digit", "pattern"],
      ["low", "title", "upper"]
    ]
  - name: "EntitySynonymMapper"
  - name: "CountVectorsFeaturizer"
  - name: "EmbeddingIntentClassifier"
  - name: "DucklingHTTPExtractor"
    url: http://localhost:8000
    dimensions:
      - time
      - number
      - amount-of-money
      - distance
      - ordinal

policies:
  - name: MemoizationPolicy
  - name: KerasPolicy
  - name: MappingPolicy
And I was watching the request; it seems to be taking a lot of time to contact the NLU service itself.
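In case it's relevant, the Duckling server from the pipeline can also be timed on its own to rule it in or out (a rough check, assuming the standard Duckling /parse API on port 8000):

# send a sample query straight to the Duckling HTTP server
time curl -s -XPOST http://localhost:8000/parse \
  --data 'locale=en_GB&text=tomorrow at eight&dims=["time"]'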
@stephens @akelad @juste_petr can anyone help me with this?
I would next look at the logs. If you’re running a docker-compose setup: docker-compose logs -f
On start-up, issue your first request and see what's going on: errors, a long delay at some point, etc.
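Concretely, something along these lines (the example text is a placeholder; adjust host/port and service names to your setup):

# terminal 1: follow the logs from start-up
docker-compose logs -f

# terminal 2: issue the first request and watch what the logs do while it hangs
curl -XPOST http://localhost:8085/model/parse \
  -H "Content-Type: application/json" \
  -d '{"text": "hello"}'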