Hi guys, hope you are all doing well. I have some queries. I am building a model that needs only the NLU part. The lookup table I use for slot identification has about 25-30 lakh (2.5-3 million) records, and I also have a custom component. My pipeline is:
language: "en"
pipeline:
  - name: "WhitespaceTokenizer"
  - name: "RegexFeaturizer"
  - name: "deepPavlov.DeepPavlov"
  - name: "CRFEntityExtractor"
    features: [
      ["low", "title", "upper"],
      ["bias", "low", "prefix5", "prefix2", "suffix5", "suffix3", "suffix2", "upper", "title", "digit", "pattern"],
      ["low", "title", "upper"]
    ]
  - name: "EntitySynonymMapper"
  - name: "CountVectorsFeaturizer"
  - name: "EmbeddingIntentClassifier"
  - name: "DucklingHTTPExtractor"
    url: http://rasa-support
    timezone: UTC
    dimensions:
      - time
      - number
      - amount-of-money
      - distance
      - ordinal

policies:
  - name: MemoizationPolicy
  - name: KerasPolicy
  - name: MappingPolicy
Rasa version: 1.10.3
My queries are:
- My model occupies about 8-11 GB of RAM, which is too much for my machine to handle. Is there a way to reduce this?
- Loading the model and classifying text takes about 12-15 minutes (for the first response alone).
- Rasa takes about 10 hours to train the model. Is there a way to reduce this?
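For reference, I assume any tuning would be done through component-level overrides in the same config file; the sketch below is only illustrative (the epochs and max_iterations values are examples, not settings from my current config), but it shows where the training-time-related options would sit:

    language: "en"
    pipeline:
      - name: "CountVectorsFeaturizer"
      - name: "EmbeddingIntentClassifier"
        epochs: 300          # example value; fewer epochs trains faster, possibly at some accuracy cost
      - name: "CRFEntityExtractor"
        max_iterations: 50   # example value for the CRF optimizer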