I’m building a bot which worked quite well at the start, but once the stories grew to about 165 I started getting wrong predictions at random places. I wrote tests, and sometimes all tests pass (18/18), other times only some (e.g. 13/18). Basically the test outcomes are not stable. My story-writing strategy is as below:
Because of the large number of turns/paths, I use a custom action to join stories at the points where they split based on button selection. I then use a categorical slot with `influence_conversation: true` to differentiate the stories. I have replicated the use of the same intent and slot across the entire set of stories. Could this be the issue causing the tests to be inconsistent?
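To make the setup concrete, here is a minimal sketch of the pattern described above in Rasa 2.x story/domain format. The slot, intent, and action names (`path_type`, `choose_option`, `action_set_path`) are hypothetical placeholders, not the actual names from my bot:

```yaml
# stories.yml — one branch of a split driven by a button selection
stories:
  - story: split on button selection (option_a branch)
    steps:
      - intent: choose_option
      - action: action_set_path        # custom action that sets the slot from the button payload
      - slot_was_set:
          - path_type: option_a        # categorical slot differentiates the branches
      - action: utter_option_a_next
```

```yaml
# domain.yml — the categorical slot that influences the conversation
slots:
  path_type:
    type: categorical
    influence_conversation: true
    values:
      - option_a
      - option_b
```

The same `choose_option` intent and `path_type` slot are then reused across all stories, with only the slot value distinguishing the paths.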
Below is my `config.yml`:
```yaml
language: en

pipeline:
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: "char_wb"
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier
    epochs: 100
  - name: EntitySynonymMapper
  - name: ResponseSelector
    epochs: 100
  - name: FallbackClassifier
    threshold: 0.7

policies:
  - name: AugmentedMemoizationPolicy
  - name: TEDPolicy
    max_history: 5
    epochs: 100
    evaluate_on_number_of_examples: 0
    evaluate_every_number_of_epochs: 2
    tensorboard_log_directory: "./tensorboard"
    tensorboard_log_level: "epoch"
  - name: RulePolicy
```
Rasa Version: 2.2.8
Rasa SDK Version: 2.2.0
Rasa X Version: None
Python Version: 3.8.0
Operating System: macOS-10.15.1-x86_64-i386-64bit
Python Path: /Users/user/Data/bnbrproject/venv/bin/python3