As per my understanding, rules exist so that we get the same behavior every time. For example, if I have the following rules:
```yaml
- rule: greet
  steps:
  - intent: greet
  - action: utter_greet

- rule: bye
  steps:
  - intent: bye
  - action: utter_bye
```
then the following test stories should pass automatically:
```yaml
- story: greet greet
  steps:
  - intent: greet
    user: |-
      /greet
  - action: utter_greet
  - intent: greet
    user: |-
      /greet
  - action: utter_greet

- story: bye greet
  steps:
  - intent: bye
    user: |-
      /bye
  - action: utter_bye
  - intent: greet
    user: |-
      /greet
  - action: utter_greet

- story: greet bye
  steps:
  - intent: greet
    user: |-
      /greet
  - action: utter_greet
  - intent: bye
    user: |-
      /bye
  - action: utter_bye
```
but they do not in my case. The actual use case is more complex (it has slots, more than one action, etc.), which I can share if required, but that's the gist of it.
The weird thing is that I can make some of the test cases pass by adding stories that stitch the rules together, but that requires too many stories. In my case, with augmentation set to 0 a lot of cases similar to `greet bye` above fail, but with some positive augmentation they pass.
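For context, this is how I'm toggling augmentation at training time; `--augmentation` is the Rasa CLI flag for the story augmentation factor (the values below are just the two settings I compared):

```shell
# Train with story augmentation disabled - rules alone should
# cover the single-turn behavior in this setup.
rasa train --augmentation 0

# Train with a positive augmentation factor for comparison
# (20 is the documented default).
rasa train --augmentation 20
```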
What am I missing in terms of understanding of rules?
Update: I was able to make my model stable by adding more stories, i.e. stitching together stories, rules, etc. That works for now, but I have noticed that whenever a rule is not successfully applied, it is because for some reason the utterance is being predicted by TED instead of the rule. Something similar is mentioned here.
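For anyone comparing setups: rules are only applied if `RulePolicy` is actually listed in the policy configuration, and by default it takes precedence over `TEDPolicy` when a rule matches. A minimal sketch of the relevant `config.yml` section (assuming Rasa 2.x; the `TEDPolicy` hyperparameters are placeholders, not my actual values):

```yaml
policies:
# RulePolicy handles rule-based turns and, by default, outranks
# the ML policies below when a rule applies.
- name: RulePolicy
# TEDPolicy generalizes from stories; if it is winning on turns
# that a rule should cover, check that RulePolicy is present.
- name: TEDPolicy
  max_history: 5
  epochs: 100
```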