End-to-end story testing: how to read the results in failed_test_stories.yml

Hi all,

I have been stuck on end-to-end testing for a while now without success. It already starts with how to interpret the results of a test run, which are written to failed_test_stories.yml. Currently the overall results show that 3 of 30 test stories are correct, which leaves 27 stories in failed_test_stories.yml. What I understood from the docs so far is that a # comment should indicate where the test story deviates from the prediction. But this is where my problem starts: currently I see no hashtag at all. This forum post indicates that other users share this experience as well: http://forum.rasa.com/t/failed-stories/46003/29
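For reference, here is a minimal sketch of what I expected a failed story to look like based on the docs (the story and step names are made up for illustration, they are not from my actual project):

```yaml
stories:
- story: happy path            # illustrative story name
  steps:
  - intent: greet
  # I expected a trailing comment like the one below on the step
  # where the model's prediction deviates from the test story
  - action: utter_greet  # predicted: action_default_fallback
```

In my file, however, the stories just appear verbatim, with no comments on any step.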

Versions:

  • Rasa Version : 2.8.26
  • Minimum Compatible Version: 2.8.9
  • Rasa SDK Version : 2.8.6
  • Rasa X Version : None
  • Python Version : 3.7.13
  • Operating System : Linux-4.15.0-175-generic-x86_64-with-Ubuntu-18.04-bionic

I also tested it with Python 3.8 → same results

  • Open the file: use a text editor or a YAML viewer to open the failed_test_stories.yml file. YAML is a human-readable data serialization format.
  • Identify the failed stories: the file contains a list of failed stories, each represented as a YAML entry. Each failed story has an identifier, usually a name or a number, along with its associated details.
  • Review the failure annotations: look for the annotations associated with each failed story. They provide insight into why the story failed and can include error messages, exceptions, or other information that helps diagnose the issue.
  • Investigate the cause: examine those annotations and identify the underlying cause of the failure. Common causes include incorrect behavior, missing or invalid data, integration problems, or environment issues. Understanding the root cause is crucial for effective debugging and resolution.
  • Cross-reference with the test stories: if you have the original test stories or test-case documentation, cross-reference the failed stories with them. This helps you understand the expected behavior and compare it with the observed failure (see the sketch after this list).
  • Capture additional information: if the annotations alone don't provide sufficient context, check whether the test run produced additional details, such as stack traces or logs, that can give further clues to support debugging.
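For Rasa specifically, here is a minimal sketch of that cross-referencing, assuming a default project layout (tests/test_stories.yml as the input and results/failed_test_stories.yml written by a run such as `rasa test --stories tests --out results`; all story and step names are illustrative):

```yaml
# tests/test_stories.yml — the original test story (illustrative names)
stories:
- story: ask for help
  steps:
  - intent: greet
  - action: utter_greet
  - intent: request_help
  - action: utter_offer_help
---
# results/failed_test_stories.yml — the same story after the test run;
# the trailing comment marks the step where the prediction diverged
stories:
- story: ask for help
  steps:
  - intent: greet
  - action: utter_greet
  - intent: request_help
  - action: utter_offer_help  # predicted: action_default_fallback
```

As far as I understand the docs, steps without a trailing comment matched the expected story; only mispredicted steps should carry a `# predicted:` annotation.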