Story Blocks never process before Interactive Training with many Checkpoints


We have the same problem: core training hangs and the console process gets killed due to running out of memory, but with the latest Rasa 2.x version.

And yes, I know we have a lot of checkpoints. Still, with Rasa 1.x we were able to train the core within 6 hours.

In V1 our pipeline used:

```yaml
  - name: SklearnPolicy
    epochs: 1
    max_history: 25
```

and we trained with:

```
rasa train -vv core --augmentation 0 --debug
```

Once it reached 130+ story blocks, processing slowed down, but it still proceeded.

Now we have switched to Rasa 2.8.12 and we never get past this point:

```
Processed story blocks:  49%|█████████████████████▊                      | 132/267 [07:52<3:13:48, 86.14s/it, # trackers=35793
```

From 132 story blocks, it takes the next 8 hours of training to reach 144 blocks; from there the estimated remaining training time grows indefinitely (up to 240 hours), and the process eventually kills the terminal due to memory.

Pipeline:

```yaml
  - name: TEDPolicy
    epochs: 1
    max_history: 25
```

We actually do not care if training takes 24 hours or longer, as long as it trains at all. How can we solve this?

Thank you :slight_smile:

We are trying to reduce the number of checkpoints by using more OR statements, although I guess the effect is the same. We are also trying to replace many checkpoints by creating logical breaks in stories, as described in Writing Conversation Data.
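To illustrate what we mean (intent and action names below are placeholders, not our real domain), here is a checkpoint-based pair of story blocks next to a single block that uses an `or` step instead:

```yaml
stories:
# Checkpoint version: two story blocks joined at a named checkpoint.
- story: greet path
  steps:
  - intent: greet
  - action: utter_greet
  - checkpoint: after_greet

- story: continue after greet
  steps:
  - checkpoint: after_greet
  - intent: ask_help
  - action: utter_help

# OR version: one story block; both intents lead to the same next action,
# so the branching is expressed inline instead of via a checkpoint.
- story: greet then help
  steps:
  - intent: greet
  - action: utter_greet
  - or:
    - intent: ask_help
    - intent: thankyou
  - action: utter_help
```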

Will this work and can this replace the modular design we had with checkpoints?
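For reference, this is how we understand the logical-break approach from the docs (again with placeholder names): instead of chaining blocks together with a checkpoint, each story ends where the conversation could plausibly stop, and the follow-up topic starts as a fully independent story:

```yaml
stories:
# The first story ends at a logical break, i.e. a point where the
# user could plausibly end the conversation...
- story: book a table
  steps:
  - intent: request_booking
  - action: utter_ask_details
  - intent: inform
  - action: utter_booking_confirmed

# ...and the follow-up topic is an independent story rather than a
# continuation block connected via a checkpoint.
- story: ask about menu
  steps:
  - intent: ask_menu
  - action: utter_menu
```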

:disappointed_relieved: Still not working: the logical breaks are not an alternative and do not really provide the modularisation I am looking for: Writing Conversation Data

The only related entry I found in the forum is How to modularize multi branching sequence - #15 by FearsomeTuna, but it does not answer this either.