Why would you say that the use of checkpoints is discouraged? This isn't mentioned or suggested in the blog post (as far as I understand). The documentation does mention caveats of checkpoints, such as reduced readability of stories and increased training time.
However, I don’t see a way to get around checkpoints for very large stories (e.g. over 30 states in the tracker). In my case, intent prediction works fine for the story parts I stitched together with checkpoints, but not with the “Creating logical breaks in stories” approach.
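For anyone following along, here is a minimal sketch of the two approaches being compared, in Rasa's YAML story format (story and checkpoint names are made up for illustration):

```yaml
stories:
# Checkpoint approach: the second story fragment can only be
# reached via the "greeted" checkpoint set by the first.
- story: greet the user
  steps:
  - intent: greet
  - action: utter_greet
  - checkpoint: greeted

- story: continue after greeting
  steps:
  - checkpoint: greeted
  - intent: ask_help
  - action: utter_help

# Logical-break approach: the fragments are independent stories,
# each starting directly from a user intent. Rasa must learn from
# the training data that one can follow the other.
- story: handle help request
  steps:
  - intent: ask_help
  - action: utter_help
```

With checkpoints, every story ending in `checkpoint: greeted` is combined with every story starting from it during training, which is what drives the explosion in tracker count when checkpoints multiply.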
I see why you don’t recommend checkpoints. I just tried to add two more checkpoints to my giant story, and the trackers went from about 7,000 to over 150,000, which isn’t feasible for training (at least on my machine).
Just to add to this: in my experience checkpoints work MUCH more reliably for connecting stories. I would say they work in 100% of cases, which is not true for logical breaks. However, as you have experienced, adding just a few more checkpoints increases training time dramatically. So as a rule of thumb: use checkpoints when you need the story to continue exactly the way you intend, and use logical breaks when it isn’t “the worst” if they don’t work out. But honestly, when is that not bad?