I think this might be key, in addition to some landmark tokens to quickly backtrack to. The big question is how to train such a model.
There is a recent paper from Meta that proposes a way to train a model to backtrack its own generation in order to improve the alignment of its outputs [0].
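To illustrate the decoding side of the idea, here is a minimal sketch of a generation loop with a special backtrack token and landmark positions. Everything here is an assumption for illustration: the `RESET` token name, the `model_step` interface, and treating sentence ends as landmarks are hypothetical, not the paper's actual method or API.

```python
# Sketch: decoding with a backtrack/reset token and landmark positions.
# Assumes the model was trained to emit RESET when it wants to discard
# its current draft; all names here are hypothetical.

RESET = "<RESET>"

def generate(model_step, prompt, max_steps=20):
    tokens = list(prompt)       # committed context so far
    landmarks = [len(tokens)]   # positions we can quickly backtrack to
    for _ in range(max_steps):
        tok = model_step(tokens)
        if tok == RESET:
            # discard everything after the last landmark and retry
            tokens = tokens[: landmarks[-1]]
            continue
        tokens.append(tok)
        if tok == ".":          # e.g. treat sentence ends as landmarks
            landmarks.append(len(tokens))
    return tokens
```

A scripted stub in place of a real model shows the behavior: if the "model" emits `"bad"`, then `RESET`, then `"good"` and `"."`, the final output is the prompt followed by `"good"` and `"."`, with the bad draft dropped.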
[0] https://arxiv.org/html/2409.14586v1