I think this is kind of the answer I was looking for, and in other systems I've actually manually implemented things like this with a "temp materialize" operator that's enabled by a "debug_run=True" flag. With the notebook thing, basically I'm trying to "step inside" the data pipeline, like how an IDE debugger might run a script line by line until an error hits and then drop you into a REPL located 'within' the code state. In the notebook I'll typically try to replicate (as closely as possible) the state of the data inside some intermediate step, and will then manually mutate the pipeline between the original and branch versions to determine how the pipeline changes relate to the data changes. I think the dream for me would be to have something that can say "the delta on line N is responsible for X% of the variance in the output", although I recognize that's probably not a well-defined calculation in many cases. But either way, at a high level my goal is to understand why my data changes, so I can be confident that those changes are legit and not an artifact of some error in the pipeline.
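For what it's worth, the "temp materialize" idea can be sketched in a few lines. This is just a toy illustration of the pattern, not any particular system's API; `run_pipeline`, `debug_run`, and the step functions are all made-up names:

```python
# Toy sketch: a pipeline runner with a debug_run flag that materializes
# a copy of the intermediate data after every step, so you can later
# "step inside" and inspect the state at any point.

def run_pipeline(data, steps, debug_run=False):
    """Apply each step in order; when debug_run is True, snapshot the
    intermediate result after every step for later inspection."""
    snapshots = []
    for step in steps:
        data = step(data)
        if debug_run:
            # Materialize a copy so later steps can't mutate the snapshot.
            snapshots.append((step.__name__, list(data)))
    return data, snapshots

# Hypothetical pipeline steps for illustration only.
def double(xs):
    return [x * 2 for x in xs]

def increment(xs):
    return [x + 1 for x in xs]

result, snaps = run_pipeline([1, 2, 3], [double, increment], debug_run=True)
# snaps now holds the data state after each step, which is roughly
# what I end up rebuilding by hand in a notebook.
```

In a real pipeline the "materialize" would probably be a write to disk or a temp table rather than an in-memory copy, but the shape is the same.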
Asserting that a set of expectations is met at multiple pipeline stages also gets pretty close, although I do think it's not entirely the same. Seems loosely analogous to the difference between unit and integration/E2E tests. Obviously I'm not going to land something with failing unit tests, but even if tests are passing, the delta may include more subtle (logical) changes which violate the assumptions of my users or integrated systems (e.g. that their understanding of the business logic is aligned with what was implemented in the pipeline).
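To make the comparison concrete, here's roughly what I mean by stage-level expectations. Again a hedged sketch with invented names (`check_stage`, the predicates), not any specific framework:

```python
# Toy sketch: asserting a set of expectations on the data between
# pipeline stages. Passing these is necessary but not sufficient --
# a logically wrong-but-plausible delta can still sail through.

def check_stage(name, data, expectations):
    """Raise if any (description, predicate) pair fails on the data."""
    for desc, predicate in expectations:
        if not predicate(data):
            raise AssertionError(f"stage {name!r}: expectation failed: {desc}")
    return data

rows = [{"price": 10.0}, {"price": 12.5}]
rows = check_stage("load", rows, [
    ("non-empty", lambda d: len(d) > 0),
    ("prices positive", lambda d: all(r["price"] > 0 for r in d)),
])
```

The checks catch gross breakage, but a delta that shifts every price by a plausible amount would still pass, which is exactly the kind of subtle logical change I'm worried about.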