Let's say we have the following Haskell definitions:
    error msg = undefined -- "throwing an error"
    undefined = undefined -- infinite loop
In this setup, there's no way for us to tell, under any evaluation strategy, whether or not an expression evaluates to _|_. Any code we might write to "check" for this would have to run "after" an infinite loop finishes, which is impossible. Hence it's impossible to write an expression which, under different evaluation strategies, normalises to two distinct non-_|_ values: either it always produces the same value, or it sometimes produces one value and sometimes _|_ (depending on the strategy), or it always produces _|_ regardless of strategy.

If we can distinguish between "different _|_s", e.g. by catching some of them as exception values, then we can write an expression which reduces to different non-_|_ values depending on the evaluation strategy, and hence we lose confluence (the weaker form; we already lost the stronger Coq/Agda form by having _|_ in the first place).
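As a concrete sketch of that confluence loss, here is a small GHC program (using `Control.Exception`, which goes beyond the two definitions above) where catching an error makes "which _|_?" observable. Which message gets caught depends on which addend the implementation forces first, something the language does not pin down:

```haskell
import Control.Exception (ErrorCall (..), evaluate, try)

main :: IO ()
main = do
  -- Both addends are _|_, but distinguishable _|_s: once they can be
  -- caught, the result depends on which one is forced first, i.e. on
  -- the evaluation strategy.
  r <- try (evaluate (error "left" + error "right" :: Int))
  case r of
    Left (ErrorCall msg) -> putStrLn ("caught: " ++ msg)
    Right n              -> print n
```

Under GHC's usual left-to-right forcing of `(+)` on `Int` this tends to print `caught: left`, but a strategy that forced the right addend first would print `caught: right` from the very same expression.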
Distinguishable _|_s are fundamentally different from the value-or-_|_ uncertainty, since the latter is unobservable from within the language.
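To illustrate why that uncertainty is unobservable, consider a would-be definedness test (the name `isDefined` is hypothetical, not a library function): all it can do is force its argument with `seq`, so on _|_ it diverges rather than ever returning `False`:

```haskell
-- Hypothetical "check" for _|_: forcing the argument is the only
-- move available, so on _|_ this diverges instead of answering.
isDefined :: a -> Bool
isDefined x = x `seq` True

main :: IO ()
main = do
  print (isDefined (42 :: Int))
  -- print (isDefined undefined)  -- would loop/crash, never print False
```

The commented-out line is the whole point: there is no code path that observes _|_ and survives to report it.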