I think it's just a case of me using more generalised terminology.
'Pretend X doesn't exist' is, in my mind, 'speculate that X isn't enabled'. It means the same thing, doesn't it?
You don't need a guard between every instruction, because attaching the debugger is an asynchronous operation: it's already non-deterministic when the application will receive the instruction to move into debug mode, so checking frequently enough (usually once per loop iteration and once per function call) is sufficient.
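A minimal sketch of that polling pattern (names and structure are my own, not from any particular runtime): the hot code checks one cheap flag per iteration, and a debugger thread can set it asynchronously at any time.

```python
import threading

# Set asynchronously by a hypothetical debugger thread.
debug_requested = threading.Event()

def enter_debug_mode(frame_locals):
    # Placeholder: a real runtime would deoptimise here and hand the
    # reconstructed frame state over to the debugger.
    print("debugger attached, locals:", sorted(frame_locals))

def hot_loop(n):
    total = 0
    for i in range(n):
        # Safepoint poll: one cheap check per loop iteration. The loop
        # notices an attach request at the next poll, not instantly.
        if debug_requested.is_set():
            enter_debug_mode(locals())
        total += i
    return total
```

With the flag never set, the check is just a predictable branch on the fast path; a JIT would compile it down to a load-and-test (or a page-protection trick) rather than a real `Event` lookup.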
I think "we'll have a function the debugger calls, telling us to redo the compilation" does describe speculation and deoptimisation. Remember the function may be currently executing and may never return, so it's not as simple as replacing it with a different version. You may need to replace it while it's running.
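One way to picture replacing a function that's mid-execution (a hypothetical sketch, not how any specific VM does it): the running code dispatches through a mutable slot at each safepoint, so rewriting the slot takes effect for the rest of the current invocation, not just for future calls.

```python
# Mutable dispatch slot the debugger can rewrite asynchronously.
impl = {"body": None}

def fast_body(i):
    return i * 2          # the "optimised" version

def debug_body(i):
    return i * 2          # same result; imagine extra bookkeeping here

impl["body"] = fast_body

def long_running(n, switch_at):
    out = []
    for i in range(n):
        if i == switch_at:            # stands in for the debugger's async
            impl["body"] = debug_body  # "redo the compilation" request
        out.append(impl["body"](i))   # re-dispatch at each safepoint
    return out
```

The indirection is the toy version of on-stack replacement: real implementations instead rebuild the frame for the new version at the safepoint, but the observable effect is the same, and the swap happens while the old version never "returns".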
It does make compilation more complicated, because you need to be able to restore the full debug state of the application. That means storing some results you might not otherwise keep, and storing extra metadata.
> Debuggers don't follow any rules
Debuggers can be a formally or informally specified part of the language, and their behaviour may have to follow rules about which intermediate results are visible, which may constrain your compilation.
My argument is: if you do treat debugging as speculation, then your model is simpler and easier to work with, and you don't need two kinds of deoptimisation. Real languages are implemented this way.
Here are two papers I've written about these topics, taking the idea even further:
https://chrisseaton.com/truffleruby/icooolps15-safepoints/sa...
https://www.lifl.fr/dyla14/papers/dyla14-3-Debugging_at_Full...