>the kind of AI safety described in this post seems more like an extremely fancy version of program verification
It kind of is. The field of AI safety is more advanced than most people realise: there are real techniques today for, e.g., formally verifying that a neural network satisfies a specified property even under bounded perturbations of its inputs or parameters. Granted, we're still far from stopping an AGI before it can do something bad, but the tools we have today are already pushing in that direction (assuming neural networks are the right path to AGI, of course).
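
To make that concrete, here's a minimal sketch of one such technique, interval bound propagation: you push an interval of possible inputs through the network instead of a single point, and if the certified output range stays inside the safe region, the property holds for *every* input in that interval. Everything here (the network, `eps`, the two-layer shape) is made up for illustration, but the interval arithmetic itself is the real mechanism:

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    # Propagate an input interval [lo, hi] through y = W x + b.
    # Split W into positive and negative parts so each output bound
    # pairs with the matching input bound (standard interval arithmetic).
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Hypothetical 2-layer network; weights are random placeholders.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

# Question: for EVERY input within eps of x0, is the output bounded?
x0, eps = np.array([0.5, -0.2]), 0.1
lo, hi = x0 - eps, x0 + eps
lo, hi = interval_relu(*interval_affine(W1, b1, lo, hi))
lo, hi = interval_affine(W2, b2, lo, hi)
print(f"certified output range: [{lo[0]:.3f}, {hi[0]:.3f}]")
# If hi stays below your safety threshold, that's a sound (if loose)
# proof over the whole input box -- not just a test on sampled points.
```

The bounds are conservative (they widen with depth), but that's exactly the program-verification flavour: soundness over completeness.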