Emitting 'secure' code almost always means emitting 'slower' code, and one of the few things compilers are assessed on is the performance of the code they generate.
Compilers are built as a series of transformation passes. Normalisation is a big deal - if you can simplify N different patterns to the same thing, you only have to match that one canonical form later in the pipeline.
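As a sketch of what that canonicalisation looks like (all names hypothetical, for illustration only), a normalisation pass might rewrite several equivalent source patterns into one canonical form, so every later pass only has to match that one shape:

```python
# Toy normalisation pass: rewrite several equivalent patterns
# (x * 2, 2 * x, x + x) into one canonical form (x << 1), so that
# later passes only have to recognise the shift.

def normalize(expr):
    """expr is a tuple ('mul'|'add'|'shl', a, b), a variable name, or an int."""
    if isinstance(expr, tuple):
        op, a, b = expr
        a, b = normalize(a), normalize(b)
        if op == 'mul' and b == 2:
            return ('shl', a, 1)
        if op == 'mul' and a == 2:
            return ('shl', b, 1)
        if op == 'add' and a == b:
            return ('shl', a, 1)
        return (op, a, b)
    return expr

# Three different source patterns collapse to the same canonical form:
assert normalize(('mul', 'x', 2)) == ('shl', 'x', 1)
assert normalize(('mul', 2, 'x')) == ('shl', 'x', 1)
assert normalize(('add', 'x', 'x')) == ('shl', 'x', 1)
```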
So if one pass makes code slower but more secure, later passes are apt to undo that transform, or to miss other optimisations because the code no longer looks as expected.
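The classic instance is a secret-wiping store being deleted by dead-store elimination. Here is a toy pass pipeline showing the interaction (hypothetical three-address code, for illustration only): a hardening pass inserts a wipe of a secret after its last use, and a later DSE pass, which only sees a store that is never read, deletes it again.

```python
# A "security" pass followed by an optimisation pass that undoes it.

def harden(code):
    """Hardening pass: append a wipe (store of 0) of 'secret'."""
    return code + [('store', 'secret', 0)]

def dead_store_elimination(code):
    """DSE pass: drop any store to a variable that is never loaded afterwards."""
    out = []
    for i, (op, var, val) in enumerate(code):
        if op == 'store':
            later_reads = any(o == 'load' and v == var
                              for (o, v, _) in code[i + 1:])
            if not later_reads:
                continue                      # "useless" store -- deleted
        out.append((op, var, val))
    return out

program = [('store', 'secret', 42), ('load', 'secret', None)]
hardened = harden(program)                    # wipe inserted...
optimized = dead_store_elimination(hardened)  # ...and removed again
assert ('store', 'secret', 0) in hardened
assert ('store', 'secret', 0) not in optimized
```

Neither pass is wrong in isolation; the security property simply isn't visible to DSE, which is exactly the "collateral damage" problem.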
So while it is useful to know various make-it-secure transforms, which this book seems to cover, it's not at all obvious how to implement them without collateral damage.
On a final note, compiler transforms are really easy to get wrong, so one should expect the implementation of these guards to be somewhat buggy, and those bugs themselves may introduce vulnerabilities.
IBM even did its RISC research in PL.8 with safety and a pluggable compiler infrastructure in mind, similar to what people nowadays know from the LLVM approach.
Some would say that security measures in the car industry also slow drivers down and are a nuisance.
Not sure about that: what are brakes for? Slowing down and stopping, right? But then I ask: how fast would you drive if your car had no brakes? I would guess not very fast at all. Thus, one important role of brakes is to allow you to drive faster.
In practice, the more safety measures you put in place, the more confident people grow and the faster they drive. To a point, of course.
Same with programming: I prototype faster with a good static type system than I do with a dynamic one. One reason is that I just write fewer tests (including those one-off verifications in the REPL).
The only way to really improve the level of security in the industry is to assign responsibility and damages to those who fail to implement it. So far, it seems all market participants are content with 90% of security concerns being addressed by security theater.
Returns in digital stores, increasing visibility of how much it actually costs in real money to fix those issues, warranty clauses in consulting gigs (usually free of charge), and the introduction of cyber security laws like in Germany [0].
[0] - https://www.bsi.bund.de/EN/Das-BSI/Auftrag/Gesetze-und-Veror...
This is the punishment approach. What it inevitably leads to is denial, coverup, unwillingness to innovate, and not fixing problems because fixing them is an implicit admission of fault.
The better way is a no-fault approach: encouraging disclosure and openness about bugs, and collaboration in fixing them.
... and finally adoption of the required methods and reaching the required standards, as in countless cases of successful regulation since time immemorial.
How do you give companies a positive incentive to fix an issue if the issue does not cost them money? Fixing such an issue is a competitive disadvantage.
> The better way is for no-fault, encouraging disclosure and openness about bugs, and collaboration in fixing them.
What does that look like? Paying companies per disclosed bug in their software? State-sponsored white-hat hacker teams that find and fix the companies' bugs for them without disclosure? I can't think of anything that sounds realistic.
I would add that the vast majority of businesses also choose features over speed.
In some cases they pay lip service to speed, for instance by choosing C++, but pay zero attention to actual speed, because they end up writing in a pointer-fest RAII style that destroys memory locality and misses the cache all the time. Compared to that, even Electron doesn't look too unreasonable.
The D compiler would be faster if we turned off array bounds checking and assert checking. But we leave those security features turned on for release builds.
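As a rough sketch of that trade-off (hypothetical names; this is not the D compiler's actual code generation), bounds checking turns every array access into a compare-and-branch before the load, which is the cost that turning it off would save:

```python
# What array bounds checking amounts to: an extra comparison and branch
# on every indexing operation. (Python itself always checks, of course;
# the point is the extra branch made explicit in the source.)

def unchecked_index(arr, i):
    return arr[i]                      # what the optimizer would like to emit

def checked_index(arr, i):
    if not 0 <= i < len(arr):          # the extra check paid on every access
        raise IndexError(f"index {i} out of range for length {len(arr)}")
    return arr[i]

assert checked_index([1, 2, 3], 1) == 2
```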
Regarding the first half, I'm not sure how it relates to compilers.