From my copy...
> Efficiency
> ...
> Note that optimizing for time may sometimes cost you in space or programmer efficiency (indicated by conflicting hints below). Them’s the breaks. If programming was easy, they wouldn’t need something as complicated as a human being to do it, now would they?
> ...
> Programmer Efficiency
> The half-perfect program that you can run today is better than the fully perfect and pure program that you can run next month. Deal with some temporary ugliness. Some of these are the antithesis of our advice so far.
> • Use defaults.
> • Use funky shortcut command-line switches like -a, -n, -p, -s, and -i.
> • Use for to mean foreach.
> • Run system commands with backticks.
> ...
> • Use whatever you think of first.
> • Get someone else to do the work for you by programming half an implementation and putting it on GitHub.
> Maintainer Efficiency
> Code that you (or your friends) are going to use and work on for a long time into the future deserves more attention. Substitute some short-term gains for much better long-term benefits.
> • Don’t use defaults.
> • Use foreach to mean foreach.
...
In threaded code it's not uncommon to analyze a piece of data and fire off background tasks the moment you encounter them. But if your workload is a DAG instead of a tree, you don't know if the task you fired is needed once, twice, or for every single node. So now you introduce a cache (and if you're a special idiot, you call it Dynamic Programming, which it is fucking not) and deal with all of the complexities of that fun problem.
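A minimal asyncio sketch of that fire-on-encounter style, under my own assumptions (all names here are invented, not from the post): once a node can have multiple parents, you're forced to bolt on a cache of in-flight tasks so shared nodes aren't processed once per parent.

```python
import asyncio

calls: dict[str, int] = {}            # how many times each node's work actually ran
_tasks: dict[str, asyncio.Task] = {}  # the cache you're forced to introduce for a DAG

async def process(node: str, graph: dict[str, list[str]]) -> str:
    calls[node] = calls.get(node, 0) + 1
    for child in graph.get(node, []):
        # Without this cache check, a node with two parents gets fired twice.
        if child not in _tasks:
            _tasks[child] = asyncio.ensure_future(process(child, graph))
    # Awaiting a Task that another parent also awaits is fine; the work runs once.
    await asyncio.gather(*(_tasks[c] for c in graph.get(node, [])))
    return node.upper()  # stand-in for the real work

# 'd' is reachable through both 'b' and 'c' -- a DAG, not a tree.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
result = asyncio.run(process("a", graph))
```

The cache works, but note how it's now smeared through the traversal logic itself, which is part of the complexity the post is complaining about.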
But it turns out that in a GIL environment you're making a lot less forward progress on the overall problem than you think, because now you're context-switching back and forth between two, three, five tasks, each with its own code and data hotspots, all on the same CPU rather than running on separate cores. It's like the worst implementation of coroutines.
If instead you scan the data and accumulate all the work to be done, then run those tasks, then scan the new data and accumulate the next bit of work, you don't lose much CPU or wall-clock time in single-threaded async code. What you get in the bargain, though, is a decomposition of the overall problem that makes it easy to spot improvements: deduping tasks, dealing with backpressure, adding a cache that's more orthogonal, and, perhaps most importantly of all, debugging this giant pile of code.
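The scan-then-run alternative can be sketched like this (again a hedged sketch with invented names, not the post's actual code): collect the current frontier of work, dedupe it in one obvious place, run the batch, then scan the results for the next frontier.

```python
import asyncio

async def do_work(node: str) -> str:
    return node.upper()  # stand-in for the real task

async def run_batched(graph: dict[str, list[str]], roots: list[str]) -> dict[str, str]:
    done: dict[str, str] = {}
    frontier = list(roots)
    while frontier:
        # Accumulate first: deduping happens here, in one line,
        # instead of via a cache smeared through the traversal.
        batch = [n for n in dict.fromkeys(frontier) if n not in done]
        results = await asyncio.gather(*(do_work(n) for n in batch))
        done.update(zip(batch, results))
        # Scan the new data for the next round of work.
        frontier = [c for n in batch for c in graph.get(n, [])]
    return done

# Same DAG as before: 'd' appears in two parents' child lists,
# but the frontier dedupe means it is processed exactly once.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
done = asyncio.run(run_batched(graph, ["a"]))
```

Each loop iteration is a natural seam: backpressure is just capping the batch size, and debugging is stepping through one frontier at a time instead of chasing interleaved tasks.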
So I've been going around making code faster by making it slower: removing most of the 'clever' and sprinkling a little crypto-cleverness (the kind of clever that elicits an 'of course' response) and wisdom on top.
That book is one of the most underrated and overlooked works on the philosophy of programming I've ever read. It's ostensibly about best practices in programming Perl (which some people consider a complex language), but in reality this is a very deep book about the best practices for programming in any language.
Note the above excerpt is pretty much universally applicable no matter what the language. Much of the book is written at that level.
https://www.oreilly.com/library/view/programming-perl-4th/97...
Ultimately both are 'too simple', resulting in a combinatorial explosion of states, and at least a quadratic expansion of consequences.
We often write software to deal with the consequences of something else. It's possible, and not that uncommon, for the new consequences to be every bit as onerous as the originals, or more so. I call this role a 'sin eater', because you're just transferring suffering from one individual to another, and it sounds cooler and more HR-appropriate than 'whipping boy'.
I suspect that at first I did this in an attempt to hack my own sense of motivation, like putting the books you need to return next to the front door. But it turned out to be quite handy for seducing junior developers (and sometimes senior developers) into finishing an idea that you started.
They're so proud that they've thought of something you didn't think of, rather than realizing it's something you were looking for a maintainer (or free cycles) for.