If you want to follow this advice, I think Leslie Lamport's formulation is the most concrete. His version is:
- Write down a small model of what you are about to build before building it.
- Try to see how that model could fail. If it still looks good, either make the spec/design doc/model clearer and more concrete, or start writing the code.
- If plain text gets too vague, use math. If you want tools to check it, use TLA+ with TLC or Apalache. Quint and stateright.rs are related tools in the same space.
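To make the first two bullets concrete, here is a minimal sketch in plain Python (my own toy illustration, not TLA+ and not taken from the linked talks): a model of a deliberately naive mutual-exclusion protocol, plus a breadth-first search over its reachable states that checks a mutual-exclusion invariant in each one. TLC and Apalache do essentially this, with a real specification language, liveness checking, and far better scaling on top.

```python
# A minimal sketch of "write a small model of what you are about to build and
# try to see how it could fail", in plain Python rather than TLA+. The protocol
# modeled here (check the other process's flag, then raise your own and enter)
# is a deliberately naive illustration, not something from the talks above.
from collections import deque

# A state is (pc0, pc1, flag0, flag1); pc is "idle", "checked", or "critical".
INIT = ("idle", "idle", False, False)

def steps(state):
    """Yield every state reachable in one atomic step of either process."""
    for i in (0, 1):
        other = 1 - i
        pc, flags = list(state[:2]), list(state[2:])
        if pc[i] == "idle" and not flags[other]:   # see the other flag down...
            pc[i] = "checked"
            yield tuple(pc + flags)
        elif pc[i] == "checked":                   # ...then raise ours and enter
            pc[i], flags[i] = "critical", True
            yield tuple(pc + flags)
        elif pc[i] == "critical":                  # leave and lower the flag
            pc[i], flags[i] = "idle", False
            yield tuple(pc + flags)

def mutual_exclusion(state):
    return not (state[0] == "critical" and state[1] == "critical")

# Breadth-first search of the whole state space, checking the invariant everywhere.
seen, queue = {INIT}, deque([INIT])
while queue:
    s = queue.popleft()
    if not mutual_exclusion(s):
        print("invariant violated in reachable state:", s)  # the model found the bug
        break
    for t in steps(s):
        if t not in seen:
            seen.add(t)
            queue.append(t)
else:
    print("invariant holds over", len(seen), "reachable states")
```

The point is not the checker itself: writing down the states and steps forces you to be concrete about the design before any production code exists, and the violating state it prints is exactly the "try to see how that model could fail" step.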
---
Materials:
- TLA+ videocourse by Lamport: https://lamport.azurewebsites.net/video/videos.html
- How to win a Turing Award speech: https://www.youtube.com/watch?v=tw3gsBms-f8
- Thinking above the code: https://www.youtube.com/watch?v=-4Yp3j_jk8Q
This is the reason why AI-assisted programming has not turned out to be the silver bullet we have been hoping for, at least not yet. Muddled prompting by humans gets you the Homer Simpson car you wished for, which will eventually collapse under its own weight.
I've been thinking a lot about Programming as Theory Building [0] as the missing piece in AI-assisted engineering. Perhaps there are approaches which naturally focus on the essence while ignoring the accidents, but I'm still looking for them. Right now the state of the art I see ignores accident and essence alike, and degrades the ability to make progress.
Please inform me if there are any approaches you know that work! And lest this sound pessimistic, far from it. This state of affairs is actually intoxicatingly motivating. Feels like we have found silver, and just need to start learning to mould bullets.
[0] Another classic required reading of the industry https://pages.cs.wisc.edu/~remzi/Naur.pdf
This is also why one of my instructions to coding agents is that they adhere to established coding and testing patterns, even where those patterns appear sub-optimal.
That was true for almost seventy years until roughly last year.
AI is the silver bullet - my output is genuinely 10x what it was before Claude Code existed.
Management just keeps hiring developers. Coordination and communication between developers and product managers get more difficult.
Management keeps agreeing to unrealistic timelines for new projects while the old projects are still unfinished.
The architecture of the codebase is so convoluted and confusing that new devs take a very long time to get anywhere close to productive and self-sufficient. Time to develop and release acceptable features gets longer and longer.
On top of all that, senior devs and tech leads are very uptight about code quality and 'how to write your code', yet can't even agree with each other on 'how to write your code', leading to pull-request hell for devs and occasional whole-week rewrites.
I'd buy a copy of this book and put it on my boss's desk, but he won't read it.
Vibe-coded software is the equivalent of a Marvel green-screen movie.
AI-accelerated iteration and undisciplined use seem to me the main risks lately: drift from any prompting, accumulation of features, plausible-looking patches that are actually slop. I wrote about this recently [0]; it creates a dynamic where you aren't just making the software less coherent, you're paying more and more to make it so. Curious to hear others' experience of this.
Exponentially? Quadratically, I would say.
“Through more vigorous computer programing and more sophisticated scheduling, it was possible to reduce the changeover period from two weeks to two days.”
- Lee Iacocca (in his autobiography)
It’s a new frontier, and there are no absolutes. But I suspect the most durable AI systems will be built around highly composable, well-orchestrated agents.
Adding AI to something makes it later/worse/slop
Fred Brooks wrote that book when they were programming IBM operating systems in assembly language.
Times have really, really changed - do not pay attention to the messages of this book except for historical fun.
The last three times I read the book, everything held.
This time, I'm not so sure: AI does change things significantly. Perhaps not for all teams and not all scales of software, but in my case (solo developer, complex software system) I did measure a 12x productivity increase [1].
Also, some of the problems Brooks describes became much easier, if not borderline trivial, with AI. For example, maintaining design documentation that stays consistent with the software being built: I do this, and it is no longer a problem.
I still think most of what Brooks wrote is applicable today. I think the biggest difference is that AI enables smaller teams to work on larger systems, and the biggest benefit is for single-person teams (ahem) like me. I see it as another step that allows me to tackle larger systems: the previous one was Clojure, which reduced incidental complexity so significantly that I was able to develop the system to the size it is today. AI is the next step: it allows me to build features that would have taken me years in a span of months. Not because of "vibe coding", but primarily because I can work on a set of design documents and turn my ideas into a coherent design.
[1] For the nitpickers: yes, measured, not guessed. Yes, the metric was reasonable. No, it wasn't "lines of code" or something equally silly; in fact, one of my main goals is reducing code size as much as possible. Yes, I compared longer time periods: 2 months with AI against the average over the previous 12 months. No, the metric wasn't gamed: this is a solo business and I have no interest in gaming my own metrics. I earn a living from this work, so this is as objective as it gets.