But LLMs are particularly insidious because they're such a leaky abstraction. If you ask an LLM to implement something:
- First, there's only a chance it will output something that works at all
- Then, it may fail on edge-cases
- Then, unless it's very trivial, the code will be spaghetti, so neither you nor the LLM can extend it
Contrast this with a language like C, where you can't recover the original source from the assembly, but the assembly is almost certainly correct. When GCC or Clang does fail, only an expert can figure out why, but it happens rarely enough that there's always an expert available to look at it.
Even if LLMs get better, English itself is a bad programming language, because it's imprecise and not modular. You can't describe tasks like "style a website exactly how I want" or "implement this complex algorithm" without being extremely verbose and inventing jargon, at which point you'd spend less effort, and write less, using a real programming language.
If people end up producing all code (or art) with AI, it won't be through prompts, but through fancy (perhaps project-specific) GUIs, if not brain interfaces.
Here's what "spaghetti" looks like:
https://github.com/ctoth/Qlatt/blob/master/public/rules/fron...
One thing I want to point out: before I started, I gathered all these papers and asked Claude to read them:
https://github.com/ctoth/Qlatt/tree/master/papers
Producing artifacts like this:
https://github.com/ctoth/Qlatt/blob/master/papers/Klatt_1980...
Note how every rule and every configurable thing in the synthesizer pipeline has a citation to a paper? I can generate a phrase, and with the "explain" command I can see precisely why a certain word was spoken a certain way.
It's all about planning and architecting before any code is generated. It's not vibe coding (prompt -> code); it's idea -> design -> specification -> review -> specification -> design -> architecture -> define separate concerns -> design interfaces/inter-module APIs -> define external APIs -> specification/review -> specification -> and only then start building the code, module by module. Writing in C++ at least, I have had very good results with this approach, and I'm ending up with better, more maintainable code, fewer bugs to hunt, and all in about 15-25% of the time it would have taken to hack something together as an MVP that would have to be rewritten.
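To make the "design interfaces/inter-module APIs first" step concrete, here's a minimal sketch (the `TokenStore` module and its methods are hypothetical, invented for illustration): the inter-module API is pinned down and reviewed as an abstract interface before any implementation exists, so an implementation can later be written, regenerated, or swapped without touching callers.

```cpp
#include <string>
#include <utility>
#include <vector>

// Step 1: the interface is designed and reviewed first.
// Callers depend only on this contract, never on a concrete module.
class TokenStore {
public:
    virtual ~TokenStore() = default;
    // Associate `value` with `key`, overwriting any previous value.
    virtual void put(const std::string& key, const std::string& value) = 0;
    // Return the stored value, or "" if the key is absent.
    virtual std::string get(const std::string& key) const = 0;
};

// Step 2: only after the interface is agreed on is a module built
// against it (here, a trivial in-memory implementation).
class InMemoryTokenStore : public TokenStore {
public:
    void put(const std::string& key, const std::string& value) override {
        for (auto& kv : items_) {
            if (kv.first == key) { kv.second = value; return; }
        }
        items_.emplace_back(key, value);
    }
    std::string get(const std::string& key) const override {
        for (const auto& kv : items_) {
            if (kv.first == key) return kv.second;
        }
        return "";
    }
private:
    std::vector<std::pair<std::string, std::string>> items_;
};
```

The payoff of this ordering is that a generated (or hand-hacked) `InMemoryTokenStore` can be thrown away and rewritten module by module, while everything compiled against `TokenStore` stays untouched.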
People that don't understand the tools they use are doomed to reinvent them.
Perhaps the interface will evolve into pseudocode, where the AI fills in undefined or boilerplate parts with best estimates.
After the war when the dust settles, we'll start over. We might escape the death of the sun, but not the heat death of the universe. Therefore, none of the above matters and I'm rambling.
Having reinvented them, they will understand. In this way, human progress is unstoppable, even with knowledge micro (and macro) dark ages.
I absolutely agree. I've only dabbled in AI coding, but every time I feel like I can't quite describe to it what I want. IMO we should be looking into writing some kind of pseudocode for LLMs. Writing code is the best way to describe code, even if it's just pseudocode.
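One way this "pseudocode for LLMs" could look, sketched as a hypothetical example (the `tokenize` function and its contract are invented for illustration): the human writes the skeleton, types, and contracts as comments, and only the marked bodies would be handed to the model. Here the "filled-in" body is a plain reference implementation.

```cpp
#include <string>
#include <vector>

// Contract (human-written): split `text` on spaces, dropping empty
// tokens; empty input yields no tokens.
std::vector<std::string> tokenize(const std::string& text) {
    // <llm: fill in the body, honoring the contract above>
    std::vector<std::string> out;
    std::string current;
    for (char c : text) {
        if (c == ' ') {
            if (!current.empty()) out.push_back(current);
            current.clear();
        } else {
            current.push_back(c);
        }
    }
    if (!current.empty()) out.push_back(current);
    return out;
}
```

The point is that the signature, types, and contract pin down the intent far more precisely than an English prompt could, while still leaving the boilerplate body to the machine.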
> By programming, they learn how the system fits together, where the limits are, and what is possible. From there they can discover new possibilities, but also assess whether new ideas are feasible.
Maybe I have a different understanding of "business context", but I would argue the opposite. AI tools allow me to spend much more time on the business impact of features, think of edge cases, talk with stakeholders, talk with the project/product owners. Often there are features that stakeholders dismiss that seemed complex and difficult in the past, but are much easier now with faster coding.
Code was almost never the limiting factor before. It's the business that is the limit.
If anything, I know more about the code I work on than ever before, and at a fraction of the effort, lol.
However, I think what a lot of people don't realize is that the reason many executives and business users are excited about AI, and don't mind developers getting replaced, is that the product is already a black box to them.
They navigate such complex decision spaces, full of compromises, tensions, political knots, that ultimately their important decisions are just made on gut feelings.
Replace the CEO with an LLM whose system prompt is carefully crafted and vetted by the board of directors, give it an adequate digital twin of the company to project its moves, and I'm sure it would maximize shareholder interests much better.
Next up: apply the same recipe to government executive power. Couldn't be much worse than orange man.
If we outsource the whole “hands that think” loop to agents, we may ship faster… but we also risk losing the embodied understanding that lets us explain why something is hard, where the edges are, and how to invent a better architecture instead of accepting “computer says no.”
I hope we keep making room for “luxury software”: not in price, but in care—the Swiss-watch mentality. Clean mechanisms, legible invariants, debuggable behavior, and the joy of building something you can trust and maintain for years. Hacker News needs more of that energy.
The risk of making the source in a compiler a black box is pretty high.
Because if your program is a black box compiled with a black box, things might get really sporty.
There are many platform libraries and such that we probably do not want as black boxes, ever. That doesn't mean an AI can't assist, but that you'll still rewrite and review whatever insights you got from the AI.
If the latest game or Instagram app is a black box nobody can decipher, who cares? But if such an app becomes a billion-dollar success, I'll feel sorry for the engineers tasked with figuring out why the computer says no.
That server did its job for about 10 years with a few upgrades - more drives and a faster CPU - and really only needed replacement because of its low maximum memory limit. Its successor ended up being a DL380 G7, which is still in use; the Scaleo Home Server (a.k.a. Intel SS4200) now waits in the barn for potential reuse as a backup server.
[1] http://ss4200.pbworks.com/w/page/5122767/Windows%20refund%20...
Are you sure you can't think of a commonly used operating system which doesn't?
Name ends with "ux", or maybe "BSD"?
When I wrote a paper in collaboration some time ago, it felt very weird to have large parts of the paper of which I had only superficial knowledge (incidentally, I had retyped everything my co-author did, but in my own notation) and no deep understanding of how it was obtained or of the difficulties encountered. I guess this is how people who start vibe coding must feel.
For me, that material consciousness in computers was always about grasping the way the system works holistically. To feel the system. To treat it as almost a living organism.
It's not like working with legacy codebases used to be.
In a software context, I wonder what impact the language used has on the sense of "resistance".