1. You treat your code as a means to an end to make a product for a user.
2. You treat the code itself as your craft, with the product being a vector for your craft.
The people who typically have the most negative things to say about AI fall into camp #2, where AI is automating a large part of what they considered their art while enabling people in camp #1 to iterate on their products faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good your code is.
The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
With that said, I do have respect for people in the latter camp. But they're generally best suited for projects where that level of craftsmanship is actually useful (think: mission-critical software, libraries other devs depend on, etc).
I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
absolutely false.
> The general public does not care about anything other than the capabilities and limitations of your product.
also false.
People may not know that the reason they like your product is that the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports and fix problems quickly).
The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
Exactly, thank you for putting it like that.
So far it’s been my observation that it’s only people who think like the OP who put the situation in those terms. It’s a false dichotomy which has become a talking point. By framing it as “there are two camps, they’re just different, neither is better”, it lends legitimacy to their position.
For an exaggerated, non-comparable example meant only to illustrate the power of such framing devices, one could say: “there are people who think guns should be regulated, and there are people who like freedom”. It puts the matter into an either/or situation. It’s a strategy to frame the conversation on one’s terms.
I would classify all of those as "capabilities and limitations of your product"
I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.), and in that sense I agree no customer who's just using the product actually cares about that.
Another definition of "good code" is probably "code that meets the requirements without unexpected behavior", and in that sense of course end users care about good code. But you could give me two black boxes that act the same externally, one written as a single line with single-character variables and another written to be readable, and I wouldn't care so long as I wasn't expected to maintain either.
This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
> There is no substitute for high quality work.
You're right because there really isn't a consistent definition of what "high quality" software work looks like.
Quality of code is just not that important of a concept anymore for the average web developer building some saas tool. React code was always crap anyways. Unless you are building critical systems like software that powers a plane or medical equipment, then code quality just doesn’t really matter so much in the age of AI. That may be a hard pill to swallow for some.
> parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
and of course, this isn't even the worst. A lot worse can happen, such as data loss and corruption. Things that can directly affect people's lives in real life.
As a developer, these things are constantly on my mind, and I believe this is the case for people who do care about the quality.
As has been said elsewhere many times, AI producing code is not the same as say, a compiler producing machine code. It is not such a well-defined strong abstraction, hence why code quality still is highly relevant.
It is also easily forgotten that code is as much a social construct as a technical one (e.g. things that have to be done in certain ways due to real-life constraints, that you wouldn't have to do in an ideal world).
Sometimes I feel very powerless though. It feels as if some of us are talking past each other, even if we seemingly are using the same words like "quality". Or in a way, that is what makes this more futile-- that we are using the same words and hence seemingly talking about the same thing, when we are referring to completely different things.
It is difficult to have a conversation about a problem when some of us don't even see it as a problem to begin with-- until it reaches a point when it starts affecting their lives as well, whether it be directly or indirectly. But that takes time.
Time will tell.
Exactly. A lot of devs are optimizing for whether the feature is going to take a day or an hour, but not contemplating that it's going to be out in the wild for 10 years either way. Maybe do it well once.
That's where you get it wrong. The world is full of mediocre and low quality work in many, many fields. We all, in fact, depend on mediocre work in many ways.
Many, many people would prefer a solution with mediocre or even bad code than no solution at all while they wait for "high quality work" that never appears.
The magic of LLMs, especially as the technology improves, is that a truly mind-boggling number of solutions to problems will be created with thoroughly mediocre (or worse!) LLM-generated code. And the people benefitting from those solutions won't care! They'll be happy their problems are being solved.
In other words, I would, when possible, absolutely make a purchasing decision based on how good the code is (or based on how good I estimate the code to be), among other things.
[0] The concept of design is often misunderstood. First, obviously, when it’s classified as “how the thing looks”; then, perhaps less obviously, when it’s classified as “how the thing works”. A classification I am arriving at is, roughly, “how the thing works over time”.
> absolutely false.
Actually, you are both correct.
Nobody makes a purchasing decision based on code quality.
But they may later regret a purchasing decision based on code quality.
In the end, software is a means to an end. And if you do not get to that end because the software is crap, it will be replaced, hopefully by someone else.
Have you taken a look at the world in the past… I dunno, at least several decades? That ceased to be true somewhere around the time I was in high school, maybe before.
High latencies, outages, memory leaks, security vulnerabilities, will be seen in your AWS bill or whatever hardware or service you deploy your software to. If your code isn’t clear enough to understand what it’s really doing, you have no chance at preventing or addressing the above.
Second, shitty electron apps are pervasive.
If there are people who, on principle, demand the superior product then those people simply aren't numerous enough to matter in the long run. I might be one of those people myself, I think.
From working on many many old and important code bases, the code quality is absolutely trash.
I know engineers who aren't that lucky and struggle in "enterprise" software development, where slop was a feature for decades - the people making decisions won't use the software (their low-paid employees will), and software monstrosities need a hell of a lot of support, which sometimes brings more revenue than the original purchase.
You present no proof, only touchy-feely "it must be so" assertions, plus pseudo-proof like the claim that software longevity is on the line.
Please first define software longevity and quality in detailed terms: what is it exactly, and how do you assess it regarding quality and quantity?
Doom is, judged by myself and by its versatility, a masterpiece. However, we all know (and the Black Book is quite open about it) that the code itself arguably doesn't hit modern standards as of today, and there is an infamous WTF hardcoded value used as a speed hack. So what? It inspired a whole generation. Second Reality? A mind-bending demo that accelerated quite a few Finnish developer careers, now sadly forgotten after being considered for decades to be for the PC demo scene what Doom was for the underground gamer scene. A nice match anyway.
Is Windows a masterpiece or not? Judging by its UX/UI definitely not, by its versatility and transposable potential I would rate it a masterpiece.
Thousands of developers work on codebases where chipsets and compiler settings change quite significantly - so there are ups and downs even in domains where a simple deadline or requirement change suddenly turns technological prowess into useless trash.
And the most heretical question ever: what if your so-called number-one quality software product is fooling you and could be done way better? You didn’t even consider that option, which makes your point shaky, to say the least.
I mean Jira is probably one of the most purchased software packages in the world that is specifically used by developers that care about their craft - you don't want to look at the code, trust me.
The assumption that people CARE about your product is the most Silicon Valley, Hacker News, forgot-what-the-world-outside-of-tech-looks-like thing ever.
People CARE about their software about as much as they CARE about their bank or a new finance product... People getting excited over software is more of a historical footnote than anything real people do in 2026.
The vast majority of Software is one of two things:
A) a tool
B) a source of entertainment
As a tool it either needs to provide value or it's something that is shoved on you by work.
The user experience of your average mobile game today is fucking awful. People put up with a massive amount of garbage for a trickle of fun. So much of the web looks like a mid-90s Hong Kong back alleyway --- blinking ads, videos screaming at you, and someone trying to steal your wallet. And the majority of things people are forced to use for work... well... Show me someone who is excited about their ERP or CMS or network drive... Show me someone who thinks that anything to do with Salesforce is something to be excited over.
> The general public does not care about anything other than the capabilities and limitations of your product.
A segment of our industry is screaming about the security of OpenClaw. People don't care (because we have also made a mockery of security as well) - they are using it as a tool that will deliver them a solution. It strips away all the arcana that made people think we were wizards and writes the damn spells for them. It's a dumpster fire, and people are thrilled about it and what it's delivering them. And that's software not made by you or me.
For now. We only call it slop when we notice it. Majority of AI text, music, images, videos and code is indistinguishable and you use it every day.
This whole "treat your code as craft" thing reminds me of organic farming, zero waste, etc. movements. Cute movements that only a minority of people care about.
Look through the list of top apps in mobile app stores, most used desktop apps, websites, SaaS, and all other popular/profitable software in general and tell me where you see users rewarding quality over features and speed of execution.
That's the minimalism that's been lost.
That's why I find the group 2 arguments disingenuous. Emotional appeal to conservatism, which conveniently also props up their career.
Why all those parsers and package systems when what's really needed is dials and min-max geometric functions, from Grand Theft Auto geometry to tax returns?
Optimization can be (and will be) engineered into the machine through power regulation.
There's way too many appeals to nostalgia emanating from the high tech crowd. Laundering economic anxiety through appeals to conservatism.
Give me an etch a sketch to shape the geometry of. Not another syntax art parser.
There are some types of software (e.g. websites especially) where a bit of jank is generally acceptable. Sessions are relatively short, and your users can reload the webpage if things stop working. The technical rigor of these codebases tends to be poor, but it's generally fine.
Then there's software which is very sensitive to issues (e.g. a multi-player game server, a driver, or anything that's highly concurrent). The technical rigor here needs to be very high, because a single mistake can be devastating. This type of software attracts people who want to take pride in their code, because the quality really does matter.
I think these people are feeling threatened by LLMs. Not so much because an LLM is going to outperform them, but because an LLM will (currently) make poor technical design decisions that will eventually add up to the ruin of high-rigor software.
If this level of quality/rigor does matter for something like a game, do you think the market will enforce this? If low rigor leads to a poor product, won't it sell less than a good product in this market? Shouldn't the market just naturally weed out the AI slop over time, assuming it's true that "quality really does matter"?
Or were you thinking about "matter" in some other sense than business/product success?
People who care about code quality are not artists who want to paint on the company's dime. They are people who care about shipping a product deeply enough to make sure that doing so is a pleasant experience both for themselves and their colleagues, and who have the maturity to do a little more thinking today so that next week they can make better decisions without thinking, and so that they don't get called at 4 AM the night after launch for emergency debugging of an issue that really should have been impossible if the thing had been properly designed.
> No one has ever made a purchasing decision based on how good your code is.
Usually they don't get to see the internals of the product, but they can make inferences based on its externals. You've heard plenty of products called a "vibe-coded piece of crap" this year, even if they're not open source.
But also, this is just not true. Code quality is a factor in lots of purchasing decisions.
When buying open source products, having your own team check out the repo is incredibly common. If there are glaring signs in the first 5 minutes that it was hacked together, your chances of getting the sale have gone way down. In the largest deals, inspecting the source code is sometimes part of the process itself.
It was for an investment decision rather than for a purchase, but I've been personally hired to do some "emergency API design" so a company can show that it both has the thing being designed, and that their design is good.
Speak for yourself. This is exactly the GP's point. Some people care more about the craft of code than the output. I personally find writing good code to be what motivates me. Obviously it's a spectrum; shipping is good too. But it's not why I get up in the morning.
There are products that are made better when the code itself is better. I would argue that the vast majority of products are expected to be reliable, so it would make sense that reliable code makes for better product. That's not being a code craftsman, it's being a good product designer and depending on your industry, sometimes even being a good businessman. Or, again, depending on your industry, not being callous about destroying people's lives in the various ways that bad code can.
And at the same time I hope that you will some day be forced to maintain a project written by someone else with that mindset. Cruel, yes. But unfortunately schadenfreude is a real thing - I must be honest too.
I have gotten too old for ship-now, ask-questions-later projects.
If it's harder to work with, it's harder to work with, it's not the end of the world. At least it exists, which it probably wouldn't have if developed with "camp 2" tendencies.
I think camp 2 would rather see one beautiful thing than ten useful things.
1. The people who don't understand (nor care) about the risks and complexity of what they're delivering; and
2. The people that do.
Widespread AI usage is going to be a security nightmare of prompt injection and leaking credentials and PII.
> No one has ever made a purchasing decision based on how good your code is.
This just isn't true. There's a whole process in purchasing software, buying a company or signing a large contract called "due diligence". Due diligence means to varying degree checking how secure the product is, the company's processes, any security risks, responsiveness to bugfixes, CVEs and so on.
AI is going to absolutely fail any kind of due diligence.
There's a little thing called the halting problem, which in this context basically means there's no way to guarantee that the AI will be restricted from doing anything you don't want it to do. An amusing example was an Air Canada chatbot that hallucinated a refund policy that a court said it had to honor [1].
How confident are we going to be that AIs won't leak customer information, steal money from customers and so on? I'm not confident at all.
[1]: https://arstechnica.com/tech-policy/2024/02/air-canada-must-...
I've worked on a project that went over the complexity cliff before LLM coding even existed. It can get pretty hairy when you already have well-established customers with long-term use-cases that absolutely cannot be broken, but their use-cases are supported by a Gordian Knot of tech debt that practically cannot be improved without breaking something. It's not about a single bug that an LLM (or human) might introduce. It's about a complete breakdown in velocity and/or reliability, but the product is very mature and still makes money; so abandoning it and starting over is not considered realistic. Eager uptake of tech debt helped fuel the product's rise to popularity, but ultimately turned it into a dead end. It's a tough balancing act. I think a lot of LLM-generated platforms will fall eventually into this trap, but it will take many years.
LLMs are often poor at writing tests that provide useful information to human readers, and poor at writing tests that can survive project evolution. To be fair, humans are also poor at these tasks when done in hindsight, after all the information you normally want to capture in tests has been forgotten. That boat has been missed for legacy code no matter how you slice it. But LLMs are quite good at writing tests that lock in existing functionality in the rawest way. It seems like LLM generation is actually the best hope of saving such a project.
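A minimal sketch of what "locking in existing functionality in the rawest way" can look like: a characterization (golden-master) test that records whatever the legacy code currently does and fails on any deviation. The `legacy_pricing` function and its quirky discount are hypothetical stand-ins for real legacy code.

```python
# Characterization test: pin down current behavior without asserting
# that it is "correct" in any deeper sense. Quirks are preserved on purpose.

def legacy_pricing(quantity: int, member: bool) -> float:
    # Hypothetical legacy function standing in for real tangled code.
    price = quantity * 9.99
    if member:
        price *= 0.9
    if quantity > 10:  # quirky legacy discount nobody remembers adding
        price -= 5.0
    return round(price, 2)

def snapshot() -> dict:
    # Record current outputs across a grid of inputs; any future refactor
    # must reproduce these exactly, quirks included.
    return {
        (q, m): legacy_pricing(q, m)
        for q in (1, 5, 10, 11, 50)
        for m in (False, True)
    }

GOLDEN = snapshot()

def test_characterization() -> None:
    for args, expected in GOLDEN.items():
        assert legacy_pricing(*args) == expected

test_characterization()
print("all characterization cases match")
```

In practice the golden values would be serialized to disk once and reloaded on each run, so the refactored implementation is compared against the recorded behavior rather than the old code itself.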
To me, the entire point of crafting good code is building a product with care in the detail. They're inseparable.
I don't think I've ever in my life met someone who cared a lot about code and technology who didn't also care immensely about detail, and design, and craft in what they were building. The two are different expressions of the same quality in a person, from what I've seen.
Would those companies be better off just using pen and paper? Because "craft code" programmers don't have time for this, and not all companies can pay for bespoke software. Well, maybe now they can, with AI "slop".
Likewise, should people who don't have the skills or means to cook delicious and nutritious meals just starve without food? Or is it okay that they eat something which isn't perfect?
Craft often inspires a quasi-religious adherence to fight the ever-present temptation to just cut this one corner here real quick, because is anything really going to go wrong? The problems that come from ignoring craft are often very far-removed from the decisions that cause them, and because of this craft instills a sense of always doing the right thing all the time.
This can definitely go too far, but I think it's a complete misunderstanding to think that craft exists for reasons other than ensuring you produce high-quality products for users. Adherents to craft will often end up caring about the code as end-goal, but that's because this ends up producing better products, in aggregate.
For personal projects, I've been in both camps:
For scripts and one-offs, always #1. Same for prototypes where I'm usually focused on understanding the domain and the shape of the product. I happily trade code quality for time when it's simple, throwaway, or not important.
But for developing a product to release, you want to be able to jump back in even if it's years later.
That said, I'm struggling with this with my newest product. Wavering between the two camps. Enforcing quality takes time that can be spent on more features...
Code quality is absolutely important. It’s just not a quality that’s easily visible to a layman. I can definitely feel the difference as a user when a program has been crafted with care.
It mystifies me when people don't intuit this.
For any suitably sized project, there are parts where elegance and friction removal are far more important than others. By an order or two of magnitude.
I have shipped beautifully honed, high-craft code. Right alongside jank that was debugged in the "well, it seems to work now" and "don't touch anything behind this in-project PI" category.
There are very good reasons and situations for both approaches, even in one project.
1. you care about shipping working, tested code that solves a specific business/user problem
2. you care about closing tickets that were assigned to you
The question where experience comes in is knowing when quality is and isn't worth the time. I can create all sorts of cool software I couldn't before, because now I can quickly pump out "good enough" Android apps or React front ends! (Not trying to denigrate front-end devs; it's just a skill I don't have.)
Code is usually a liability. A means to an end. But is your code going to run for a minute, a month, a year, or longer? How often will it change? How likely are you going to have to add unforeseen features? Etc. Etc. Etc.
That influenced some unfortunate interactions with people and meant that no one could be held to their agreements since you never knew if they received the agreements.
So, well, code quality kind of matters. But I suppose you're still right in a sense - currently people buy and use complete crap.
You ever notice how everyone who drives slower than you is a moron and everyone who drives faster than you is a maniac? Your two camps have a similar bias.
AI can help you make well-engineered code, but you have to ask for it because it's not what it will do by default. Prompt it with "Figure out how this crappy piece of code really works and document it in depth. Propose viable refactorings that could improve code quality" and it will do a much better job than the usual vibe-coded result.
This is very much a "it's not the fall that kills you, it's the sudden stop at the end" sort of thing. (Same with the other variant I've heard, which is something like "no company has gone out of business because of tech debt".)
Code is as much a tool for developing and expressing conceptual models as it is for making computers do things. So not only does code quality have proximate impacts on engineering productivity and reliability, but, done well, it also improves the holistic design of the system you're building. You get better tools, faster, by putting some thought and care into your codebase and, especially, your core abstractions. Teams with good code move faster even in the short term and produce better tools and products.
Of course, it's not just a matter of code; you also need a culture that gives engineers the agency to make real, long-term decisions about what you're building (not just how) which, unfortunately, is rare to find in the modern tech industry :/ The dominant "high-output management" paradigm where code is seen as a virtually fungible "output" to be "delivered" loses the higher-order advantages of good code and good conceptual design, and leaves us with something much closer to the trade-off you describe. But there are other ways of approaching technical work that don't make this trade-off at all!
Modern harnesses are systems built with LLMs as one of many building blocks (incl. regex, test suites, linters). If it can be measured and verified, there's a good chance LLMs will optimize it.
This is not a new concept. Humans stopped writing "artful" assembly many years ago, because Lattner and others made it much more efficient to rely on LLVM than to hand-optimize assembly.
It's also been demonstrated in other domains within Google (4x4 matmul, silicon photonics, protein folding).
Interface heavy apps are not purely about objective function, they are about feel, comfort, usability - those apps will benefit heavily from humans. But subcomponents of these apps (eg. an algorithm to route packets efficiently) can often be better solved (somewhat objectively) by LLM-based solvers or other forms of RL.
However, writing assembly for the sake of art sounds rather interesting in 2026. Many of my favorite musicians and DJs are driving a resurgence in vinyl to help balance the computed future - and I think that's a great thing.
If it really were fuzzing and finding different candidate spaces, then I'd expect it to be good at things like dynamic programming, where I've only seen it fail. Usually I see it get stuck in a bad solution and just thrash around in that minimum. These are problems where we can construct a verifiable test space, and it will eventually wind up with a solution, but one that is thousands of lines long and uses no structure of the problem space.
The question is how much does the market value this, and how much it should value it.
For one-off scripts and software built for personal use, it doesn't matter. Go nuts. Move fast and break things.
But the quality requirement scales proportionally with how many people use and rely on the software. And not just users, but developers. Subjective properties like maintainability become very important if more than one developer needs to work on the codebase. This is true even for LLMs, which can often make a larger mess if the existing code is not in good shape.
To be clear, I don't think LLMs inevitably produce poor quality software. They can certainly be steered in a good direction. But that also requires an expert at the wheel to provide good guidance, which IME often takes as much, if not more, work than doing it by hand.
So all this talk about these new tools replacing the craft of programming is overblown. What they're doing, and will continue to do unless some fundamental breakthrough is reached, is make the creation of poor quality software very accessible. This is not the fault of the tools, but of the humans who use them. And this should concern everyone.
As stated by others, this is very false. Most if not all of the software I use is selected based on its disk/memory footprint and performance. Having a small disk/memory footprint and good performance at the same time is a good indicator of good code quality.
Moreover, after using computers for more than three decades, you get a feeling about the performance of a particular software suite. So an inefficient piece of code makes itself known in a loud way if you look the right way.
One of my favorite applications, Obsidian, generally performs well, but when you hit it just the right way (e.g. add a couple of PDFs and enable previewing), you can feel how sluggish it becomes.
Having a suite of well-written applications with a high performance/footprint ratio also allows me to do more with fewer resources and in less time. So, good code quality matters. It also almost guarantees the software suite will be maintained over a longer time.
Incidentally, I'm also in camp #2, and write my code with the same attention to detail. I have also written code which squeezed all performance from systems, approaching theoretical IPC limits of the processor the code is running on.
ZERO times has anyone even mentioned disk/memory footprint. Performance, maybe, but no hard limits were defined in any contracts. And even those were "these things have to be processed within 24 hours because the law says so", not microsecond precision.
Even Obsidian is 440MB. It's a markdown editor with a built-in javascript scripting system. There's no reason for it to be almost half a gigabyte. Zero people have checked the directory size and went "nah, too big, won't use it".
If that were true, Electron apps would not exist and everything would be native software. But alas, most modern products, even before vibe coding, have been horrible performance-wise.
Of course it depends on the context, but consumer facing products have been awful in terms of performance for a while now.
Vibe coding will fill in more "new feature" checkboxes, faster, but the level of quality averages out to something often mediocre, or worse (like all my OSS projects in which I experiment, because such projects are the training data). It skimps liberally on maintainability, accessibility, security, and privacy considerations.
Code is a liability, I want to have less of it at a higher abstraction level (for which natural language isn't a good fit due to inherent ambiguity). For products, simplicity and user utility is how I approach the problem when given wiggle room.
We are on HN, so there's bound to be many startup people that only need to bang out features to lure in users and then pass on that pile further onto someone else, when they cash out.
What I have seen however, are mid-managers+ that haven't coded in a decade or so, and now with LLMs they feel that they deliver equal quality results, whereas they have been so long out of the game and haven't picked up the modern skills on how to maintain and build applications.
Yes, there are risks (Lethal Trifecta and all that), but AI-assisted development by non-programmers isn't that much worse than letting the same people do complex macro/function/VBA setups in shared Excel sheets.
I'm running out of fingers counting the number of 100% vibe-coded applications we've built internally that save double-digit percentages of time from people's day-to-day work. All created by the people who use them, to fix a very specific workflow they'd had to do by hand over hours. Now it's the click of a button on a bespoke application they made.
“When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.”
If you aren't even sure if your idea is even gonna work, whether you have PMF, or the company will be around next year.. then yeah.. speed over quality all day long.
On the other hand, I've never done the startup thing myself, and tend to work on software projects with 10-20 year lifecycles, where code velocity maximalism leads to outages, excess compute cost, and reputational issues... good code matters again.
Re: "No one has ever made a purchasing decision based on how good your code is." Sonos very much could go out of business for agreeing with this line. I can tell you lots of people stopped buying their products because of how bad their code quality became with the big app change debacle. Lost over a decade of built up good will.
Apple is going through this lately with the last couple major OS releases across platforms and whatever is going on with their AI. This despite having incredible hardware.
> The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
That’s fine for people to argue those things.
My criticisms of AI are mainly
1. The principle of the GenAI approach
2. Political
The first point is about how stupid the GenAI approach is (I could link to the arguments). But I have left the door open for pure results, i.e. demonstrating that it (despite my belief) works in practice. So this is not about craftsmanship.
I’ve previously commented that I would respect a more principled approach even though it takes my craft.[1]
> Personally, I fall into the first camp.
Of course you do. Because...
> No one has ever made a purchasing decision based on how good your code is.
In these dichotomies the author typically puts himself forward as the pragmatist and the other side as the ones who care about things that are just irrelevant to the purchasing decision or whatever.
But the AI haters have made real arguments against AI, against the people behind AI, and so on. It’s not a matter of vibes. So maybe respond to those arguments? We don’t need another armchair lesson in psychological inclinations.
Be a pragmatist for all I care. But beware of the bloodless pragmatist who only sees what is, essentially, instant product gratification and not what comes after, or from the sides, or from below.
The future of software looks a lot more like factory production lines with a small group of architect-tier engineers working on a design with product management and then feeding it into the factory for prototyping and production.
If you're not an experienced late senior or principal engineer at your career stage by now there is basically no future for you in this industry. Lower end roles will continue to be reduced. People who can build and maintain the factory and understand its outputs are going to be the remaining high-value software talent.
The capabilities and limitations of your product are defined in part by how good the code is. If you write a buggy mess (whether you write it yourself or vibe code it), people aren't going to tolerate that unless your software has no competitors doing better. People very much do care about the results that good code provides, even if they don't care about the code as an end in itself.
Well, you certainly should. Those people made AI based coding a possibility in the first place.
Other times, I have something specific I want to accomplish, but I dread the amount of time it will take to make it happen.
Now, it is never that I don't know HOW to make it happen, it is that I know how, and I know how many steps it is and how many components there are to build to even get the simplest version running and I just dread it. I want the thing, but I don't want to spend the time to make the thing.
I have had so much fun recently making so many things that I have never gotten around to over the years, because I just couldn't justify the time.
I also have the time to tell the AI to add all the nice to haves, and handle all the edge cases that weren't worth the time before, etc.
I am having a blast. I still stop to write the fun bits when I want to, though. It is great because I only have to code the bits I want, that are fun.
Treating code as a means to an end doesn't guarantee success for your product any more than treating code as a craft does.
The two-camp construct is a tool to establish the believer as a member of the supreme one camp group; apart from the lesser campers. Their entire identity and self worth is built around one-camp membership.
>No one has ever made a purchasing decision based on how good your code is.
Disagree. Users tend to be very sensitive to quality, in software more than anything else. (Of course there isn't always perfect transmission from "user" to "purchaser" but most likely for an upstart product, your user will be your customer. You're not Oracle or Microsoft yet.)
In fact, quality matters now especially more than before because the barrier to entry has reduced. If you have a great idea, but executed it poorly, it attracts second movers like a magnet. Quality is intimidating. Quality is moat.
Good software engineers (your camp 2) can anticipate the ways in which bad code results in a poor quality product, or one that is difficult to debug and evolve. That is the crux of good code. Good code is not craftsmanship in the same sense as making a beautiful painting or making colorful notes.
Perhaps this is an antiquated concept which has fallen out of favor in Silicon Valley, but code doesn't just run in an imaginary world where there are no consequences and everything is fun all the time. You are responsible for the product you sell. If you sell a photo app that has a security bug, you are responsible for your customers' nude photos being leaked. If you vibe-code a forum and store passwords in plaintext, you are responsible for the inevitable breach and harm.
The "general public" might not care, but that is only because the market is governed by imperfect information. Ultimately the public are the ones that get hurt by defective products.
So I’m confused. I really am, help me understand your world view.
I think more often they simply picked the wrong programming language as a target. In my experience, AI is especially bad at writing Typescript/Javascript, which happens to overlap with the most widely used language. I have negative things to say about AI too when developing in that ecosystem. If I only ever used AI in that ecosystem I'd probably tell you it is useless with the rest of them.
But my daily work sees me working in more than one language and when I am in some other language environments I have no reservations about AI whatsoever. AI vs good code is no longer even at odds with each other. In those certain languages, the models write good, stable, production-ready code pretty much all the time. It is really quite amazing.
I am in both camps. Always have been.
Code janitors about to be in high demand. We’ve always been pretty popular with leadership and it’s gonna get even more important.
Treat code design and architecture as the thing that lets your slop cannons (90% of engineers, even pre-AI) move fast without breaking things
My output is org velocity.
I'm currently of the opinion that humans should be laser focused on the data model. If you've got the right data model, the code is simpler. If you've got the relevant logical objects and events in the database with the right expressivity, you have a lot of optionality for pivoting as the architecture evolves.
It's about that solid foundation - and of course lots of tests on the other side.
Amen, slow and steady and the feature flywheel just keeps getting faster.
I am stealing that phrase haha
Everybody else dealing with AIgen is suffering from the AI spitting out the end product. Like if we asked the AI to generate the compiled binary instead of the source.
Artists can't get AIgen to make human-reviewed changes to a .psd file or an .svg, it poops out a fully formed .png. It usurps the entire process instead of collaborating with the artist. Same for musicians.
But since our work is done in text and there's a massive publicly accessible corpus of that text, it can collaborate with us on the design in a way that others don't get.
In software, the "power of plain text" has given us a unique advantage over other kinds of creative work. Which is good, because AIgen tends to be clumsy and needs guidance. Why give up that advantage?
There are two reasons for this. One is that the people who make purchasing decisions are often not the people who suffer from your bad code. If the user is not the customer, then your software can be shitty to the point of being a constant headache, because the user is powerless to replace it.
The other reason is that there's no such thing as "free market" anymore. We've been sold the idea that "if someone does it better, then they'll win", but that's a fragile idea that needs constant protection from bad actors. The last time that protection was enacted was when the DOJ went against Microsoft.
> Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
Any semblance of accountability for that has been diluted so much that it's not worth mentioning. A bug someone wrote into some cloud service can end up causing huge real-world damage in people's lives, but those people are so far removed from the suits that made the important decisions that they're powerless to change anything and won't ever see that damage redressed in any way.
So yeah, I'm in camp #2 and I'm bitter about AI, because it's just accelerating and exacerbating the enshittification.
Someone on HN wrote recently that everyone who's foaming at the mouth about how AI helps us ship faster is forgetting that velocity is a vector -- it's not just about how fast you're going, but also in what direction.
I'd go further and say that I'm not even convinced we're moving that much faster. We're just cranking out the code faster, but if we actually had to review that code properly and make all the necessary fixes, I'm pretty sure we would end up with a net loss of velocity.
If you have buggy software, people don’t use it if there are alternatives. They don’t care about the code but hard to maintain, buggy code will eventually translate to users trying other products.
The thing here was that if you have two boxes that take the same input and produce the same output at the same speed, do you care what the insides look like?
What if one is delivered in 4 days and the other in 30 days and costs more? Which one will you pick?
But I am not a fan of the code it writes most of the time. I want my code to read and behave a certain way. I cannot submit that code, even if it works, if I can't explain it or just don't like it. I then iterate over that code myself, or ask the AI to, until it has the shape I agree with.
For my personal side projects I don't care as much what the code looks like, as long as it works correctly and is easily modifiable. But for work, it remains my responsibility no matter which tool was used.
People who think developers fall into one of two camps.
And people worth listening to.
I'm weird: I'm part of camp 2, but I think AI can be used to really craft some interesting things. While I appreciate camp 1, camp 2 is what produces better codebases that are easier to maintain. Myself and others have realized that the best practices for humans are also the best practices for making AI models fit your code.
- revenue/man-hour, features shipped/man-hour, etc.
- ms response time, GB/s throughput, number of bugs actually shipped to customers, etc.
People in the second camp use AI, but it's a lot more limited and targeted. And yes, you can always cut corners and ship software faster, but it's not going to be higher quality by any objective metric.
Because to determine good code, you need to see it and I'd presume most open source code is free.
Code quality isn't just a fetish. It has real implications for security and the final product.
I've also found that unmaintainable codebases aren't just hard to maintain for humans. LLMs seem to struggle with them just as much
And then they can ship those products much faster than before, because human hours aren't being eaten up writing out all of these abstractions and tests.
The better tooling will let the AI iterate faster and catch errors earlier in the loop.
Right?
Code quality makes the difference between a janky system that works most of the time and a rock solid system that is an enjoyment to use.
QA can only apply duct tape. If your state management isn’t clean, the UX will suck. If your functions aren’t clean, you will keep chasing bugs.
Luckily, AI is capable of writing good code. Today, that still requires some amount of hand holding, but it’s getting better.
That's, however, what makes for stable systems, deeply knowledgeable engineers, and structurally building the basis for the future.
If all you care about is getting money for your product slop, it's no different from late-night-marketed crap, or fast fashion...
But many ppl will refuse a purchase because the product breaks randomly
Being created by a human doesn't imbue any specific amount of reliability to a piece of software.
> No one has ever made a purchasing decision based on how good your code is.
People make purchasing decisions on the availability of source code all the time, preferring the source to be available and to be able to use it. It is safe to assume that they can make purchase decisions on the quality of source code too, all else being equal. So instead of criticizing the gentleman for his dichotomy, I feel like adding more states to it to complete the picture. And I mean it; this is not making fun of someone. It means I tamed myself into stopping premature optimization, knowing the gigahertz won't care about me saving a cycle, and that I might even hinder the masterfully crafted compilers from optimizing even more.
3. Partly awesome, partly not so much, but it doesn't need to be awesome.
4. I myself have to understand this masterpiece of human thinking six months and more from now, and after a 20-hour stint I marvel at the result, so better to comment before committing the code into oblivion.
5. Embarrassing, but people are delighted.
6. This made headlines years ago, but some code doesn’t age well.
7. OMG, the OOP might look right, but a new paradigm rushes me into a refactoring frenzy and makes the code look better without breaking any features!
8. I used a tool to check for bottlenecks and it runs well, but looks like crap. So what do I do?
9. Is loop unrolling still a thing or not? Do compilers get headaches just like I do? Do they really care, or do they simply follow orders or adjust to the target platform and settings?
But also I know when to put up and make the damn thing work.
I routinely close tabs when I sense that low-quality code is wasting time and resources, including e-commerce sites. Amazon randomly cancelled my account so I will never shop from them. I try to only buy computers and electronics with confirmed good drivers. Etc.
Another perspective: if the quality of your code has no bearing on the quality of the product, then your code/product clearly isn't doing much useful, and perhaps we could do without it.
Because the ones that sell crappy code don’t sell to people that can tell the difference.
You think I’d pay for Jira or Confluence if it wasn’t foisted upon me by a manager that has got it in with the Atlassian sales rep?
I don’t even need to see Atlassian’s source code to know it’s sh*t.
You (are required to) treat your code as having to fulfill both functional requirements and declared non-functional requirements, including measures of maintainability, reliability, performance, and security and (regulatory/legal) compliance.
If I do a bad job, I get a bunch of bug reports, I get called out for writing bugs, etc. We've been pushed to use AI, and it's hurt more than it's helped with our code base.
1. You treat the house as a means to an end to make a living space for a person.
2. You treat the building construction itself as your craft, with the house being a vector for your craft.
The people who typically have the most negative things to say about buildings fall into camp #2 where cheap unskilled labor is streamlining a large part of what they considered their art while enabling people in group #1 to iterate on their developments faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good the pipes inside the walls are.
The general public does not care about anything other than the square footage and color of your house. Sure, if you mess up and one of the houses collapses then that'll manifest as an outcome that impacts the home owner negatively.
With that said, I do have respect for people in the latter camp. But they're generally best fit for homes where that level of craftsmanship is actually useful (think: mansions, bridges, roads, things I use, etc).
I just feel like it's hard to talk about this stuff if we're not clear on which types of construction we're talking about.
It might do that, if it had any basis in reality. Why do you believe it does?
I've always found myself at the clean code end of the spectrum (which also means simple and flexible to me) because it makes it easier to be flexible for customer needs. So I like good code, but it's a means to an end.
It isn’t that though; the “end” here is making money, not building products for users. Typically, people who are making products for users care about the craft.
If the means-to-end people could type words into a box and get money out the other side, they would prefer to deal with that than products or users.
That’s why AI slop is so prevalent — the people putting it out there don’t care about the quality of their output or how it’s used by people, as long as it juices their favorite metrics: views, likes, subscribes, ad revenue, whatever. Products and users are not in scope.
I don't think all means-to-end people are just in it for money, I'll use the anecdote of myself. My team is working on a CAD for drug discovery and the goal isn't to just siphon money from people, the goal is legitimately to improve computational modeling of drug interactions with targets.
With that in mind, I care about the quality of the code insofar as it lets me achieve that goal. If I vibe coded a bunch of incoherent garbage into the platform, it would help me ship faster but it would undermine my goal of building this tool since it wouldn't produce reliable or useful models.
I do think there's a huge problem with a subset of means-to-end people just cranking out slop, but it's not fair to categorize everyone in that camp this way ya'know?
I don’t care what kind of steel you used to design my car, but I care a great deal that it was designed well, is safe, and doesn’t break down all the time.
Craft isn’t a fussy thing.
I got my company to switch from GitHub to GitLab after repeated outages. I've always moved companies to away from using GCP or Azure because of their reliability problems.
This is a really funny comment.
As such AI is a net negative as it would be in writing a novel or making any other kind of art.
Also, it’s more than an art or a craft. It’s identity. Many people hold their coding skill as an identity they honed over many years, and it put them in the ranking they are in today. This kind of destruction of identity by AI is what causes people to deny reality.
This stuff also covers your job, even if you don’t hold coding as an identity it is still responsible for many people’s livelihoods. Like ai is convenient right now but what happens when it gets even more convenient? What happens to your job and your life especially if software was all you did for over a decade?
I’m in camp 2 and I can’t lie to myself about what’s happening. I’ve embraced AI and I now vibe code daily, even though I was originally an artistic functional programmer. This ability comes at a high cost. I’m able to do it because I hold zero identity. I don’t identify with anything and I don’t put too much pride into anything I do, or at least if I put pride into something, I’m always conscious of severing the entire thing at a moment’s notice.
Lying to oneself is a powerful ability but it becomes a liability when society goes through an intense paradigm shift. This is what is happening now.
The developers don't care about that either. If developers cared, the whole npm ecosystem wouldn't exist.
I think that the real two camps here are those who haven't carefully - and I mean really carefully - reviewed the code the agents write and haven't put their process under some real stress test vs those who have. Obviously, people who don't look for the time bombs naturally think everything is fine. That's how time bombs work.
I can make this more concrete. The program wants to depend on some invariant, say that a particular list is always sorted, and the code maintains it by always inserting elements in the right place in the list. Other code that needs to search for an element depends on that invariant. Then it turns out that under some conditions - due to concurrency, say - an element is inserted in the wrong place and the list isn't sorted, so one of the places that tries to find an element in the list fails to find it.

At that point, it's a coin toss whether the agent will fix the insertion or the search. If it fixes the search, the bug is still there for all the other consumers of the list, but the testing didn't catch that. Then what happens is that, with further changes, depending on their scope, you find that some new code depends on the intended invariant and some doesn't. After several such splits and several failed invariants, the program ends up in a place where nothing can be done to fix a bug. If the project is "done" before that happens - you're in luck; if not, you're in deep, deep trouble.

But right up until that point, unless you very carefully review the code (because the agents are really good at making code seem reasonable under cursory scrutiny), you think everything is fine. Unless you go looking for cracks, every building seems stable until some catastrophic failure, and AI-generated code is full of cracks that are just waiting for the right weight distribution to break open and collapse.
So it sounds to me that the people you think are in the first camp not only just care how the building is built as long as it doesn't collapse, but also believe that if it hasn't collapsed yet it must be stable. The first part is, indeed, a matter of perspective, but the second part is just wrong (not just in principle but also when you actually see the AI's full-of-cracks code).
Invariants must be documented as part of defining the data or program module, and ideally they should be restated at any place they're being relied upon. If you fail to do so, that's a major failure of modularity and it's completely foreseeable that you'll have trouble evolving that code.
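To make the sorted-list example concrete, here's a minimal sketch (the class and names are hypothetical, not from the comments above) of stating the invariant once where the data is defined, preserving it in the one writer, and relying on it in the reader:

```python
import bisect
import threading

class SortedRegistry:
    """INVARIANT: self._items is always sorted ascending.

    Every writer must preserve this; every reader (find) relies on it
    for binary search. The lock guards the invariant under concurrency,
    the failure mode described above.
    """

    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def insert(self, value):
        with self._lock:
            # insort keeps the list sorted; a plain append would
            # silently break every reader that binary-searches.
            bisect.insort(self._items, value)

    def find(self, value):
        with self._lock:
            # Binary search is only correct because of the invariant.
            i = bisect.bisect_left(self._items, value)
            return i < len(self._items) and self._items[i] == value
```

If a writer ever degraded to a plain append, `find` would start missing elements that are present, which is exactly the split-invariant failure described above: fixing the search instead of the insertion leaves every other reader broken.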
John Carmack has talked about it in a podcast a few years ago, and he's the closest popular programmer that I can think of who was simply obsessed with milking every tiny ounce of GPU performance, yet none of his effort would matter if Doom and Quake weren't fun games.
This sounds more like a product owner not a developer
It is strange, but not really upsetting to me, that I am not particularly anal about the code Claude is generating for me anymore but that could also be a function of how low stakes the projects are or the fact nothing has exploded yet.
RollerCoaster Tycoon.
> The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
People care how fast you're able to ship updates, new features, and bugfixes. If you're working with a pile of vibe-coded spaghetti slop it's going to take longer to deliver these.
But I will demand my money back or sue you if your crappy code leaks my personal information, destroys my property, performs worse than advertised, or otherwise harms me in some way.
There was sloppy code before LLMs. It’s what they were trained on. And it’s why they generate slop.
All that code that was rushed out to meet an arbitrary deadline made up by the sales team, written by junior and lazy senior developers, pushed by the, “code doesn’t matter,” folks. Code written by the enterprise architecture astronauts with a class hierarchy deeper than the Mariana Trench. A few years down the line you get bloated, slow, hard to maintain spaghetti piles of dung. Windows rendering text that stutter when you scroll them. Virtual keyboards that take seconds to pop up. Browser tabs that take more available memory than was available to send astronauts to the moon and back.
When humans write it, you generally have a few people on a team who are concerned with these things. They try to rein in the slop-generating, "always be shipping," people. You need a mix of both. Because each line of code is a liability as much as it's a new feature.
> 1. You treat your code as a means to an end to make a product for a user.
> 2. You treat the code itself as your craft, with the product being a vector for your craft.
Among the vocal devs, maybe. Most devs choose a trade-off between #1 and #2, leaning heavily towards #2.
And the reason is, very few people actually want the output of their labour to be poor, no matter how superficially good it looks.
I find, like the poster below me said, that the people presenting the false dichotomy you present are desperate to legitimise their production of lovecraftian code horrors.
It's a trick, a verbal one usually, that people who espouse woo and who know that they are BSing, use to sort of "borrow" legitimacy from a field that is already respectable. Like... ghost-believers referring to themselves as occult scientists. They throw in the word "scientist" in there to borrow the legitimacy and respectability of actual scientists[1].
Throwing in "user delight" or "useful to the user" into their arguments for vibe-coding is their way of borrowing the respectability of actual developers, who had always been developing for an actual user, and who cared about their user enough to target that specific use-case.
The folks in #1 are simply borrowing what they can from the respectable practitioners to paper over the fact that all they care about is themselves, not actual users.
The clear majority of them are hoping to hit a jackpot; the borrowed terms, phrases and words are simply a poor attempt to cover up their naked greed.
---------------------------
[1] There's probably a joke in there somewhere about "software engineers" :-)
When I make a purchasing decision, I expect the payment to go through quickly and correctly, and for whatever I purchase to arrive in reasonable time. All of this rests on the reputation of software being solid. If a user hears a whiff of a purchase not being executed correctly, of money or goods going somewhere else, that is a death sentence for your company.
Industry is now pushing for an agentic web where agents can do this on your behalf. But if we have slop foundations and then add unstable models that can hallucinate and make mistakes on top of them, it's just a recipe for catastrophe. I think relegating 2) to the category of only mission-critical software ignores how much reliability goes into the everyday services people use.
Just because one falls in the "Cha bu duo" camp and (potentially) looks down on the "Kaizen" types doesn't mean the two products are equivalent.
That also doesn't mean that slop / cha bu duo / made in china products are bad, mind you. They have their place, and occasionally a Kaizen approach would be detrimental to getting "something" done quick and dirty that will likely work ok anyway. The danger is in believing that just because they're "ok" this means they are equivalent (or at least largely overlap) with the more refined products, which is demonstrably false and can be a dangerous attitude to have.
Shit bloated code is one of the reasons Epic Launcher is extremely behind in market share when compared to Steam.
Sure, they ship their product fast. They can iterate faster than Valve. They also add technical debt with each iteration.
Also: we are almost all using a Chrome derived browser instead of Firefox, old IE, old Opera, because of performance and quality. They just won the internet because of the quality of their code. Besides that, all browsers let you browse the internet.
When people can choose, they choose quality most of the time.
Did that guy make it because Rust, and because he's passionate about that sort of thing? Probably.
But it's fucking fast. So did he sell out to OpenAI? Of course he did.
And thusly, both camps.
> The general public does not care about anything other than the capabilities and limitations of your product.
It's absolutely asinine to say the general public doesn't care about the quality and experience of using software. People care enough that Microsoft's Windows director sent out a very tail-between-legs apology letter due to the backlash.
It's as it always has been, balancing quality and features is... well, a balance and matters.
But you can have an extremely well designed product that functions flawlessly from the perspective of the user, but under the hood it's all spaghetti code.
My point was that consuming software as a user of the product can be quite different from the experience of writing that software.
Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.
I'd just be careful to separate code elegance from product experience, since they are different. Related? Yeah, sure. But they're not the same thing.
Why did WhatsApp grow so big while thousands of previous chat apps didn't? Code quality (scalability).
Oh wait, they're the biggest car builder in the world.
I don't believe there is a dichotomy, or even a spectrum of developers, but a complex landscape. Of course, that is also a bald assertion, but it is a weaker claim, and no less valid than the original assertion.
That said, independent of assertions about developer classification, in my experience there is a clear connection between the quality of the software and the quality of the product, and I've often seen evidence of poor quality software compromising the product and user experience. Poor quality leaks out. Remember BSOD? Maybe not.
I've become hesitant to unleash coding agents simply because the code base ends up looking like the victim of drive-by coding, littered with curious lambda functions, poor encapsulation, etc. The only thing I use coding agents for is exploratory and throwaway code, like one off scripts. I love coding agents for all the ancillary work, I protect the critical path like mamma bear her cubs.
Coding agents make all the second order work easier so I have more bandwidth to focus on the critical parts. Again, software is a landscape, but at least for my work I can't abdicate parts to a coding agent and "works" is an inadequate standard. I need bullet-proof and unfailingly correct.
Token generation definitely produces a certain stream-of-consciousness, Kerouac-as-programmer style. As long as I don't ever have to maintain or modify the code myself, am not concerned about cost control (especially in cloud environments where I am billed by compute cycles), I am fine with quick and dirty and done. I sigh when I see what should be a six line change in my head balloon to 300 lines of generated code, revert, and write the six lines myself. Would take longer to write the prompt to get the coding agent to fix it than fix it myself. It would grind away for several minutes and burn up an astonishing number of tokens for simple fixes.
Anything linguistic the coding agents do well. Want to rename a variable in 300 different source files? I mean, it is overkill to be running a 200B parameter model to avoid writing the sed script I might write otherwise, but who am I to turn my nose up at my work being subsidized by investors? I don't think that economic model will go on forever.
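For the rename case, the sed script alluded to above could look roughly like this (GNU grep/sed assumed; `old_name`, `new_name`, and the file layout are placeholder examples, not from the comment):

```shell
# Demo setup: a tiny source tree with a variable to rename.
mkdir -p src
printf 'total = old_name + old_name_2\n' > src/a.py

# Rename old_name -> new_name in every .py file that mentions it.
# \b anchors whole-word matches, so old_name_2 is left untouched.
grep -rl --include='*.py' '\bold_name\b' src/ \
  | xargs sed -i 's/\bold_name\b/new_name/g'

cat src/a.py   # total = new_name + old_name_2
```

A model can of course handle the fuzzier cases a word-boundary regex can't (renames that span string literals, comments, or casing conventions), which is presumably where the 200B parameters stop being pure overkill.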
Any higher abstraction is being cargo-culted from language. This is where LLMs are weakest, because they don't understand abstraction or encapsulation, only the artifacts as expressed in language.
Outside of exploratory and throwaway code, I use inline prompting to precisely target and scope changes, and then identify the cleanup and refactoring required to bring the code to acceptable quality. Although I do a lot of cleanup by hand as well. Rather than tell the coding agent that a lambda function wrapping a one liner that is used in one place in the code is dumb, I'll just remove the lambda myself. The coding agent can't adopt and generalize lessons from code review comments the way a human software engineer can -- I am forced to burn tokens every single time to get it to dial back its insane love affair with lambda functions. Again, not a big deal while costs remain subsidized.
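The lambda cleanup being described is usually this small; a contrived before/after with invented function names:

```python
# Agent-style output: a lambda wrapping a one-liner, used exactly once.
normalize = lambda s: s.strip().lower()

def slug_before(title: str) -> str:
    return normalize(title).replace(" ", "-")

# Hand cleanup: inline the expression at its single call site.
def slug_after(title: str) -> str:
    return title.strip().lower().replace(" ", "-")
```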
Operations and maintenance overhead in the type of software I've written through my career dominates over programming cost. Telecom, aerospace, e-commerce, etc. Systems are long lived. Outages are expensive. Regulatory compliance is a large factor. I've worked in shops with 70% cost overhead in operations. A $50K a month cloud compute bill can be reduced to $15K. There's usually some low hanging fruit and poor quality software doesn't account for all of this, but it is a significant fraction. Like a poorly written termination condition in a container that essentially was a busy wait burning thousands of dollars a month doing nothing (true story).
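The busy-wait anecdote is a common shape of bug, and worth making concrete; a minimal sketch of the two termination-wait shapes, with invented names (the real container logic was presumably more involved):

```python
import threading

stop_event = threading.Event()

def wait_busy():
    # Burns a full CPU core while "doing nothing": the loop body never blocks,
    # so in a cloud environment you are billed for every iteration.
    while not stop_event.is_set():
        pass

def wait_blocking(timeout=None):
    # Sleeps in the kernel until signalled: near-zero CPU while idle.
    stop_event.wait(timeout)
```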
I am currently writing a trading system, and can't afford to hallucinate a bunch of bad trades. Like the developer landscape, the software landscape is complex and not uniform. So I will concede there are probably many types of software outside of my own experience that can be implemented largely by coding agents. Low consequence. Marginal operational overhead.
I might assert that coding agents forte is autogenerating technical debt, but then I am just being a wag. Less waggishly I would say use of coding agents is subject to engineering judgement, like any tool. Who is going to read that headline or give it a billion dollar valuation?
Yes, some people left to their own devices would take twice as long to ship a product half as buggy only to find out the team that shipped early has taken a massive lead on distribution and now half the product needs to be reworked to catch up.
And some people left to their own devices will also ship a buggy mess way too early to a massive number of people and end up with zero traction or validation out of it, because the bugs weren't letting users properly experience the core experience.
So we've established no one is entirely right, no one is entirely wrong, it's yin/yang and really both sides should ideally exist in each developer in a dynamic balance that changes based on the situation.
-
But there's also a 3rd camp that's the intersection of these: You want to make products that are so good or so advanced *, that embracing the craft aspect of coding is inherent to actually achieving the goal.
That's a frontend where the actual product is well outside typical CRUD app forms + dashboard and you start getting into advanced WebGL work, or complex non-standard UI state that most LLMs start to choke on.
Or needing to do things quicker than the "default" (not even naive) approach allows for UX reasons. I ran into this using Needleman-Wunsch to identify UI elements on return visits to a site without an LLM request adding latency: to me that's the "crafty" part of engineering serving an actual user need. It's a completely different experience getting near instant feedback vs the default today of making another LLM request.
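Needleman-Wunsch is a small dynamic program, which is part of why it can run locally with near-instant feedback; a textbook scoring sketch (the scoring constants are arbitrary, and the UI-matching use above would presumably feed in serialized element signatures rather than characters):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between sequences a and b (Needleman-Wunsch)."""
    rows, cols = len(a) + 1, len(b) + 1
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            dp[i][j] = max(diag, dp[i-1][j] + gap, dp[i][j-1] + gap)
    return dp[-1][-1]
```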
And it's this 3rd camp's feedback on LLM development that people in the 1st camp wrongly dismiss as being part of the 2nd, craft-maxxed group. For some use cases, slop is actually terminal.
Intentionally contrived example, but if you're building a Linear competitor and you vibecode a CRDT setup that works well enough, but has some core decisions that mean it'll never be fast enough to feel instant and frontend tricks are hiding that, but now users are moving faster than the data and creating conflicts with their own actions and...
You backed yourself into a wall that you don't discover until it's too late. It's only hypervigilance and strong taste/opinion at every layer of building that kind of product that works.
LLMs struggle with that kind of work right now and what's worrying is, the biggest flaw (a low floor in terms of output quality) doesn't seem to be improving. Opus 4.6 will still try to dynamically import random statements mid function. GPT 5.3 tried to satisfy a typechecker by writing a BFS across an untyped object instead of just updating the type definitions.
RL seems to be driving the floor lower actually as the failure modes become more and more unpredictable compared to even GPT 3.5 which would not even be "creative enough" to do some of these things. It feels like we need a bigger breakthrough than we've seen in the last 1-2 years to actually get to the point where it can do that "Type 3" work.
* good/advanced to enable product-led growth, not good/advanced for the sake of it
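For readers who haven't touched CRDTs: the data structures themselves can be tiny. A minimal last-writer-wins register as an illustration (not anyone's actual design; the point of the Linear example above is that the hard decisions live around code like this, not in it):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LWW:
    """Last-writer-wins register: a deliberately minimal CRDT sketch."""
    value: object
    ts: float   # logical timestamp
    node: str   # tie-breaker so merge is deterministic

    def merge(self, other: "LWW") -> "LWW":
        # Commutative, associative, idempotent: take the max by (ts, node).
        return self if (self.ts, self.node) >= (other.ts, other.node) else other
```

The merge itself is trivial; the latency, granularity, and local-echo decisions that make it feel instant (or not) are where vibecoding paints you into the wall.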
We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.
AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.
Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?
"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
I'm glad to see that the author of the article is putting an emphasis on simplicity here, especially given the nature of their business. Those that fully embrace the "code doesn't matter" approach are in for a world of hurt.
Long-term, I expect there will be more tooling and model advancements to help us in this regard - and there will certainly be a big economic incentive for that soon. But in the meantime it feels like a dam has been breached and we're just waiting for the real effects to become manifest.
I'm not saying it's wrong, because I haven't actually looked for alternative sources, just that the source isn't great.
That new design-level representation will be code.
It will need to be code, because prompts, while dense, are not nearly deterministic enough.
It will need to be much higher level code, because current code, while deterministic, is not nearly dense enough.
There needs to be more design representation indeed.
The trouble is people don't want to bother reviewing the changes.
Whatever the hell economics was supposed to do, right now it seems to be causing every industry to produce worse products, lay off more people, and concentrate wealth in an aristocratic subset of the population, and this has been going on for the better part of my entire lifetime. If we're to reverse this trend, we need to stop pretending that economics is a natural force and remember that it is a complex system made of policy decisions that can in fact be the wrong ones
The whole business strategy for those companies is to be the one big monopolists that is left standing. That is why they are bleeding money offering token prices well beyond market rate so that they can grow.
Plus they can always lobby the state to ban foreign competition for security reasons.
This is possibly the dumbest version of an "economic incentives" argument. Current code is the result of current economic incentives. It is a mystery to me why making code generation cheaper will make it more "good" in any way, instead of being either more of what we have now, or worse.
Why build each new airplane with the care and precision of a Rolls-Royce? In the early 1970s, Kelly Johnson and I [Ben Rich] had dinner in Los Angeles with the great Soviet aerodynamicist Alexander Tupolev, designer of their backfire Bear bomber. 'You Americans build airplanes like a Rolex watch,' he told us. 'Knock it off the night table and it stops ticking. We build airplanes like a cheap alarm clock. But knock it off the table and still it wakes you up.'...The Soviets, he explained, built brute-force machines that could withstand awful weather and primitive landing fields. Everything was ruthlessly sacrificed to cut costs, including pilot safety.
We don't need to be ruthless to save costs, but why build the luxury model when the Chevy would do just as well? Build it right the first time, but don't build it to last forever. - Ben Rich in Skunk Works

If a technology to build airplanes quickly and cheaply existed and was made available to everyone, even to people with no aeronautical engineering experience, flying would be a much scarier ordeal than it already is.
There are good reasons for the strict safety and maintenance standards of the aviation industry. We've seen what can happen if they're not followed.
The fact that the software industry doesn't have similar guardrails is not something to celebrate. Unleashing technology that allows anyone to create software without understanding or even caring about good development practices and conventions is fundamentally a bad idea.
It takes a lot of work to make cheap, low precision parts work together reliably. The Rolex has it easy, all the parts are precisely built at a great cost and everything fits perfectly. With the cheap alarm clock, you don't know what you will get, so you have to account for every possible defect, because you won't get anything better with your budget and the clock still needs to give you an idea about what time it is.
The parallel in software would be defensive programming, fault tolerance, etc... Ironically, that's common practices in critical software, and it is the most expensive kind of software to develop, the opposite of slop.
It would make sense to me that a parallel mechanism could apply to Soviet engineering. If material and technologically advanced capital are scarce, but engineers are abundant, you would naturally spend more time doing proper engineering, which means figuring out how to squeeze the most out of what you have available.
aka "fitting".
I wrote a blog on why Soviet-style engineering is bad https://blog.est.im/2026/stderr-04
> Everything was ruthlessly sacrificed to cut costs, including pilot safety.
If we translate this analogy back to AI driven software development, what would be the equivalent of "pilot safety"?...
(Confession: "good code will still win" was my suggestion- IIRC they originally had "Is AI slop the future?". You win some you lose some.)
We are at the point where a single class can be dirty, but its API should be clean. There's no point reviewing the internals of a class anymore; I'm more or less sure they would work as intended.
The next step is the microservice itself: its API should be clean, but the internals can be whatever. We are 10% of the way there.
That's an issue I have with Claude actually. I found it very good at breaking abstractions to get the job done. This is what I'd call slop (more so than the class internals).
What if your AI uses an O(n) algorithm in a function when an O(log n) implementation exists? The output would still be "correct"
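Both versions below return the same answer, which is exactly why "correct" is too weak a bar; a small illustration:

```python
import bisect

def find_linear(sorted_xs, target):
    # O(n): scans every element even though the input is sorted.
    for i, x in enumerate(sorted_xs):
        if x == target:
            return i
    return -1

def find_binary(sorted_xs, target):
    # O(log n): exploits the sort order the linear version ignores.
    i = bisect.bisect_left(sorted_xs, target)
    return i if i < len(sorted_xs) and sorted_xs[i] == target else -1
```

A test suite that only checks outputs passes both; only a reviewer (or a performance test) notices the difference.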
No, unfortunately. In a past life, in response to an uptime crisis, I drove a multi-quarter company-wide initiative to optimize performance and efficiency, and we still did not manage to change the company culture regarding performance.
If it does not move any metrics that execs care about, it doesn't matter.
The industry adage has been "engineer time is much more expensive than machine time," which has been used to excuse way too much bloated and non-performant code shipped to production. However, I think AI can actually change things for the better. Firstly, IME it tends to generate algorithmically efficient code by default, and generally only fails to do so if it lacks the necessary context (e.g. not knowing that an input is sorted.)
More importantly though, now engineer time is machine time. There is now very little excuse to avoid extensive refactoring to do things "the right way."
Performance can be a direct target in a feedback loop and optimised away. That's the easy part. Taking an idea and poof-ing a working implementation is the hard part.
Test what you care about. If you care about performance, then test your performance. Otherwise performance doesn't matter.
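"Test your performance" can be as literal as a budget assertion in the test suite. A sketch of one way to do it (the helper names are invented, and wall-clock budgets need generous margins or they make CI flaky):

```python
import time

def _timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def assert_under_budget(fn, budget_s, repeats=3):
    """Fail if fn's best-of-N runtime exceeds the budget."""
    best = min(_timed(fn) for _ in range(repeats))
    assert best <= budget_s, f"{fn.__name__}: {best:.4f}s > {budget_s}s budget"
```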
But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.
You are reinventing the wheel again with yet another form of reinforcement learning. I don't use any form of LLM assistance for coding, but if I have to continually tell it what to do, how to do it, what not to do, and what assumptions to make, I would rather stimulate my neurons more by doing the damn thing myself.
The narrative of "Yeah it will do everything, provided you tell it how to do everything!" seems baseless, personally. Even if you emulate the smartest human possible, can you emulate an idiot?
Did the best processor win? no x86 is trash
Did the best computer language win? no (not that you can pick a best)
The same is true pretty much everywhere else outside computers, with rare exception.
There is an abundance of mediocre and even awful code in products that are not failing because of it.
The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebases.
When you work with F500s you end up seeing code and culture that is absolute balls and that I would never work directly for all the time. And yet roles are always filled. And when the economy gets bad, they have decent engineers.
I call it the fast food quality theory of economics. When the economy is good, low pay jobs tend to have low quality employees and it shows in their products. When the economy gets bad higher quality employees end up downgrading because of layoffs and the quality of these low tier jobs improves.
Economic forces are completely irrelevant to the code quality of AI.
> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies
Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something because that's what they are designed to do, but it doesn't mean faster or better code.
AI is rubber duck that can talk back. It's also a natural language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else because investors completely misunderstand how software development works and are still in denial about it.
Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.
I don't fully agree with this optimistic view. Unfortunately, for now, coding agents produce code that, unless further optimized at a human's request, often carries more complexity than necessary.
It's true that this requires more computational effort for the agents themselves to debug or modify it, but it's also true that the computational cost is negligible compared to the benefit of having features working quickly.
In other words: agents quickly generate hyper-complex and unoptimized code. And the speed of delivery provides more immediate benefits than the costs resulting from bad code.
On the other hand, it's also true that the "careful eye" of an experienced developer can optimize and improve the output in a few simple iterations.
So overall (and unfortunately), "bad code", if it immediately works, can win against (or alongside) good code.
- Simple and easy to understand
- Easy to modify”
In my career at fast-moving startups (scaling seed to series C), I’ve come to the same conclusion:
> Simple is robust
I’m sure my former teams were sick of me saying it, but I’ve found myself repeating this mantra to the LLMs.
Agentic tools will happily build anything you want, the key is knowing what you want!
Every abstraction adds some complexity. So maybe the PoC skips all abstractions. Then we need to add a variant to something. Well, a single if/else is simpler than an abstract base class with two concrete implementations. Adding the 3rd as another if clause is simpler than refactoring all of them to an ABC structure. And so on.
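That progression reads roughly like this in code; a toy example (the shapes and names are arbitrary):

```python
from abc import ABC, abstractmethod

# Variant 1: the simple branch. Adding a third case is one more elif.
def area_branch(kind: str, size: float) -> float:
    if kind == "square":
        return size * size
    elif kind == "circle":
        return 3.14159 * size * size
    raise ValueError(kind)

# Variant 2: the abstraction. More ceremony now, but new cases
# no longer touch existing code.
class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

class Square(Shape):
    def __init__(self, side): self.side = side
    def area(self): return self.side * self.side

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r * self.r
```

With two cases the if/else is clearly simpler; the judgment call is predicting whether case four, five, and six are coming.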
“Simple” is relative. Investing in a little complexity now can save your ass later. Weighing this decision takes skill and experience
Sure maybe its fast to write that simple if statement, but if it doesn't capture the deeper problem you'll just keep running head first into edge cases - whereas if you're modelling the problem in a good way it comes as a natural extension/interaction in the code with very little tweaking _and_ it covers all edge cases in a clean way.
Like I used 100 gallons of petrol this month and 10 kilos of rabbit feed!
People forget that good engineering isn't "the strongest bridge", but the cheapest bridge that just barely won't fail under conditions.
Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse. And no care for the impact on any stakeholder other than the one paying them.
I don't know any real (i.e. non-software) engineers, but I would love to ask them whether what you said is true. For years now, I've been convinced that we should've stuck with calling ourselves "software developers", rather than trying to crib the respectability of engineering without understanding what makes that discipline respectable.
Our toxic little industry would benefit a lot from looking at other fields, like medicine, and taking steps to become more responsible for the outcomes of our work.
What if we built things that are meant to last? Would the world be better for it?
We only recently figured out how to reproduce Roman concrete.
We’d have more but a lot were blown up during WWII.
You'd have a better bridge, at the expense of other things, like hospitals or roads. If people choose good-enough bridges, that shows there is something else they value more.
That can't be right? What about safety factors
If you build a bridge that is rated to carry 100k lbs of weight, and you build it to hold 100k lbs, you didn't build it to barely meet spec -- you under built it -- because overloading is a known condition that does happen to bridges.
Good engineering is building the strongest bridge within budget and time.
Your brain can still "just click" with agentic coding. But it will have to be at a higher level of abstraction. Perhaps the "click" feels different, and will take some adjusting to.
I was always into software architecture and I was dreaming to be a software architect but after completing university, the position was on the way out.
The economic incentives on the internet by and large favor the production of slop. A significant proportion of the text-based web was content-farmed even before LLMs - and with the advent of LLMs, you now have slop-results for almost every search query imaginable, including some incredibly niche topics. We've seen the same trend with video: even before gen AI, online video consumption devolved toward carefully-engineered, staged short-form bait (TikTok, YT Shorts, etc). In the same vein, the bulk of the world's email traffic is phishing and spam.
None of this removed the incentive to produce high-quality websites, authentic and in-depth videos, and so on. But in practice, it made such content rare and made it harder for high-quality products to thrive. So yeah, I'm pretty sure that good software will survive in the LLM era. But I'm also absolutely certain that most app stores will be overrun by slop, most games on Steam will be slop, etc.
The pattern was always: ship fast, fix/document later, but when "later" comes "don't touch what is working".
To date nothing changed yet, I bet it won't change even in the future.
Competition is essentially dead for that segment given there is always outward growth.
With that being said, AI enables smaller players to implement their visions with enough completeness to be viable. And with a hands off approach to code, the underlying technology mindshare does not matter as much.
… but lately, the rate at which some dev with an LLM can just churn out new bad code has just shot through the roof. I can still be struggling to pick apart the last piece of slop, trying to figure out "okay, if someone with a brain had written this, what would the inputs & outputs be?" and "what is it that production actually needs and relies on, and what causes problems, and how can we get the code from point A to point B without more outages"; but in the meantime, someone has spit out 8 more modules of the same "quality".
So sure, the basic tenets haven't changed, but these days I feel like I'm drowning in outages & bugs.
So yeah, good code might win among a small group of principled people, but the majority will not care. And more importantly, management won't care. And as long as management doesn't care, you have two choices: "embrace" slop, or risk staying jobless in a tough market.
Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for $200 a month with Claude.
Software, on the other hand, can be free. Even before LLMs I would argue the best code was found in FOSS projects.
Nobody is going to use sloppy buggy software if a handcrafted well engineered alternative exists, and is free.
In the case of software, the group of people who have principles might be the ones funding FOSS projects, and the software itself would then be enjoyed by all. This is more or less what's already happening today.
Dear sir, I think you may have already got the entire software market incorrect as it already stands.
So what you're saying is "someone" can make a living doing it.
What you're not saying is "you" can make a living doing it.
Might be fine if it's your HR software that isn't approving holiday requests. But if your checkout breaks, there's no human who can pick apart the mess, and you lose your entire income for a week, that might be the end of the business.
All the change and shuffle feels like an inevitable consequence of so much communication and competition between companies, and cultures and such. Gone are the days where a software product can remain stagnant. Someone else will build something that does a bit more, or if nothing else, does something new, and it will take people's attention away.
Everyone is stuck trying to keep up with trends, even if those trends don't make any sense.
Forgive my cynical take, but we're currently experiencing a market that doesn't appear to be rewarding anything specific in the long-term, as huge sums of money are traded on a minute-to-minute, day-to-day, and week-to-week basis. There's an explosion of uncertainty in today's markets and complete collapse of long-range planning echoing at many levels in society--particularly at the highest levels of governments. So I kind of don't want to hear about markets are going to reward.
But what exactly is "good code" (presumably the opposite of slop)?
I'd say that good code is terse, robust, suits its function, yet admits just the right amount of testability, performance, and customizability for the future. Good code anticipates change well. That means that if it has one job, it does that one job well, and not twenty others. If the job is going to stay the same, the code is going to stay the same. Good systems are made from well-factored pieces of code that have proper jobs and do their proper jobs without taking on jobs they shouldn't.
I for one think that AI code is going to reflect its training. If it's trained on just a random selection of everything out there, it's probably going to be pretty mediocre and full of bugs.
The slop problem isn't AI, it's people who can't tell the difference between good and bad output because they never developed the craft in the first place. AI just makes that gap more visible.
So probably the vast majority of people that program....
(Sure, there were good outsourcing shops, but you didn't tend to save too much with them, since they knew they were good and charged appropriately.)
"Slop" ai-generated code is the same tradeoff as cheap outsourcing shops. You move quicker and cheaper now, but there will come a day when code quality will dip low enough that it will be difficult enough to make new changes that a refocus on quality becomes not just worthwhile, but financially required as well.
(And you may argue that you're using ai-generated code, but are maintaining a high code quality, and so for you this day will never come and you might be right! But you're the "good outsourcing shop", and you're not "saving" nearly as much time or money as those just sloppin' it up these days, so you're not really the issue, I'd argue.)
I can promise you outsourcing of coding is still huge.
This said, companies have changed it up a bit, instead of hiring a outsourcing shop, they'll setup their own branch in LCOL countries.
India, Portugal, a few different countries in eastern Europe are all rather large software producing countries for US companies.
1. IME AI tends to produce good code "in the small." That is, within a function or a file, I've encountered very little sloppy code from AI. Design and architecture is (still) where it quickly tends to go off the rails and needs a heavy hand. However, the bulk of the actual code will tend to be higher quality.
2. Code is now very cheap. And more tests actually result in better results from AI. There is now very little excuse to avoid extensive refactoring to do things "the right way." Especially since there will be a strong incentive to have clean code, because as TFA indicates...
3. Complex, messy code will directly increase token costs. Not just in grokking the codebase, but in the tokens wasted on failed attempts rooted in over-complicated code. Finally, tech debt has a concrete $$$ amount. What can get measured can get fixed, and nothing is easier to measure (or convince execs about!) than $$$.
Right now tokens are extremely cheap because they're heavily subsidized, but when token costs inevitably start ramping up, slop will automatically become less economically viable.
Put simply LLMs perform better on better code.
---------
This doesn't necessarily follow. Yes, there might be economic pressure for AI to produce "good" code, but that doesn't necessarily mean efforts to make this so will succeed. LLMs might never become proficient at producing "good" code for the same reasons that LLMs perform poorly when trained on their own output. A heuristic prediction of what "good" code for a given solution looks like is likely always going to be less "good" than code produced by skilled and deliberate human design.
Just as there is a place for fast and dirty human code, there will be a place for slop code. Likely the same sort of place. However, we may still need humans to produce "good" code that AI can be trained on as well as for solutions that actually need to be "good". AI might not be able to do that for us anytime soon, no matter what the economic imperatives are.
Everything fundamental that makes good easier for humans to maintain also makes it easier for LLMs to maintain. Full stop.
And property testing is going to be an important way to validate.
A certain big PaaS I won't name here has had lots of clusterfucks in the last 3 months. The CEO is extremely bought into AI and "code not mattering anymore". He's also constantly talking about the meteoric growth because Claude and other AI providers are using railway as default suggestions.
The toll has come to collect and now a lot of real production users are looking at alternatives.
The reality is the market is rewarding slop and "velocity now". There will come a time where it will reward quality again.
I'm using AI for coding just like everybody else. More or less exclusively since a few months. It's sometimes frustrating to get things done the right way but mostly I get the job done. I've been coding since the nineties. So, I know how to do things right and what doing it wrong looks like. If I catch my AI coding tools doing it wrong, I tell it to fix it and then adjust skills and guard rails to prevent it going off the rails.
AI tools actually seem to self correct when used in a nice code base. If there are tests, they'll just write more tests without needing to be prompted. If there is documentation, that gets updated along with the code. When you start with a vibe coded mess it can escalate quickly unless you make it clean up the mess. Sometimes the tests it adds are a bit meh and you have to tell it off by "add some tests for the non happy path cases, make sure to cover all possible exceptions, etc.". You can actually ask for a critical code review and then tell it "fix all of that". Sometimes it's as simple as that.
And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model coming out and we are now in the phase of the biggest centralized reinforcement loop ever existed: everyone using ai for writing and giving it feedback.
We are, thanks to LLMs, able now to codify humans and while its not clear how fast this is, i do not believe anymore that my skills are unique.
A small hobby application cost me $11 over the weekend and took me 3h to 'build', while I would probably have needed 2-3 days for it.
And we are still limited by resources and normal human progress. Claude teams are still experimental, for instance. Things like gastown or orchestrator architectures/structures are not that established and consume quite a lot of tokens.
We have not even had time yet to build optimized models. Claude Code still understands A LOT of languages (human languages and programming languages).
I do not think anyone really cares about code quality. I do, but I'm a software engineer. Everyone around me doesn't. Business doesn't. Even fellow co-workers don't, or don't understand good code.
Even stupid things like the GTA 5 Online (or was it RDR2?) startup bug weren't found for ages (there was an algorithmic-complexity problem in loading a config file that made startup take ages, until someone outside Rockstar found it and Rockstar fixed it).
We also have plenty of code where it doesn't matter as long as it works: offline apps, scripts, research scripts, etc.
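The GTA loading bug was a hidden length scan inside a parsing loop, which turned a linear pass quadratic. A Python caricature of that shape (not the actual code, which was C-level `sscanf`/`strlen`):

```python
def parse_items_quadratic(blob: str) -> list[str]:
    # Each partition() copies the entire remaining input,
    # so the total work is O(n^2) in the blob size.
    items = []
    while blob:
        head, _, blob = blob.partition(",")
        items.append(head)
    return items

def parse_items_linear(blob: str) -> list[str]:
    # One pass, one split: O(n).
    return blob.split(",") if blob else []
```

Both are "correct", both pass on small inputs; only the quadratic one melts down on a multi-megabyte config file.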
Microslop is the future.
The slop we're seeing today comes primarily from the fact that LLMs are writing code with tools meant for human users.
When you're making (crafting) software, if the line count is going up for equivalent functionality, it means you're cooking up bullshit.
The whole premise of the software arts (engineering) is that you do MORE with LESS.
Engineering is not science, and neither is art. Creativity is needed; rules of thumb are to be followed.
The difference is that over the years, while tooling and process have dramatically improved, SDEs have not improved much; junior engineers still make the same mistakes. The assumption (not yet proven, but the whole bubble is based on it) is that models will continue to improve, eventually leaving behind human SDEs (or other domain people: lawyers, doctors, etc.). If this happens, these arguments I keep seeing on HN about AI slop will all be moot.
Assuming AI continues to improve, the cost and speed of software development will dramatically drop. I saw a comment yesterday that predicted that AI will just plateau and everyone will go back to vim and Makefiles (paraphrasing).
Maybe, I don't know, but all these people saying AI is slop, Ra Ra Humans is just wishful thinking. Let's admit it, we don't know how it will play out. There's people like Dario and Sam who naturally are cheerleading for AI, then there's the HN collective who hate every new release of MacOS and every AI model, just on principle! I understand the fear, anyone who's ever read Flora Thompson's Lark Rise to Candleford will see the parallels, things are changing, AI is the plough, the railway, the transistor...
I'm tired on the debate, my experience is that AI (Gemini for me) is awesome, we all have gaps in our knowledge/skills (but not Gemini), AI helps hardcore backend engineers throw together a Gradio demo in minutes to make their point, helps junior devs review their code before making a PR, helps Product put together presentations. I could go on and on, those that don't see value in AI are doing it wrong.
As Taylor Swift said "It's me, hi, I'm the problem, it's me" - take that to heart and learn to leverage the tools, stop whining please, it's embarrassing to the whole software industry.
Is that still the future or nah?