The "AI effect" on the world has many similarities to previous events and in many ways changes very little about how the world works.
> I'm terrified of the good enough to ship—and I'm terrified of nobody else caring.
For almost every product/service ever offered, it was possible to scale the "quality" of the offering while largely keeping the function or outcome static. In fact, lots of capitalistic activity is basically a search for the cheapest and fastest way to accomplish a minimum set of requirements. This leads folks (including me!!) to lament the quality of certain products/services.
For example, it's possible to make hiking boots that last a lot longer than others. But if the requirement is to have them last for just 20 miles, it's better to pay less for a pair that won't last as long.
Software is the same way. Most users just absolutely do not know about, care about, or worry about security, privacy, maintainability, robustness, or a host of other things. For some reason this is continually terrifying and shocking to many.
There is nothing surprising here; it's been this way for many years and will continue to be.
Obviously there are exceptions, but for the most part it's best to assume the above.
nitpick: most users don’t care about these things until something goes significantly wrong and it impacts them, e.g. a massive data breach or persistent global downtime.
then they get angry. very angry.
just because people don’t care about it now doesn’t mean they won’t care about it in the future.
edit — these are the hidden requirements.
> For example, it's possible to make hiking boots that last a lot longer than others. But if the requirement is to have them last for just 20 miles, it's better to pay less for a pair that won't last as long.
until requirements change, or the hidden requirements come out to play … most software engineers can probably recall multiple times when the requirements changed halfway through. hell, i’ve done it on solo projects.
now we’re stuck with boots that can only last 20 miles, but we need to go 35.
> then they get angry. very angry.
Yes, this has a lot of overlap with how humans differ from "Homo Economicus" [0].
Humans generally can't find out, don't know, don't care to know, don't have the time to research, or aren't expert enough to understand the ramifications of decisions perfectly (or even adequately, for some definition of adequate).
However, they do understand price!!! So you end up with cheap stuff that everyone chooses, because they don't understand how a more immediately expensive option could lower their future risk or save them money over the long run.
This, too, has been true for a long, long time. Humans are far more likely to choose the cheap option if they don't believe in or understand the expensive one.
Incidentally, this is somewhat rational given that marketing half-truths are rampant.
The short life boots are great for everyone (boot makers, suppliers) except the end user.
A slightly higher quality boot could reduce their expenditure (monetary and time) and collectively allow society to devote the time and resources saved to higher goals.
However the wants of the few outweigh the needs of the many.
Data breaches are so common they don't even register any more, and people share far more personal information now (willingly or not) than they used to. Remember when the common advice was "don't use your real name online"? Now every service demands your phone number to register, and those temporary email services (like 10minutemail) rarely work any more, in my experience. Downtime makes the news if it's bad enough, but Cloudflare, Microsoft and Amazon still control most of the internet. They fuck up badly all the time, and nothing ever happens. Windows 11 is literal adware, and Linux desktop usage is still a rounding error.
Remember that Tea "dating" app that leaked pretty much everything last year? As far as I can tell, it's still in business.
Many such cases.
That's probably why we're in a lemon economy. Figuring out the answers to these things is incredibly time-consuming, even if one does have expertise in a specific domain. Fatigue leads to the same result: oversimplification of assumptions.
There are examples of seemingly every combination: high/low cost, high/low durability, high/low reliability, high/low status symbol, and so on.
Cars are a great example:
* Reliable cars can also be cheap.
* High status symbol cars can be incredibly expensive but also unreliable.
* Expensive cars can be dangerous.
* etc.
Even the things that are "good enough" and cheap tend to come with massive hidden costs. For example, good looking clothing can be inexpensive enough for a person to wear everything once and throw it away, but behind the scenes there are child slaves, microplastics/PFAS contamination, and a textile waste crisis.
When you watch a video or use a service that requires significant effort and value to create, you inherently trust that the creators have invested diligence and care to protect their investment. Creators risk losing customers through bad reviews or, worse, being sued for damages.
In an age where it's reasonably straightforward to create something that appears to match the quality and effort of what was previously difficult to accomplish, it becomes much harder for users to distinguish high quality at all.
I think we'll go through a period where many users will get burned by poor services (lost data, security breaches, etc.) and will need to find new ways of verifying product and service credibility.
I suspect the market for simple consumer apps charging $5+ monthly for basic functions (like todo lists) will disappear, and possibly the same for low-to-moderate complexity enterprise apps (like Jira). This is probably better for consumers. Many of these apps and tech businesses can charge so much for fairly basic functionality because the barrier to building alternatives is too great. There was simply no option if you wanted a particular set of features. It's 'value-based pricing' that extracts benefits from consumers unable to negotiate the price.
People have forgotten that a lot of people were killed by horses. Cities had to deal with vast quantities of manure and horse corpses. Horses knew they were slaves and you always had to be careful around them. Horses are expensive and required daily maintenance.
I'm not so sure about those editorial standards. The internet has revealed that there's a lot of propaganda in the newspaper editorials.
Put another way, who here wants a car that costs more than their house? Or shoes that cost $2000?
It's the age-old parable of buying a pair of boots: the poor man keeps buying $20 boots that wear out in a year or two. The wealthy man looks on, perplexed, and states, "This is why they are poor; they don't understand investing in a quality pair of boots... For a measly $100 they could buy a pair that would last them 10+ years." But what is always overlooked is that the poor man doesn't have the spending flexibility to invest in better-quality purchases, because the money needs to go to other problems in his life.
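The arithmetic behind the parable is trivial to sketch (all numbers here are the hypothetical ones from the parable, not real price data):

```python
import math

# Hypothetical numbers from the parable: $20 boots lasting ~1 year
# vs. $100 boots lasting ~10 years, compared over the same horizon.

def cumulative_cost(unit_price, lifespan_years, horizon_years):
    """Total spent over the horizon if worn-out pairs are replaced immediately."""
    pairs_needed = math.ceil(horizon_years / lifespan_years)
    return unit_price * pairs_needed

cheap = cumulative_cost(20, 1, 10)      # ten $20 pairs over 10 years
durable = cumulative_cost(100, 10, 10)  # one $100 pair over 10 years
print(cheap, durable)  # → 200 100
```

The catch the comment identifies isn't the math, it's the constraint: the $100 up-front payment has to compete with everything else the first $20 is needed for.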
I would argue that this is one contributing factor, beyond companies simply chasing the lowest cost and quality, in why stuff keeps getting crappier.
This is what produced our high standard of living.
For example, Ford and the Model T. Before the Model T, only the rich could afford to buy a car. Ford was relentless with the T in finding ways to cut the manufacturing cost. And the result was America got wheels.
Sure, but the OP's concern is whether this chokes off innovation. Is there some better kind of hiking boot, longer-lasting and cheaper and maybe more comfortable, that we've never found because the shoemakers who'd be able to invent it are too busy optimizing Nike production lines?
But that question is impossible to answer and therefore can justify no recommended changes to the current state.
Yes. Which means the overall quality of things is dropping. It's nothing new or AI-specific indeed, but that doesn't make it a good thing.
Excellent point which leads to one related but less commonly mentioned:
If you build a product that lasts 25 years and that's what people want, you need to price it in such a way that if the entire market buys your product, you will run out of customers (for roughly 25 years or until new customers are born). Otherwise, you have a big rush of revenue early and then it drops off a cliff.
(I'm oversimplifying here but this is partly why there is a trend to make things more disposable or have a limited lifespan).
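The revenue cliff described above can be made visible with a toy model (all numbers are hypothetical: a fixed pool of customers, a constant adoption rate, and a product that only generates a replacement sale once it wears out):

```python
# Toy model of durable-goods revenue (all parameters hypothetical).
# A fixed market of customers; each year some fraction of the remaining
# pool buys for the first time, and units sold `lifespan` years ago
# come back as replacement sales.

def annual_sales(market_size, adoption_rate, lifespan, years):
    """Units sold per year: new adopters plus replacements of worn-out units."""
    remaining = market_size   # customers who have never bought
    sold_in_year = []         # sales history, used to schedule replacements
    for year in range(years):
        new = int(remaining * adoption_rate)
        remaining -= new
        replacements = sold_in_year[year - lifespan] if year >= lifespan else 0
        sold_in_year.append(new + replacements)
    return sold_in_year

# 25-year product: sales spike early, then fall off a cliff with no
# replacement demand inside the 10-year window.
durable = annual_sales(1000, 0.5, 25, 10)
# 2-year product: replacement demand keeps revenue flowing indefinitely.
disposable = annual_sales(1000, 0.5, 2, 10)
```

Under these assumptions the durable product's yearly sales decay toward zero once the market saturates, while the disposable product settles into a steady replacement cycle, which is exactly the incentive toward limited lifespans the comment describes.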
That's rewriting history, especially in terms of software and hardware.
Appliances like microwaves were revolutionary for their time. The only problem: they lasted forever (>20 years). No one needed to buy one again = no business. Newer ones were deliberately made not to last as long, and they're not necessarily cheaper in either manufacturing cost or retail price.
> Software is the same way. Most users just absolutely do not know about, care about, or worry about security, privacy, maintainability, robustness, or a host of other things.
They don't want to know. They assume it is there. Most people have an inherent trust in, for example, big companies.
> In fact, lots of capitalistic activity is basically a search for the cheapest and fastest way to accomplish a minimum set of requirements.
This is a rewrite of history too. In search? No. More like self-created. Was Uber, for example, searching for the cheapest way? Well, yes, by throwing enormous amounts of money around to secure a monopoly. We're currently throwing trillions at AI to find the "cheapest" way. Just like with the dot-com era, we might not recover even 1% of what was wasted. Are you sure it's the cheapest?
I'm curious if the inflation-adjusted prices of those long-lasting early microwaves were less than the cost of three current microwaves that last 7 years each. Also, this isn't an apples-to-apples comparison, because the old ones gradually lost performance over time and took longer to heat food as they aged.
This is a common myth that was debunked a while back. Essentially people get fooled by survivorship bias: they only see the few old appliances that somehow survived, and that leads them to conclude that things were higher quality back in the day.
That is a problem that needs to be fixed in those users, not something we should take advantage of as an excuse for releasing shoddy work.
> For some reason this is continually terrifying and shocking to many.
For many reasons.
It means that a good product can be outcompeted by a substandard one because the substandard one releases faster, despite the fact that it will cause problems later. So good products are going to become much rarer at the same time as slop becomes much more abundant.
It means that those of us trying to produce good output will be squeezed more and more to the point where we can't do that without burning out.
It means that we can trust any given product or service even less than we were able to in the past.
It means that because we are all on the same network, any flaw could potentially affect us all, not just the people who don't care.
The people who don't care, when caring would mean releasing at a lower cadence, are often the same people who will cry loudest and longest about how much everyone else should have cared when a serious bug bites their face off.
and so on… … …
Are you suggesting we should just sit back and let the entire software industry go the way of AAA games, or worse?
> That is a problem that needs to be fixed in those users, not something we should take advantage of as an excuse for releasing shoddy work.
Ok. Tech folks have been trying to educate users and get them to make better decisions (in the viewpoint of those tech folks) for a long time. And the current state points to how successful that's been: not very. This isn't exclusive to software... many industries have consumers who make unsound long-term choices (in the viewpoint of experts).
Taking advantage? Besides cases where folks are actually breaking the law and committing fraud, this isn't some kind of illicit activity, it's just building what the users choose to buy/use.
> It means ... It means ... It means ... It means ...
Perhaps, perhaps, perhaps, and perhaps.
> Are you suggesting we should just sit back and let the entire software industry go the way of AAA games, or worse?
I'm not sure what "the way of AAA games" means. I'm just laying out how I view the last 30 years of the software industry.
I don't see any reason to expect significant change.
A society where a large percentage have no income is unsustainable even in the short term, and ultimately liable to turn to violence. I can see it ending badly. Trouble is, who in power is willing to stop it?
Absolutely no one.
https://www.penguinrandomhouse.ca/books/719111/survival-of-t...
Sure some things deteriorate, but many things improve. Talking about a net decline (or net gain) is very difficult.
Every age has its own set of problems that need to be solved.
Or, try organizing any kind of movement that those with power don't like. It does not even have to be violent! Here in Germany, as soon as the previous government with the Green Party came to power, a huge never-ending campaign against it started. Easy, after all, since the vast majority of the important media is owned by very few, just like in the US. Funnily enough, after that government inevitably failed, it turned out the CDU broke many if not most of the promises it had made, and in other areas does exactly what it had heavily criticized.
The point is, surveillance, "soft" punishments, and media control and reach are on a whole new level. Trump wanted TikTok for a reason, and Musk wanted X not for the money that company could make.
The more tech we have, and it's conveniently concentrated too, the worse it can get if you don't want to play that game.
On top of that: debt, a system of law heavily skewed toward those with money (simply because of its complexity and the cost of access), and no more competition for minds from a bloc of socialist countries, so no clear alternative apart from obviously bad ideas most people won't vote for. This "democratic" system can go very far toward being very controlling and restrictive for many.
We can see, for example in Iran, or a few decades ago in China, or in North Korea since it was founded, what happens when people protest, and how nothing changes. Now we have billionaires who would love to have similar powers, who don't want to be "held back" by laws and regulations.
"This morning at 8:00 am Pacific, there were five simultaneous assassination attempts on tech executives across the Bay Area. The victims, all tech executives known to us, have suffered serious injuries. It is reported that Securibot 5000s were involved. Securibot Inc. declined to comment. This is a developing story."
Humanoid robots became possible, and so people are racing to be first to market, assuming it might be a giant market (it's potentially cheap labor, so of course it might be huge; the microcomputer was).
"Robotic security. [...] The armed mass as a model for the revolutionary citizenry declines into senselessness, replaced by drones. Asabiyyah ceases entirely to matter, however much it remains a focus for romantic attachment. Industrialization closes the loop, and protects itself." [0]
The important part here is that "[i]ndustrialization [...] protects itself". This is not about protecting humans ultimately. Humans are not autonomous, but ultimately functions of (autonomous) capital. Mark Fisher put it like this (summarizing Land's philosophy):
"Capital will not be ultimately unmasked as exploited labour power; rather, humans are the meat puppet of Capital, their identities and self-understandings are simulations that can and will ultimately be sloughed off." [1]
Land's philosophy is quite useful for providing a non-anthropocentric perspective on various processes.
[0] Nick Land (2016). The NRx Moment in Xenosystems Blog. Retrieved from github.com/cyborg-nomade/reignition
[1] Mark Fisher (2012). Terminator vs Avatar in #Accelerate: The Accelerationist Reader, Urbanomic, p. 342.
As we saw 100 years ago, violent authoritarians will gladly use technology to make themselves look like the populist's choice, all the while planning to neglect the very things they promised while getting elected.
What's stopping them from good actions is not the fear of "doing something messy and imperfect". It's the lack of financial and power-grabbing motivation.
The terminology may have changed a bit, but they still employ people to do stuff for them
One big difference is that while professional-class affluent people will hire cleaners or gardeners or nannies for a certain number of hours, they cannot (at least in rich countries) hire them as full-time live-in employees.
There are some things that are increasing. For example, employing full-time tutors to teach their kids, as rich people often used to do (say, 100 years ago). So their kids get one-to-one attention while other people's kids are in classes with many kids, and the poor have their kids in classes with a large number of kids. Interestingly, the government here in the UK is increasingly hostile to ordinary people educating their kids outside school, which is the nearest we can get to what the rich do (again, hiring tutors by the hour, and self-supply within the household).
They also hire people to manage their wealth. I do not know enough about the history to be sure, but this also seems to be a return to historical norms after an egalitarian anomaly. A lot of wealth is looked after by full-time employees of "family offices", and the impression I get from people in investment management and high-end property is that this has increased a lot in the last few decades. Incidentally, one of the questions around Epstein is why so many rich people let him take over some of the work you would expect their family offices to handle.
Brink? This has been the reality for decades now.
>A society where a large percent have no income is unsustainable in the short term, and ultimately liable to turn to violence. I can see it ending badly. Trouble is, who in power is willing to stop it?
Nobody. They will try to channel it.
I think all signals are pretty inevitably pointing to three potential outcomes (in order of likelihood): WW3, a Soviet-style collapse of the West, or a Soviet-style collapse of the Sino-Russian bloc.
If the promise of AI is real, I think it makes WW3 a much more likely outcome: a "freed up" disaffected workforce pining for meaning and a revolutionized AI-drone-first battlefield both tip the scales in favor of world war.
Not to the degree you might originally think. Most of the wealth being captured today is hypothetical wealth (i.e. promises) to be delivered in a hypothetical future. Except we know that future will never come, as the masses, as you point out, have almost nothing, and increasingly nothing, to offer to make good on those promises. In other words, it is just a piece of paper with IOU written on it, not real wealth.
What that hypothetical wealth does provide, and what makes it so appealing, however, is social standing. People are willing to listen to those who have the most hypothetical wealth. You soon hear what they have to say. When the hobo on the street corner says something... Wait, there is a hobo on the street corner?
A small group of people having the ear of the people is human nature. In ancient times, communication challenges left that small group of people to be limited to a small community (e.g. a tribe, with the people listening to the tribe leader). Now that we can communicate across the world with ease, a few people rising up to capture the attention of the world is the natural outcome. That was, after all, the whole point — to move us away from "small tribes" towards a "global tribe".
Hypothetical wealth is the attention-grabbing attribute du jour, but if you remove it, it will just become something else: who is most physically attractive, who tells the funniest jokes, whatever. The dynamics of "Dunbar's number" don't go away.
> Trouble is, who in power is willing to stop it?
China has tried with its Great Wall (meaning the internet one, although perhaps you can find relevance in the physical one too), but is it successful? Maybe to some degree, but I expect many people in China still listen to what Elon Musk has to say, all while completely ignoring the millions of Chinese people immediately outside of their door. It isn't really something a power can do (ignoring that there even being a power contradicts the whole thing). The people themselves could in theory, but they would have to overcome their natural urges to do so.
People always say this with zero evidence. What are some real examples of real people losing their jobs today because of LLMs? Apart from copywriters (i.e. the original human slop creators) having to rebrand as copyeditors because the first draft of their work now comes from a language model.
I wholeheartedly recommend you buying a new keyboard, by the way.
If you substitute "artificial intelligence" with offshored labor ("actually indo-asians" meme moniker) you have some parallels: cheap spaghetti code that "mostly works", just written by farms of humans instead of farms of GPUs. The result is largely the same. The primary difference is that we've now subsidized (through massive, unsustainable private investment) the cost of "offshoring" to basically zero. Obviously that has its own set of problems, but the piper will need to be paid eventually...
I've worked with, trained and lived alongside workers overseas for months at a time and can say that there's no meaningful difference across racial divides, save for some variation on cultural norms. I would have assumed a more charitable interpretation of my words, but we live in uncharitable times. I'll do better going forward.
Cheers
I didn't mean to imply that anyone of asian descent is inherently generating "spaghetti code". If that's how it read, I apologize, that was not my intention.
To further clarify: I've dealt with a number of these offshoring agencies (the really inexpensive ones specifically), and their output is very similar to what AI produces today. They have extreme turnover rates, and team assignments change at random, so lost context and variable output are common. They do operate cube farms just like US workers, though I'm not sure why that's pertinent to call out.
I agree, however, that I'm not saying anything sophisticated or complex, merely stating an observation.
I get the feeling that we're not even close to paying for the _actual_ costs of our frontier GenAI models at current usage levels, with or without subscriptions in the picture. AFAICT we're all using a highly subsidized product, made possible by private capital on the promise of future returns that may or may not materialize.
Outside of a few vertically integrated companies (Google with their custom TPUs, possibly AWS with theirs), LLM companies like OpenAI have to rely on massive data centers via MSFT, Oracle and Nvidia deals to train their frontier models to stay competitive. There's a lot to pay for when wielding 20 gigawatts of compute on other folks' machines. For OpenAI we're talking 4+ trillion USD so far, with no signs of slowing. That's a hell of a lot of subscriptions to make up for that spend, and they have a long climb ahead of them to get there. Maybe their "killer app" will be their new "erotica" models, who knows (porn has led several tech initiatives in the past). But I wouldn't bet money on it working out for them.
It's estimated that OpenAI spends 3 USD for every 1 it makes. Obviously that will have to change to make them an actually viable company in the long term. In the end, I see the most likely scenario as being left with a few large players like Google. They're the ones with any hope of "winning" the GenAI race, as they're in the best position to not rely on someone else's shovels.
All that said, offshoring started out with promises similar to GenAI's, and some things panned out with offshoring and others didn't. Only time will tell what shakes out of all this mess. I just hope we get a sane readjustment of expectations for GenAI before our next economic collapse (the massive GenAI investment has helped prop up our economy to an extent, at least in the US).
In short, a business exists to turn a profit, and OpenAI has yet to do so. Perhaps they eventually will and become the new "offshore" solution going forward as you imply, but just like actually moving your technical talent overseas, it comes with a significant number of tradeoffs to consider (tradeoffs already outlined in the parent and other posts in this thread).
Soon, we'll all just be meatpuppets, guided by AI to suit AI.
This isn’t news really. Content farms already existed. Amusing Ourselves to Death was written in 1985. Critiques of the culture exist way before that. But the reality of seeing the end game of such a culture laid bare in the waste of the data center buildout is shocking and repulsive.
Quality. Matters.
It always has, and it always will. If you're telling yourself otherwise, you are part of a doomed way of thinking and will eventually be outcompeted by those who understand the implications of thinking further ahead. [ETA: Unfortunately, 'eventually' in this context could be an impossibly long time, or never, because people are irrational animals who too often prioritize our current feelings over everything else.]
I would argue most never did.
If you spend time in the startup world you quickly realize how little the average developer cares about craftsmanship or quality.
The startup world is full of mantras like "move fast and break things" or "if you are not embarrassed by your MVP, it's not an MVP."
This is how our world works, and until it hits the proverbial wall, this is how it will continue to work, because it's too big to be detoured or course-corrected.
In a 1995 interview with Inc. magazine, author Kurt Vonnegut was asked what he thought about living in an increasingly digitized world. His response is so perfect that it’s worth reprinting in full:
I work at home, and if I wanted to, I could have a computer right by my bed, and I’d never have to leave it. But I use a typewriter, and afterwards I mark up the pages with a pencil. Then I call up this woman named Carol out in Woodstock and say, “Are you still doing typing?” Sure she is, and her husband is trying to track bluebirds out there and not having much luck, and so we chitchat back and forth, and I say, “OK, I’ll send you the pages.”
Then I’m going down the steps, and my wife calls up, “Where are you going?” I say, “Well, I’m going to go buy an envelope.” And she says, “You’re not a poor man. Why don’t you buy a thousand envelopes? They’ll deliver them, and you can put them in a closet.” And I say, “Hush.” So I go down the steps here, and I go out to this newsstand across the street where they sell magazines and lottery tickets and stationery. I have to get in line because there are people buying candy and all that sort of thing, and I talk to them. The woman behind the counter has a jewel between her eyes, and when it’s my turn, I ask her if there have been any big winners lately. I get my envelope and seal it up and go to the postal convenience center down the block at the corner of 47th Street and 2nd Avenue, where I’m secretly in love with the woman behind the counter. I keep absolutely poker-faced; I never let her know how I feel about her. One time I had my pocket picked in there and got to meet a cop and tell him about it. Anyway, I address the envelope to Carol in Woodstock. I stamp the envelope and mail it in a mailbox in front of the post office, and I go home. And I’ve had a hell of a good time. And I tell you, we are here on Earth to fart around, and don’t let anybody tell you any different.
We’re dancing animals. How beautiful it is to get up and go do something.
If you are one of those devs who heavily uses LLMs at work and you are in a position of relative authority, either as senior+ or something else, and you hand off your LLM code to others to review or “build off of”… we hate you. We don’t want to be your voluntold slop jannies. LLM overuse and vibe coding are taking a fairly enjoyable job and making it insufferable. Now I have to sift through 3-10x more lines of code, written in a non-human thought process with terrible naming schemes, and try to find the bug… just to realize that the code isn’t even solving the correct or underlying problem. Every time I have to interact with a co-worker’s LLM code, my tasks take weeks longer than they would have. This includes the ones who claim to be experts in prompting and harnessing and whatever skibidi buzzword is out this week.
You are not saving time; you only think you are because you don’t look closely at the output and send it off to your lowly janitors to deal with. And the people who claim to be running 20 or 30 AI tasks at once: what are you even building? If you aren’t literally shipping the next Amazon, that’s just embarrassing.
I cannot wait for people to wake from this bizarre mass psychosis. I already see co-workers’ context windows getting smaller than free-version ChatGPT’s in an incognito window.
As much as we can fault the technology and the hype around it, this is as much a people problem as anything else. Before AI, this same problem happened with architecture/PoC-to-implementation hand-offs.
AI is a new tool that a lot of us are still figuring out, but that doesn't excuse poor communication.
To this day, even though there exist IDEs with proper auto-complete suggestions (and editors with LSP support), a lot of people still prefer doing it the old way (vim/emacs/nano), and none of them get fired for that.
Z3 exists, but we still like to solve problems by hand. High-level languages exist, but C/C++ code is still written every day; even asm is still used.
On the brighter side of the issue, we now have a ton of legacy projects written in obscure languages (COBOL, FORTRAN) that only a few dozen people can maintain effectively, and those people are mostly at retirement age now. GenAI can solve that.
Sounds like the cost of everything goes down. Instead of subscription apps, we have free F-Droid apps. Instead of only the 0.1% commissioning art, all of humanity gets to commission art.
And when we do pay for things, instead of an app doing 1 feature well, we get apps doing 10 features well, with integration. (I am living this: instead of shipping software with 1 core feature, I can ship 1 core feature plus 6 different options for free, no change order needed.)
I feel like long before LLMs, people already didn't care about this.
If anything software quality has been decreasing significantly, even at the "highest level" (see Windows, macOS, etc). Are LLMs going to make it worse? I'm skeptical, because they might actually accelerate shipping bug fixes that (pre-LLMs) would have required more time and management buy-in, only to be met with "yeah don’t bother, look at the usage stats, nobody cares".
When software operators tolerate bugs they’re signaling that they’re willing to forego the fix in exchange for other parts of the feature that work and that they need.
The idea that consumers will somehow not need the features that they rely on anymore is completely wrong.
That leaves the tolerable bugs, but those were always part of the negotiation: coding agents don’t change that one bit. Perhaps all they do is allow more competitors to peel away those minority groups of users who are blocked by certain unaddressed bugs. Or maybe those bugs get fixed because it’s cheaper to do so.
"Terrified" is a strong word for the death of any craft. And as long as there are thousands that love the craft, then it will not have died.
Mourn it? Overall, people seem to hate tech workers in general (of all kinds). The death of the craft will not be mourned as a tragedy; it's already being celebrated as a triumph (whether it's true or not doesn't matter).
I must admit there's a part of me that wants it to die. I want people to remember what was good about it in retrospect, then realize there is no chance of ever getting it back. Permanent losses are important lessons.
As lots of people seem to always prefer the cheaper option, we now have single-use plastic ultra-fast fashion, plastic stuff that'll break in the short term, brittle plywood furniture, cheap ultra-processed food, etc.
Classic software development always felt like a tailor-made job to me and of course it's slow and expensive but if it's done by professionals it can give excellent results. Now if you can get crappy but cheap and good enough results of course it'll be the preferred option for mass production.
If only it was plywood, at least it'd be solid and sturdy. These days it's particleboard, which is much worse than plywood. Similar concept, but now made out of sawdust and glue instead of woodchips and glue that are alternately laid down in different orientations layer by layer for increased strength.
Particleboard chips much easier, breaks down much faster with moisture, and can't hold screws in. But it's very cheap, can be made very smooth, and is light.
Agree with the general sentiment though.
In my experience coding agents are actually better at doing the final polish and plugging in gaps that a developer under time pressure to ship would skip.
My favorite pre-LLM thing in this area is Flighty. It's a flight tracking app that takes available data and presents it in the best possible way. Another one is that EU border visa residency app that came through here a couple of months ago.
Standards for interchange formats have now become paramount.
API access is another thing much hinges on.
It's being compared to a slop machine, with billionaires claiming that it's better than you in all ways.
It's having integrity in your work, while the LLM slop machines can lie and go "You're actually right" (and then tell more lies).
It all comes down to LLMs serving to "fix" the trillion-dollar problem: people's wages. Especially those of engineers, developers, medical professionals, and more.
AI produces code that technically runs but lacks the thoughtfulness that makes software maintainable or elegant. The "90% solution" ships because economic pressure rewards speed over quality.
What haunts me: compilers don't make design decisions. IDEs don't choose architecture. AI does both, and most users accept those choices uncritically. We're already seeing juniors who've never debugged without a copilot.
The author's real question: what if most people genuinely don't care about the last 10%? Not from laziness, but because "good enough" is cheaper and we're all exhausted.
Dismissing this as "just another moral panic" feels too easy. The handcraft isn't dying because AI is too good. It's dying because mediocrity is profitable.
80% as good may be reframed as 100% OK for 80% of the people. It's when you're in the minority that cares about or needs that last 20% that it's a problem, because the 80% were subsidizing your needs by buying more than they needed.
This just isn't true. First, cheap tools have always been around. I have a few that I've inherited from my grandfather and great-grandfather. They're junk and I keep them specifically to remind myself that consumer-oriented trash versions of better quality tools have always existed.
Second, Harbor Freight is the only consumer-oriented tool retailer that seems to be consistently improving their product lines. Craftsman, which was the benchmark for quality, consumer-oriented hand tools, dropped off a cliff in terms of quality around the mid- to late-2000s.
If you can afford professional-grade tools (Snap-On, Mac, Wera, Knipex, etc.) great. For the rest of us, Harbor Freight is the only retailer looking out for us. Their American- and Taiwanese-made tools are excellent. Their Chinese-made tools are good. Their Indian-made tools will get the job done, but it won't be pleasant. At least they give the consumer a range of options, unlike Snap-On, which gives you a payment plan.
This is happening in other areas as well. The Chinese mini excavators and mini skid steers are changing what smaller landscape companies can do. They are not as good as a Kubota but they are 1/2 the price and 80% as good.
I installed some drywall a few years ago. I plan to install a room of drywall exactly never again. Not worth it for me to buy the best drywall tools.
But I have installed multiple wood floors, replacing old carpet, and would do so again if needed. I’d rather get higher quality tools there so I can keep them and reuse them for years.
Based on the Adobe stock price the market thinks AI slop software will be good enough for about 20% of Adobe users (or Adobe will need to make its software 20% cheaper, or most likely somewhere between).
Interestingly, Workday, which is possibly slightly simpler software, more easily replicable using coding agents, is about the same (down 26%).
Agents don’t care about any of Workday’s value-adds: Customizable workflows, “intuitive” experiences, a decent mobile app. Agents are happy to write SQL against a few boring databases.
Software is just a proxy for the thing that we want which is data. The same way an electric drill is a proxy to a hole. Since it's impossible to sell holes there's a market for selling electric drills to make holes.
A lot of economic activity is based on these proxies. Same in the software digital world. Even though it's data that we're after, many successful software businesses have been started to sell the tools, i.e. software products for people to make their digital "holes".
Now imagine if you could just suddenly 3D print your electric drill. Or your frying pan. Or your garden shears. What would happen to the economies based on selling these tools?
Once you can prompt your way to any digital creation what happens to the economies based on making the digital tools?
It's not there yet, but if/when it gets there, it's going to be a complete economic restructuring that will affect many. Careers will be wiped out, livelihoods will be lost.
Only big creative companies like Disney can play the game of making licensing agreements. And they are ok with it because it gives them an edge over smaller, less organized creators without a legal department.
https://thewaltdisneycompany.com/news/disney-openai-sora-agr...
The job is going to be much less fun, yes, but I won't have to learn from scratch and compete with young people in a different area (and which I will enjoy less, most likely). So, if anything slop gives me hope.
The slop problem isn’t just model quality. It’s incentives and decision making at inference time. That’s why I’m working on an open source tool for governance and validation during inference, rather than trying to solve everything in pre training.
Better systems can produce better outcomes, even with the same models.
What are you building?
It runs at inference time rather than training time and is model agnostic. The goal is to make disagreement explicit and costly instead of implicit and ignored, especially in high stakes or autonomous workflows.
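To make the idea concrete, here is a minimal sketch of what a model-agnostic inference-time gate could look like. Everything here is hypothetical illustration, not the actual tool: the `gated_generate` wrapper, the validator names, and the policies are all made up to show the pattern of collecting explicit objections instead of silently passing output through.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# A validator inspects a model output and returns an objection, or None.
Validator = Callable[[str], Optional[str]]

@dataclass
class GateResult:
    output: str
    objections: List[str]

    @property
    def approved(self) -> bool:
        # Approval is only granted when no validator objected.
        return not self.objections

def gated_generate(generate: Callable[[str], str],
                   validators: List[Validator],
                   prompt: str) -> GateResult:
    """Run any model's generate function, then apply every validator.

    Disagreement is made explicit: all objections are collected and
    returned alongside the output rather than being dropped, so a
    high-stakes workflow can refuse to proceed on an unapproved result.
    """
    output = generate(prompt)
    objections = [msg for v in validators if (msg := v(output)) is not None]
    return GateResult(output=output, objections=objections)

# Example policies (hypothetical):
def no_placeholder_code(out: str) -> Optional[str]:
    return "contains TODO placeholder" if "TODO" in out else None

def max_length(limit: int) -> Validator:
    return lambda out: f"exceeds {limit} chars" if len(out) > limit else None

# Model-agnostic: any callable works. A stub stands in for an LLM call here.
stub_model = lambda p: "def f():\n    # TODO: implement\n    pass"
result = gated_generate(stub_model, [no_placeholder_code, max_length(500)], "write f")
print(result.approved)    # False: the placeholder validator objected
print(result.objections)
```

The key design point is that the gate sits entirely outside the model: it never touches weights or training, which is what makes it applicable to any model at inference time.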
A couple of years ago, I worked for an agency as a dev. I had a chat with one of the sales people, and he said clients asked him why custom apps were so expensive, when the hardware had gotten relatively cheap. He had a much harder time selling mobile apps.
Possibly, this will bring a new era of decent macOS desktop and mobile apps, not another web app that I have to run in my browser and have no control over.
There has been no shortage of mobile apps, Apple frequently boasts that there are over 2 million of them in the App Store.
I have little doubt there will be more, whether any of the extra will be decent remains to be seen.
It's the societal-level impact of recent advances that I'd call "terrifying". There is a non-zero chance we end up with a "useless" class that can't compete against AI & machines, like at all, on any metric. And there doesn't seem to be much of a game plan for dealing with that without the social fabric tearing.
It's just that many powerful people have a vested interest in keeping the rest of us poor, miserable, and desperate, and so do everything they can to fight the idea that anything can ever be done to improve the lot of the poor without destroying the economy. Despite ample empirical evidence to the contrary.
Never mind that UBI has never actually existed, it probably never will exist, and it's very, very likely that it won't even work.
People need to face the possibility that we will destroy people's way of life the way we're headed, and to not just wave their hands and pretend that UBI will solve everything.
(Edited to tone back the certainty in the language: I'm not actually sure whether AI will be a net positive or negative on most people's lives, but I just think it's dishonest to say "it's ok, UBI will save them.")
I'd rather we democratize ownership [1]. Instead of taxing the owning class and being paid UBI peanuts, how about becoming the owning class and reaping the rewards directly?
Whereas a Japanese business would rather just not ship in such a case. Look at Nintendo, such as the 3D Mario games. Those things are polished to an insane degree that no American studio would bother with.
Apple is exceptional in many ways and this is one of them. Microsoft, with “no taste”, is the standard American fare.
They allow me to do work I could never have done before.
But there’s no chance at all of an LLM one shotting anything that I aim to build.
Every single step in the process is an intensely human grind trying to understand the LLM and coax it to make the thing I have in mind.
The people who are panicking aren’t using this stuff in depth. If they were, then they would have no anxiety at all.
If only the LLM was smart enough to write the software. I wish it could. It can’t, nor even close.
As for web browsers built in a few hours: no. No LLM is coming anywhere near building a web browser in a few hours. Unless you're talking about some super simple, super minimal toy with some of the surface appearance of a web browser.
I just enjoy writing my own software. If I have a tool that will help me to lubricate the tight bits, I’ll use it.
Occasionally of course it's way off, in which case I have to tell it to stfu ("snooze").
Also, it's great at presenting someone else's knowledge, even though it doesn't actually know facts, just what token should come after a sequence of others. The other day I just pasted an error message from a system that I wasn't familiar with, and it explained in detail what the problem was and how to solve it. Brilliant, just what I wanted.
This is the browser engine I was alluding to in the post: https://github.com/wilsonzlin/fastrender
https://arxiv.org/abs/2510.15061
Our definition of slop (repetitive characteristic language from LLMs) is the original one as articulated by the LLM creative writing community circa 2022-2023. Folks trying to redefine it today to mean "lazy LLM outputs I don't like" should have chosen a different word.
The definitions you're operating under are mentioned thus:
> characteristic repetitive phraseology, termed “slop,” which degrades output quality and makes AI-generated text immediately recognizable. (abstract)
> ... some patterns occur over 1000× more frequently in LLM text than in human writing, leading to the perception of repetition and over-use – i.e. "slop". (introduction)
And that's ... it, I think. No further effort is visible towards a definition of the term, nor do the background citations propose one that I could see (I'll admit to skimming them, though I did read most of your paper--if I missed something, let me know).
That might be suitable as an operating definition of "slop" to explain the techniques in your paper, but neither your paper nor any of your citations defend it as the common definition of an established term. Your paper's not making an incorrect claim per se, rather, it's taking your definition of "slop" for granted without evidence.
That doesn't bode well for the rigor of the rest of the paper.
Like, look: I get that this is an extremely fraught and important/popular area of research, and that your approach has "antislop" in the name. That's all great; I hope your approach is beneficial--truly. But you aren't claiming a definition of slop in your paper; you're just assuming one. Then you're coming here and asserting a definition citing "the LLM creative writing community circa 2022-2023" and asserting redefinition-after-the-fact, both of which are extraordinary claims that require evidence.
Again, not only do I think that mis-definition is untrue, I also think that you're not actually defining "slop" (the irony of my emphasizing that in a not-just-x-but-y sentence is not lost on me).
I don't know which of the authors you are, but Ravid, at least, should know better: this is not how you establish terminology in academic writing, nor how you defend it.
A computer is a person employed to do arithmetic.
It's often lamented that the World Wide Web used to be controlled by indie makers, but now belongs to a handful of megacorp websites and ad networks pushing addictive content. But, the indie maker era was just a temporary market inefficiency, from before businesses fully knew how to harness the technology.
I think software development has gone through a similar change. At one point software companies cared about software quality, but this too was just an idealist, engineer-driven market inefficiency. Eventually business leaders realized they can make just as much money (but make it faster) by shoveling out rushed, bloated, garbage software, since even though poor-quality software aggravates people, it doesn't aggravate enough for the average person to switch vendors over it. (Case in point - I'm regularly astounded at how buggy the YouTube app is on Android of all platforms. I have to force-kill it semi-regularly to get it working right. But am I gonna stop watching YouTube because of this? Admittedly, no, probably not.)
Seems an unlikely problem. It'll get better, which may cause its own problems.