Yes.
For the lifetime of almost everyone alive now, reading, thinking, and writing have been valued skills which moved one up in society's hierarchy. This is a historical anomaly. Prior to 1800 or so, those skills were not all that useful to the average farmer. There were more smart people than jobs for them. Gradually, more jobs for smart people were developed, but not until WWII did the demand start to exceed the supply. Hence the frantic technical training efforts of WWII and the following college boom. This was the golden age of upward mobility.
It's hard to imagine this today. Read novels from the 18th century to get a feel for it. See who's winning and who's struggling, who rises and who falls, and why. Jane Austen's novels are a good start.
The nerds didn't take over until very late in the 20th century. There were very few rich nerds until then. Computing was once a very tiny world. You could not get rich working for IBM. The ones who left and got rich were in sales.
So what was valued? Physical robustness. Strength, perhaps brutality. Competence in physical tasks. Honesty. Parentage. Birth order (see primogeniture). Those matter in pre-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values.
That may be where we go once AI does the thinking. That's where we go when smarts are not a scarce resource.
This is really bleak to me. We can do better than primogeniture, and of course the gender discrimination that goes along with it. You might as well write that subjugation of women is a "core value", simply because it has been for so many time periods.
> Physical robustness. Strength, perhaps brutality.
John Henry is not going to beat the steam shovel any time soon.
> For the lifetime of almost everyone alive now, reading, thinking, and writing have been valued skills which moved one up in society's hierarchy. This is a historical anomaly.
It's not an anomaly; rather, it's the other way round. These used to be highly specialized skills that carried significant status, and got democratized by mass education in the 20th century.
We're not prisoners of history. We don't have to go back to being serfs for the few people who own all the land, oil, food, energy, data centers, and operating systems. I hope.
In fact, there are few things less discriminatory than a random birth order. You may as well be assigned a random number at birth, and the lower your number, the more you're paid. In such a system, there's nothing to discriminate against; the ordering is absolute and immutable, and everyone is treated equally.
I agree that it's a bleak idea, but Animats wasn't talking about subjugating women.
Unfortunately, that is the current stage of humanity. We all currently live in a global subscription model for food, housing, safety, etc. No doubt that we will move beyond it eventually, but the current organization of society is kept in place by the owner class which benefits from the current arrangement.
One of the steps for moving beyond it is educating the modern day serfs (our peers) about reality as it is and alternative visions of a future where we are no longer selling our labor to the owner class. It will take generations.
The algorithms and bots that curate/generate content directed by accelerationists definitely want people to think that. There is a whole system in place now that can shape future outcomes just by convincing everyone that they have no power when the opposite is true. The parent is probably a bot, or has been influenced by one too many doses of "there is nothing new under the sun" solipsism BS.
To extrapolate from "fewer people were formally educated or literate" to "intelligence wasn't valued" is absurd.
As for your part about reading and writing. Literacy has always been a very valuable skill that would increase your social standing. It was scarce and difficult to acquire before the printing press, but it was always valuable.
You might still only be a farmer if you're smart, but you can at least be one of the more productive farmers with a more smoothly running farm.
None of this "Jack inherits the farm but wants to live in the big city and be an architect." He'll inherit and keep the farm, because there is no architecture job to be had.
As someone who grew up on a farm, "you may be a farmer but you could be a productive one" is so intensely depressing. Farming is a shitty job that requires insane amounts of back-breaking labor, never-ending toil, and all this at a time when climate change is going to utterly fuck over farmland and destroy crop yields.
People seriously underestimate how underpowered and tiny LLMs are for the tasks they need to solve.
A trillion parameter model can't tell the difference between left and right. We will need to grow them millions to trillions of times before they are half as good as AI boosters claim they are.
This isn't the end of thinking any more than the watt steam engine was the end of horses. It will be centuries before we get there. And by that point the difference between man and machine will be at best academic.
This is a "noble savage" conception of the past. Thinking/cleverness/craftiness was highly rewarded even in preliterate societies. Even in war, "polytropos" Odysseus comes out ahead of the dumb brutes with bigger spears.
"Oh well, we were in an anomalous time of social growth, time to go backwards! We won't even need to read or write or think! It's all just too bad, but that's just the way the world works, like it did in 1800." [or pick your date before any current person was alive]
Lots of people have started considering a time of significant "progress" as "an anomaly", as if the world should always just be the way it was in, say, 1800, like that was actually the realistic pinnacle of human society. You also seem to be loosely basing this argument on the availability of "rich nerds", which seems like a bizarre non-sequitur. Computing once didn't exist, and we still valued reading, writing and thinking.
I'm kind of baffled by how regularly I see comments like this. Like, come on. This is basically the AI black pill, no?
I think, in a way, the Internet itself is the virus. It has infiltrated us and our minds. Rage and suffering are what get clicks and engagement. The Internet has become a suffering engine, which spins angst into gold.
You're ignoring the astronomers of ancient Mesopotamia, the scribes of Egypt, the grammarians of India, the philosophers of ancient Greece, the orators of Rome, the physicians of the Islamic world, the scholars of the Middle Ages, the masters of the Renaissance, and all the great natural philosophers, mathematicians, physicists, biologists, of all the ages up to 1800.
We are a technological civilisation, a scientific civilisation. Who do you think comes up with all the technology? Alexander, the Great Butcher? Attila the Hun? Genghis Khan?
We live in the civilisation that was born in Athens, not in Sparta. Knowledge and wisdom always are the greatest power that shapes reality. This won't change just because OpenAI made a viral app.
Then consider the role of the clergy in the Middle Ages, to say nothing of Rome and its large bureaucracy (Roman engineering alone).
On top of this you need to ignore very large bureaucracies and trading networks in Asia to go far with your narrative (Persian, Turkish, Mongol, Indian and Chinese).
There were a good many powerful nerds before the 1700s.
My point here is that the prevalence of smart people in the population exceeded the demand for them until the second half of the 20th century. Then, for one human lifetime, large numbers of smart people were needed. That period may be ending.
It may have already ended. In the US, about half of college graduates have jobs that don't require a college degree.
This is some weird manosphere bullshit. Pre-industrial societies invented philosophy and writing. People across the world know the name of Socrates from 2500 years ago. They know stories from Homer 2800 years ago.
It's a mistake (a) to think pre-industrial people were grug-brained cavemen (b) that we're going to revert into the same cavemen because a computer can do your pointless six-figure office job.
Today? On an engineer's salary? Ha!
Which you mostly don't need today. You may have a lawn guy, take your car to a mechanic, and use a washer/dryer to do most of your laundry.
Sorry but that is just not true.
Sure, farmers aren't academics, but the sheer number of tasks, and of tools required to do those tasks efficiently, was vast. Innovation in winnowing was literally life and death, as was selecting for plant/animal breeding traits.
Observing and reacting to changes in plants, lands, water, animals was critical to getting a good harvest. Packing and storing food was critical to surviving the winter.
Sure, the lack of literacy hampered knowledge recording and dissemination.
BUT, if we mine the vast memory that is the classics, knowledge, wit, and cleverness were prized as often as strength and beauty.
AI is not a problem because it is AI. It is because of political circumstances.
Think beyond the small worldview where technology and valuation are everything and you are just a pawn. Then you see that a better world is possible. The first step is then to not give up.
The premise here is that AI works well enough to automate the "smart" people jobs. No one but delusional workaholics is afraid that their job will get automated because they cling to the job in itself. So clearly, this is not about the tech itself.
There wasn’t a college boom post-WWII because technology came and demanded it.
> That may be where we go once AI does the thinking. That's where we go when smarts are not a scarce resource.
Take me by the hand, circumstance. I am yours to be swept away.
There are jobs that demand robustness, but they are about applying knowledge in extreme conditions, not about letting an AI do the thinking.
Not really disagreeing with you, but there are a few obvious examples. A lot of construction jobs are still labour intensive, and I've seen a lot of people who don't last the first day, let alone their first week. Security jobs, say in nightclubs, also value physical robustness. Orderlies in hospitals require the ability to move bodies, alive and otherwise. The machine is usually better.
The same goes for other occupations, and...farming. Breeding cattle is a complex science, so is growing crops consistently and valuing the production.
When nail guns and cheap power tools came in, every other yokel was suddenly a builder. That is when I got out.
> So what was valued? ...
None of that was valued much compared to lineage, though.

Every professor at any university has a dozen more project ideas than they have graduate students, every factory boss has a dozen more optimisations than ways to implement them, and looking up into the night sky we have 95% of it that cannot be explained.
The gap is not too few smart people, nor too few "jobs" that need smarts. The gap is being prepared to arrange society and wealth so the "job" is discovery, science, sharing. We are no longer hunter gatherers, no longer a feudal society, perhaps we shall stop being whatever this one is and try a new one.
(And no, I don't think there is a name for the new one yet; it's not socialism, and maybe not capitalism.)

Let's just not fall back to feudalism if we can help it.
We don’t use chimpanzees for any knowledge work today, even though they’d be better at it than some other animals.
It’s ownership of capital and technology.
Plumbers aren’t suddenly getting rich. At best they’re not losing jobs at the rate everyone else is, but once enough people lose jobs, they can’t afford plumbers either. So even plumbers are worse off, even if they’re not as badly off as the rest of us.
And all this assumes that humanoid robots don’t develop and succeed which is a major assumption.
Dude, we already live in a world dominated by capital that displaced the need for brute strength. Just go to a construction site in a developed economy, where the returns to the owners of assets drive continual reinvestment.
Posts like this expose how many here think they bring something to the table when it’s just noise.
Knowledge is a unique resource compared to the other traditional factors of economic production (land, labor, and capital). It is often invested in with capital (education and tools), but it is carried with the human, and leaves with them. It is always decaying - knowledge workers should be in constant learning mode, and stale knowledge eventually becomes a drag on performance.
I'd argue the future is about knowledge workers all becoming managers. When you use agentic AI, it has the flavor of the skills of management. Management is "a practice and a liberal art", according to Drucker, one that has been in poor supply for some time. LLMs have somewhat stale knowledge and require the human, tools, and RAG to freshen it. And LLMs will always regress to the mean. They are pretty good at pattern analysis and start to get shaky and mediocre with synthesis. It requires very nuanced and elaborate prompting to shape their token output towards insightful results that aren't a standard answer. For coding exercises, that can be fine, but at high complexity levels, or when dealing with issues of strategy or evaluation, an LLM is a platitude generator and has no unique competitive advantage.
In other words, competent, talented management mixed with knowledge work is the scarcity we are heading towards. This is arguably why you're seeing the rise of "markdown frameworks" that people swear improve performance, it's the beginnings of management scaffolding for AI.
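To make that concrete, here is a minimal sketch of what such management scaffolding might look like; the function and field names are hypothetical, not from any particular framework:

    # Hypothetical sketch of "management scaffolding" for an agent: the human
    # acts as a manager, turning a vague goal into a brief with context,
    # constraints, and acceptance criteria before the LLM ever sees it.

    def management_brief(goal: str, context: list[str], constraints: list[str],
                         acceptance: list[str]) -> str:
        """Assemble a structured, manager-style prompt from its parts."""
        sections = [
            "## Goal\n" + goal,
            "## Context\n" + "\n".join(f"- {c}" for c in context),
            "## Constraints\n" + "\n".join(f"- {c}" for c in constraints),
            "## Acceptance criteria\n" + "\n".join(f"- {a}" for a in acceptance),
        ]
        return "\n\n".join(sections)

    print(management_brief(
        goal="Refactor the payment retry logic.",
        context=["Retries live in billing/retry.py",
                 "We saw duplicate charges last week"],
        constraints=["No schema changes", "Keep the public API stable"],
        acceptance=["All existing tests pass",
                    "A new test covers the duplicate-charge case"],
    ))

The work of writing that brief, deciding scope, constraints, and what "done" means, is exactly the management skill in short supply.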
Technical folks struggle with valuing management skills, and I expect this will increase its value and scarcity.
As for "Physical robustness. Strength, perhaps brutality. Competence in physical tasks." I think the robots will be replacing that pretty shortly.
"Honesty. Parentage. Birth order (see primogeniture.) Those matter in per-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values."
Ehhhhh not really? What about Christianity, where the meek shall inherit the Earth, and love is the core value (putting aside modern day Pharisees and Charlatans that twist the underlying value system)? Or Islam, whose core value is submission to God? While there have been societies that valued parentage and birth order, that's far from universal.
This leads to the reformulation of knowledge workers as "human capital", and it's hardly post-capitalist. A capitalist society is one where people assemble different forms of capital to produce capital returns that are larger than the sum of the capital inputs, where the possibilities available to you depend on the amount and quality of capital that you have access to. This is all still very relevant when discussing human capital - access to human capital is determined by the quality of your professional networks, whether you decide to be present in geographic talent clusters (i.e. cities as centers of industry), and whether you have sufficient financial capital available in trade.
AI will not transition us to a post-capitalist society. Its promise is solely the ability to replace human capital with other forms: chips and electricity. It does not spell the death of human labor any more than computers and spreadsheets did for accountants.
Here are some words to live by[0]. I don't agree with everything Derek Sivers says, especially about philosophy. It's more of a guiding principle that drives rather than divides.
The comparison to the adoption of automobiles is apt, and something I've thought about before as well. Just because a technology can be useful doesn't mean it will have positive effects on society.
That said, I'm more open to using LLMs in constrained scenarios, in cases where they're an appropriate tool for the job and the downsides can be reasonably mitigated. The equivalent position in 1920 would not be telling individuals "don't ever drive a car," but rather extrapolating critically about the negative social and environmental effects (many of which were predictable) and preventing the worst outcomes via policy.
But this requires understanding the actual limits and possibilities of the technology. In my opinion, it's important for technologists who actually see the downsides to stay aware and involved, and even be experts and leaders in the field. I want to be in a position to say "no" to the worst excesses of AI, from a position of credible authority.
You say it in a way that it sounds like automobiles don't have a positive effect. I don't agree - they have some negative effects but overall they have a vast net positive effect for everyone.
The upsides of automobiles generally all exist outside of the 'personal automobile', i.e. logistics. These upsides and downsides don't need to coexist. We could reap the benefits without needing to suffer for it, but here we are.
The psychological effects of this are enormous and under discussed.
It's not like you're living away from any people - you have 100 other neighbours living on your street!
Yes, you could say that, though I'm not sure who would actually say that seriously.
Today I find myself in an urban hellscape without owning a vehicle. Nothing is walkable. I am crammed in, thanks to Equal Housing, with immigrants and people of utterly alien races and cultures (I consider myself the minority.) If I expect to find people like me or shop within my demographic, nothing is adjacent and it’s all several miles worth of transportation.
Car culture and forced integration has fragmented every possible family unit that could have been cohesive or collectivist. If I am celebrating a religious or cultural festival, I can count on none of my neighbors sharing that celebration, or in fact raising conflicts on the days most sacred to me.
Anywhere I may choose to walk, or even if I drive, I am trudging through vast empty parking lots of asphalt because of cars. The roads are laid out for cars. A cop told me yesterday I shouldn’t drive my e-scooter at 17mph in the street but on the sidewalk. Every motorist also hates those scooters, whether in motion or properly parked. Every motorist also hates the light rail train and hate for Waymo is fomented by motorist and pedestrian alike.
There is no place I could move to or live that would change this equation in any useful way. I do not hate cars, but I hate what they have done to our lives and our landscape.
We can argue about whether this is a good trade off, but the claim that cars make everyone's life better is straightforwardly false.
The only way you receive food (except from your backyard inner-city garden?) is through people DRIVING. The way you receive packages is by DRIVING. The city infrastructure you enjoy is maintained through skilled laborers and tradespeople DRIVING.
The problem is we are numb to it. 40,000+ people are killed in car accidents every year in just the USA. Wars are started over oil and accepted by the people so they can keep paying less at the pump. Microplastics enter the environment each day, along with particulates from brakes and exhaust. Speaking of exhaust: global warming. Even going electric just shifts the problems, as we need to dig up lithium, the new oil. We still have to drill for oil for plastics, and for metal refining, recycling, and fabrication.
I'm not sure what the alternative would be. Maybe everyone lives in giant 10 million+ population cities that are all connected to each other by rail (and rail connects all airports, harbors, etc.) and then you have to show up at rail station to get your groceries or whatever else?
All they saw was that trips taking a day could now be done in an hour and produced no manure, and that meant suddenly you could reasonably go to many more places. What's not to like? A Model T was cheap, and you didn't even need to worry about insurance or having a driver's license. Surely nobody would drive so carelessly as to crash.
*well, not technically nobody, but nobody important.
What’s really interesting is that you can find newspaper columns in the 1920s recognizing what we now call induced demand as even by then it was clear that adding road capacity simply inspired more people to drive.
Today we have a much better understanding of the world, so we have the means to think ahead about the negative effects of LLMs and course-correct if needed.
I don't see anything positive about being forced to participate in this car-ownership game where 99% of North American cities are designed around car ownership, and if you don't own a car you're screwed. I don't WANT to own a car, I don't want to direct countless thousands of dollars to a car note, car maintenance, gas, etc. I want the freedom to exist without needing to own an absurdly expensive vehicle to get myself around. There's nothing freeing or positive about that unless all you've ever known and all you can imagine is a world in which cities are designed around cars and not people.
Really, these people decades ago had a great grasp on these things. But why did they "fail" and we still have traffic? They didn't fail really; what failed was implementation, not planning. Most cities you see with notorious traffic today, chances are the bottlenecks that exist were planned to be relieved by some midcentury road plan that was, for whatever reason, never built. Comprehensive rapid transit was often also planned, several times over, but not built or at least never to the full scale of those plans. The catalytic converter was also a great success that people today probably don't even think about. You can see the mountains again in California's cities thanks to the catalytic converter.
Leaded gas took longer, but I'd say the tailpipe pollution, congestion, and general capacity related issues were well understood.
Yes, they ship people around somewhat fast. Slower than possible with other methods, and the cost is incredible: economic (much more expensive per passenger than almost any alternative), political (they inherently divide people, dehumanise, and make people never really share a public space), and health (they reduce lifespan by both lowering living quality and directly killing a staggering number of humans per year).

And we have learned how to build better places for humans that do not need these coffins on wheels. If you visit any European capital, and most Asian ones, you will see environments built for humans, not cars; so much nicer.

So cars as a technology have definitely not been beneficial to humanity overall, but they have been somewhat useful to some.
I think Strong Towns were very good advocates of what places in America could look like if you look beyond cars. I personally like the “Not Just Bikes” channel though.
Cars aren't a positive in society. Transportation is the benefit, and cars are the worst possible way to transport people. A functioning public transit system is better in every possible way apart from egotistical arguments like "I don't like seeing poor people on the bus".
A lot of this comes down to having too much of a good thing. We are really bad at detecting when we've gone past the point of too much, and we're even worse at undoing it once we have.
Kind of like how fat and salt are good for you until you overconsume. The world has massively overconsumed cars.
We (or lobbyists) resist having carbon costs included in the prices we pay at the pump.
Edit: More transportation is good; I am not throwing the baby out with the bathwater, just that our accounting for costs makes things look better than they are.
The difference, however, is network effects. When we make a place better for cars, we make it worse for pedestrians. Your adoption of the car, and its pressure on my lived environment, has effects on me. Same as, say, people joining Facebook or Twitter. But do LLMs create network effects that are directly harmful, or is it just a matter of making it harder to compete, just like a mechanical watchmaker has less business now that it's so easy to have a reliable clock? Because the first case is a problem, but the second one... that's competition. It's civilization. And then it's not really a matter of cars vs pedestrians.
An analog might be the push for banning phones in schools. Setting apart times and spaces where serendipitous human interactions are encouraged by the lack of distractions.
> Now might be a good time to call your representatives.
Well, not that must-read I guess
Musk's SpaceX keynote was ridiculous, don't get me wrong, but we will be able to see AI progress in the next 5 years, which will give us some kind of gut feeling for where the journey can go.

Also, AI solves another problem: compute. It was clear that we want some kind of compute, but it's like with 4K; we have had 4K for ages now, but it is not the default resolution on all displays sold. We stopped pushing the boundaries because the investment is not there. People do not bother too much with it.

With AI, the richest companies and people want to see what happens, which pushes the envelope a lot faster and pushes us to find solutions.

This AI compute, based on ML/neural networks, can also be used for physics simulation, protein folding, and everything else.

Stopping technology is not an option and not a solution. Education is. We need to educate people.
The problem is that the connectivity required for much of AI is very different than that required for classic HPC (more emphasis on bandwidth, less on super low latency small payload remote memory operations) and the numeric emphasis is very different (lots of mixed resolution and lots of ridiculously small numeric resolutions like fp8 vs almost all fp64 with some fp32).
The result is that essentially no AI computers reach the high end of the TOP500.
The converse is also true, classic frontier scale super computers don't make the most cost effective AI training platforms because they spend a lot of the budget on making HPC programs fast.
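A back-of-the-envelope sketch of why the numeric emphasis matters so much; the bandwidth and model-size figures below are illustrative assumptions, not benchmarks:

    # Rough illustration: at fixed memory bandwidth, narrower number formats
    # let you stream many more values per second. All figures are assumptions.

    BANDWIDTH_BYTES_PER_S = 3e12   # assume ~3 TB/s of HBM bandwidth
    PARAMS = 1e12                  # assume a 1-trillion-parameter model

    for fmt, bytes_per_value in [("fp64", 8), ("fp32", 4), ("fp16", 2), ("fp8", 1)]:
        footprint_tb = PARAMS * bytes_per_value / 1e12
        sweep_s = PARAMS * bytes_per_value / BANDWIDTH_BYTES_PER_S
        print(f"{fmt}: {footprint_tb:.0f} TB of weights, "
              f"~{sweep_s:.2f} s to stream them once")

At a fixed memory-bandwidth budget, fp8 streams eight times as many values per second as fp64, so hardware tuned for one workload looks lopsided when benchmarked on the other.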
This tech is 100% aligned with the goals of the 0.001% that own and control it, and almost all of the negatives cited by Kyle and likeminded (such as myself) are in fact positives for them in context of massive population reduction to eliminate "useless eaters" and technological societal control over the "NPCs" of the world that remain since they will likely be programmed by their peered AI that will do the thinking for them.
So what to do entirely depends on whether you feel we are responsible to the future generations or not. If the answer is no, then what to do is scoped to the personal concerns. If yes, we need a revolution and it needs to be global.
It can't. It can't even deal with emails without randomly deleting your email folder [1]. Saying that it can make decisions and replace humans is akin to saying that a random number generator can make decisions and can replace people.

It's just an automation tool, and just like all automation tools before it, it will create more jobs than it destroys. All the CEOs' talk about labor replacement is a fuss, a pile of lies to justify layoffs and a worsening financial situation.
[1] https://www.pcmag.com/news/meta-security-researchers-opencla...
The combination of these two things could lead to a situation where there is a massive, startup-dominated market for engineers who can take projects from 0.5 to 1, as well as for consulting companies or services that help founders to do the same.
Another pair of hopes is that a) the LLM systems plateau at a level where any use on complex or important projects requires expert knowledge and prompting, and b) that because of this, the hype of using them to replace engineers dies down. This would hopefully lead to a situation where they are treated like any other tool in our toolbox. Then, just like no one forces me to use emacs or vim (despite the fact that they unambiguously help me to be at least 2x more productive), no one will force me to use LLMs just for the sake of it.
Focusing on option 2 and software development, teams and companies will only downsize if the demand for software doesn’t increase. Make the same amount of stuff you do now but with less people.
What I think will happen is that enough companies will choose to do things that they couldn’t afford or weren’t possible without AI (and new companies will be created to do the same) to offset the ones that choose to cut costs and actually increase the amount of people making software.
I am pretty sure these are well known economic ideas but I don’t know the specific terminology for it.
More armies of one. That single team of five now becomes 30 teams of one or two each.
Which is largely how automation resulted in more jobs: the cost decrease induced demand. Think about how cheap cameras, laptops, and internet up-ended traditional media. We went from 3-4 channels on TV in the 60s, to 300-400 channels on cable by the 90s, to 115 million channels on YouTube right now. Because anyone with a basic phone can record and edit content, which used to require millions of dollars in equipment and took years to learn to do. And people are happy to do so for a fraction of the revenue a TV station would require.
And early cars were expensive, dangerous, highly unreliable, uncomfortable, belched foul exhaust, and required knowledge of how to drive AND maintain them. We are far, far from that scenario these days.
There are many studies concluding that for some tasks, experts make decisions that are no better than a dice roll, sometimes worse. So the game here is not to make good decisions, but to make a convincing argument. And it is something LLMs are really good at.
And it is ironic because it matches the job of a CEO pretty well. CEOs often make decisions with high uncertainty, the kind where it is hard to beat random, and they are expected to communicate with authority.
It doesn’t have to be effective. It has to make CEOs believe it is effective.
Random number generators can't solve open math problems, but it looks like AI agents can? [1]
[1] https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...
I don't think the comment you're replying to is saying that an evil AI bot will kill people. They are saying something along the lines of: mass job loss doesn't bother the AI companies because in the AI-powered future they envision, population reduction is a positive side effect.
If AI is smart enough to replace the 99.999% it's also smart enough to replace the 0.001%.
The 99.999% needs to assert their controlling stake in the technology. I don't know what this looks like. Maybe ubiquitous unionizing, coupled with a fully public and openly-trained LLM.
https://www.bentoml.com/blog/navigating-the-world-of-open-so...
Energy. The key is controlling their access to energy.
But that doesn't really matter when we talk about "replacement" because these people don't "do" they simply "own".
They're not concerned about being outpaced at some skill they perform in exchange for money...they just need the productive output of their capital invested in servers/models/etc to go up.
What's important is that ultimately some small subset owns this, and it doesn't matter how smart they are, only that they own the thing and that it cannot be employed against them (because they hold the key).
General strike and bank runs.
Not to collapse the economic system, but to present a credible threat of collapsing the economic system which AI development, as these elite and their platforms know it, relies on. When they're freaking out, we call for negotiations.

This only works if people with "secure" livelihoods not just participate, but drive the effort. Getting paid six figures or more in a layoff-proof position? Cool, you need to be the first person walking out the door on May 1st (or whenever this happens), and the first person at the bank counter requesting your max withdrawal.
Collective humanity needs to think this matter through and take global action. This is the only way I fear, short of natural calamities (act of God) that unplugs humanity from advanced tech for a few generations again.
As for bank runs, no one cares. The big banks no longer need retail customer deposits as a source of capital for fractional reserve lending. Modern bank funding mechanisms are more sophisticated than that.
In which the FDIC took unprecedented action, drawing down the DIF to backstop depositors beyond the insured $250k and offering a credit facility to other banks, in order to prevent "contagion" - a panic, a bank run - which was presumed to be likely after the 3rd largest bank collapse in US history. A bank almost no one outside of California had heard of before it died.
Bank runs are serious business, and far from being something "no one cares" about, even just talking about them makes banks nervous, because they can happen to even "healthy" banks. The big banks have been undercapitalized for more than a decade, and even a moderate run on a regional institution threatens the entire system. Which is why it should be done, or at least signaled as incoming; it's good leverage.
>You're free to take a vacation or quit working if you want to. Go ahead.
The implicit, "I'll stay here, where I'm nice and secure," is delusion. People care about your outcomes even if you don't care about ours. Take the invitation to organize with others to secure your own future, to show just how much you're needed before your employer decides that you're not (however erroneously).What? I don’t know anybody who has a layoff-proof position.
Or things could turn out more than fine and we progress as we've always progressed, towards more abundance and humans in 30 years will live massively better lives than we live today, just as we live massively better lives than people at just about any previous point in history.
>“There are people sitting in our office in King’s Cross, London, working, and collaborating with AI to design drugs for cancer. “That’s happening right now.” https://www.htworld.co.uk/news/research-news/isomorphic-labs...
and
>...enables researchers to move seamlessly from AI-generated sequences to functional antibodies in just days https://the-decoder.com/googles-ai-drug-discovery-spinoff-is...
There may also be downsides, like skipping testing things that would enhance our fundamental understanding of something because the AI was wrong. But that’s already a problem, and having a better gauge in the early stages could be really helpful.
But LLMs' compute requirements are so high that they push the boundaries of compute, memory, and memory bandwidth, which is fundamental for curing diseases.

The math/neural networks behind LLMs can be and are used for medical research. Simulating a whole body with proteins, cells, etc. will bring us the breakthrough we need.

Nothing in modern medical research is done without compute.

AlphaFold definitely helps researchers around the globe.
> The people who brought us this operating system would have to provide templates and wizards, giving us a few default lives that we could use as starting places for designing our own. Chances are that these default lives would actually look pretty damn good to most people, good enough, anyway, that they'd be reluctant to tear them open and mess around with them for fear of making them worse. So after a few releases the software would begin to look even simpler: you would boot it up and it would present you with a dialog box with a single large button in the middle labeled: LIVE. Once you had clicked that button, your life would begin. If anything got out of whack, or failed to meet your expectations, you could complain about it to Microsoft's Customer Support Department. If you got a flack on the line, he or she would tell you that your life was actually fine, that there was not a thing wrong with it, and in any event it would be a lot better after the next upgrade was rolled out. But if you persisted, and identified yourself as Advanced, you might get through to an actual engineer.
> What would the engineer say, after you had explained your problem, and enumerated all of the dissatisfactions in your life? He would probably tell you that life is a very hard and complicated thing; that no interface can change that; that anyone who believes otherwise is a sucker; and that if you don't like having choices made for you, you should start making your own.
Imagine starting university now... I can't imagine having learned what I did at engineering school if it weren't for all the time lost on projects, on errors. And I can't really think that I would have had the mental strength required to not use LLMs on course projects (or side projects) when I had deadlines and exams coming, yet also wanted to be with friends and enjoy those years of my life.
* They are growing up in a climate that is worse than any prior generation had and getting worse.
* In the US, they are growing up in a time with less upward mobility and more economic inequality than the previous several generations had.
* Trust in social institutions and government is crumbling before their eyes.
* Blue collar jobs are already gone and white collar jobs have no certainty because of AI. Almost all of the money has already been sucked out of artistic professions and what little is left is quickly evaporating because of AI.
Imagine you're 17 like my daughter and trying to decide what to major in in college. You want to pick something that you think is likely to give you some kind of decent career and sense of stability. What do you pick?
Because, I'll tell you, she asks me and I have no fucking idea what to say.
When I was in school (US, Ohio, 48 y/o) we got the "if you don't go to college you'll flip burgers" spiel from our teachers / guidance counselors.
Last week she got a variant of that except the teacher thoughtfully added "and burger flipping will be done by robots so you can't even fall back to that". The teacher threw in a healthy dose of suggesting creative jobs will all be destroyed and that "learning to manage AI" is the only viable future career path.
Trades are what my daughter brought up as a viable career path (and I was proud when she did). She also pointed out her school focuses heavily on "college prep" and is loath to even mention that trades exist.
Edit:
I'm telling my daughter to lean on her interpersonal skills and charisma, and take every opportunity to lead groups. Being a physically present, inspirational, and effective leader is, I figure, a role that isn't going to go away any time soon.
I didn't go to college (beyond an Associate's I grudgingly completed) and I didn't end up "flipping burgers". I concentrated on marketable skills in an industry that was growing, and I leaned into good writing and communication, and entrepreneurship. I've tried to hold this up to her, though I am quick to concede that the world is different now, by a large margin, from when I got started.
This isn't true at all. There's never been a better time to be in the trades.
I see these types of jobs flourishing in my community. My barber is fully booked for the next month, and a hair salon owner in my street bought a new property and started a second hair salon... In the same street! And the second salon is also fully booked.
Those days of grinding on some grad school maths homework until insight struck.
Figuring out how to configure and recompile the Linux kernel to get a sound card driver working, hitting roadblocks, eventually succeeding.
Without AI on a gnarly problem: grind grind grind, try different thing, some things work, some things don't, step back, try another approach, hit a wall, try again.
This effort is a feature, not a bug, it's how you experientially acquire skills and understanding. e.g. Linux kernel: learnt about Makefiles, learnt about GCC flags, improved shell skills, etc.
With AI on a gnarly problem: It does this all for you! So no experiential learning.
I would NOT have had the mental strength in college / grad school to resist. Which would have robbed me of all the skill acquisition that now lets me use AI more effectively. The scaffolding of hard skill acquisition means you have more context to be able to ask AI the right questions, and what you learn from the AI can be bound more easily to your existing knowledge.
The problem is: (almost) nobody does that. You'll just ask Claude Code to fix the build, go grab a coffee and come back with everything working.
It's like the difference between hand-made furniture and IKEA.
Until OpenAI etc need to turn a profit.
Now, part of me thinks: is not letting students have AI like not letting them have a calculator? On the other hand, if I just let the AI do the exam, well, I don't really need the student at all, do I?
Same is true for your field now. When kids learn things the AI already knows, it's clear they can't use the AI.
If you want them to become smarter than the AI, they will have to pass through a period where they are dumber than the AI, and it's clear at that point they can't use it.
AI raised the bar, that's all. But it's still a bar that can be passed with human intelligence, and your job is to get them past that.
Like, years of manually studying, fixing, and reviewing code is experience that only pre-~2020 devs will have.

The intuitive/tacit knowledge that lets you look at code and "feel" that something is off with it cannot really be gained when using Claude Code; it just takes thousands of hours of tinkering.
It will suck if the job shifts to reviewing and owning whatever an LLM spits out, but I don't really know how effective new juniors are going to be.
True. Pretty soon, pre-AI devs may be the COBOL/Fortran engineers of this era: niche and hard to replace.
ML promises to be profoundly weird* - https://news.ycombinator.com/item?id=47689648 - April 2026 (602 comments)
The Future of Everything Is Lies, I Guess: Part 3 – Culture - https://news.ycombinator.com/item?id=47703528 - April 2026 (106 comments)
The future of everything is lies, I guess – Part 5: Annoyances - https://news.ycombinator.com/item?id=47730981 - April 2026 (169 comments)
The Future of Everything Is Lies, I Guess: Safety - https://news.ycombinator.com/item?id=47754379 - April 2026 (180 comments)
The future of everything is lies, I guess: Work - https://news.ycombinator.com/item?id=47766550 - April 2026 (217 comments)
The Future of Everything Is Lies, I Guess: New Jobs - https://news.ycombinator.com/item?id=47778758 - April 2026 (178 comments)
* (That first title was different because of https://news.ycombinator.com/item?id=47695064 - as you can see, I gave up.)
p.s. Normally we downweight subsequent articles in a series because avoiding repetition of any kind is the main thing that keeps HN interesting. But we made an exception in this case. Please don't draw conclusions from that since we'll probably get less series-ey, not more, after this! Better to bundle into one longer article.
2. Dynamics - https://aphyr.com/posts/412-the-future-of-everything-is-lies...
4. Information Ecology - https://aphyr.com/posts/414-the-future-of-everything-is-lies...
6. Psychological Hazards - https://aphyr.com/posts/416-the-future-of-everything-is-lies...
The next time someone asks me where I think AI is going, I'll just point them at this series.
I've had a tremendous amount of respect for you since I first encountered the Jepsen analyses, but your breakdown of the likely impacts of LLMs and ML may impress me more.
You've articulated very well several concerns of mine that I haven't seen anyone else mention, and highlighted other issues I had not previously recognized.
Thank you for publishing this now, when it could still have some influence, rather than polishing and researching and refining until it was thoroughly rigorous and too late to be relevant.
This is in jest, right? Yesterday HN posted a paper from 1983... I regularly see 20-year-old articles reposted; you've even got a section in your code of conduct arguing how you aren't Reddit despite the apparent nature of this site....
For community happiness, what matters is avoiding relative repetition. Absolute repetition isn't a problem because, once enough time has gone by, everything old becomes new again. It's like the second-hand clothing store in my home town that used to be called "New to You". (Actually it was called "New 2 You", but I'm pot-committed to the wrong spelling: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....)
In this sense, historical material usually counts as non-repetitive because it doesn't appear much and when it does, the topic is usually unlike anything else. It is, in fact, some of our most uncorrelated material, so we encourage it.
For example, today we have lunar dust, Trojan coins, and a lost folk singer. A few days ago we had APL source code, medieval pronouns, and lord knows what else.
In the case of classics—perennial submissions that most HN readers either know or would probably enjoy—we explicitly allow them to be reposted after a year or so, and often list the previous discussions because many readers enjoy scrolling back through those too. Here's a current example: https://news.ycombinator.com/item?id=47811186.
Let's presume there's a series on re-making the antikythera mechanism:
1. Metallurgy: finding, mining and smelting the ore
2. Building the tools (files, molds, etc)
3. Designing the mechanism
4. Making the parts (gears, bearings, etc)
Am I wrong, or is there no repetition, except maybe the title and calling it a series? Why reject parts 2, 3, 4?
(Edit: I just noticed that strbean already made this point in the sibling comment!)
Also: usually the splitting into a series is somewhat artificial. In the worst cases, people try to make the segments be like TV episodes with cliffhangers, to push you to the next bit. That's a poor fit for HN. But even when they don't, to get the full "meal" you still have to go through all the parts. Few people do that, and the threads as a whole never do. This makes it less interesting and satisfying.
But there can be exceptions, and (ironically?) featuring an occasional exception mixes things up and so reduces repetitiveness! The trouble is that once people see one exception, they immediately expect/want others, pushing things back into a repetitive sequence and making the site less interesting again. It's a bit like telling the same joke twice in a row—the interest is all in the first telling.
As someone that loves cleaning up code, I'm actually asking the vibe coders in the team (designer, PM and SEO guy) to just give me small PRs and then I clean up instead of reviewing. I know they will just put the text back in code anyway, so it's less work for me to refactor it.
With a caveat: if they give me >1000 lines or too many features in the same PR, I ask them to reduce the scope, sometimes to start from scratch.
And I also started doing this with another engineer: no review cycle, we just clean up each other's code and merge.
I'm honestly surprised at how much I prefer this to the traditional structure of code reviews.
Additionally, I don't have to follow Jira tickets with lengthy SEO specs or "please change this according to Figma". They just make the changes themselves and we go on with our lives.
At the moment I'm more looking at menial work for one of the local universities. Money is money, and my needs are small; the work is honest, I still should have a decade or so of physical labor left in me, and it carries the perk of free tuition for the degree I never had time for. I would have the time and energy to write, perhaps, even! And, however badly the people in charge are running things lately, the world will always need someone good at cleaning a toilet. (And I am already pretty good at cleaning a toilet!)
If my competitors are filling their flour with sawdust, guess I got to just do the same?
Having different opinions on AI/LLMs doesn't make the use of it the same as replacing flour with sawdust.
The AI image "slop", for example: I don't think it's bad. But I also don't think it takes anything from a real artist. It takes jobs from people with drawing skills, but it doesn't change anything for an artist.
"What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking-there's the real danger" - Frank Herbert, God Emperor of Dune
That said, there is no obvious reason to posit that the intergalactic feudal system, CHOAM, or the empire came to be because of the Butlerian Jihad. The concrete side effects of the jihad were in fact hyper-specialization of cognitive faculties in humans: Mentats, Guild navigators, and soldiers all possess superhuman specialized abilities.
The only thing I've really taken from what Herbert himself said, not something a character in one of his books said, is distrust of messiahs and centralized power being an inherently corrupting force, even in the hands of good people.
Unfortunately, I would have to say right now my bets on the most plausible fictional future becoming reality is WALL-E.
I always preferred this take:
“Civilization advances by extending the number of important operations which we can perform without thinking of them.” ― Alfred North Whitehead
It's both opposite and complementary to your Frank Herbert quote.
> “There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'” ― Isaac Asimov
The easier society makes it to be unaware of the complexity of everything around us, the easier it becomes to assume everything is actually as simple as our surface-level understanding.
On one hand I intuitively think this is correct, on the other hand these very concerns about technology have been around since the invention of... writing.
Here is an excerpt of Socrates speaking on the written word, as recorded in Plato's dialogue Phaedrus - "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom"
Learning how software is built is hard and gruelling work, and you need to constantly invest in yourself. Trouble is there is no time left to “go back to basics and learn FP” for example, because you also need to keep up with all the new LLM stuff happening on top of that.
It is easy for us who already have the foundational knowledge to be able to step back, take the wheel and try to do it ourselves, but plenty of people simply don’t have that option.
And I expect this trend to deepen and broaden. There will definitely be a lot more “witches” than actual engineers.
If they do it entirely using AI to code, and the end output is good enough, they'll learn all the right skills to do this.
Humans always think everything is sliding into doom, and inevitably, it doesn't.
But at the end of the day, does it really matter to most people how a car works? No. When it breaks, we still have professionals to fix it.
The same can be said about software. Does it actually matter how it gets built or what the code looks like, so long as it works and there are no security vulnerabilities? Not really. We will always need people who know how to debug/fix it, albeit that number might be smaller than it is today.
and while I know they can do the nitty gritty ui work fine, I feel like I can work just as fast, or faster, on UI without them than I can with them. with them it's a lot of "no, not that, you changed too much/too little/the wrong thing", but without them I just execute because it's a domain I'm familiar with.
So my general idea of them is that they are "90% machines". Great at doing all of the "heavy lifting" bullshit of initial setup or large structural refactoring (that doesn't actually change functionality, just prepares for it) that I never want to do anyway, but not necessary and often unhelpful for filling in that last 10% of the project just the way I want it.
of course, since any good PM knows that 90% of the code written only means 50% of the project finished (at best), it still feels like a hollow win. So I often consider the situation in the same way as that last paragraph. Am I letting the ease of the initial setup degrade my ability to set up projects without these tools? does it matter, since project setup and refactoring are one-and-done, project-specific, configuration-specific quagmires where the less thought about fiddly perfect text-matching, the better? can I use these things and still be able to use them well (direct them on architecture/structure) if I keep using them and lose grounded concepts of what the underlying work is? good questions, as far as I'm concerned.
Is there a single "document containing all the words," and it updates the website, pdf, and epub whenever you change it?
What struck me was that the presentation is beautiful. It seems worth emulating. But that raises the question of what format you'd write your original words in. Do you suppose they just use Markdown files, or something more elaborate?
For my blog, I write posts in Markdown. Then the build process uses Pandoc to convert the posts into web pages and, for certain pages, PDF files typeset with TeX. For example, here is a post in both web and PDF versions:
(Web) https://blog.moertel.com/posts/2024-08-23-sampling-with-sql....
(Typeset PDF) https://blog.moertel.com/images/public_html/blog/pix-2024060...
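If it helps, the heart of that kind of pipeline is just two pandoc invocations. A minimal sketch, assuming pandoc and a TeX engine are on the PATH; the paths are placeholders, not my actual build:

    # Minimal single-source build: one Markdown file in, HTML and PDF out.
    # Assumes `pandoc` and a TeX engine (e.g. xelatex) are installed.
    import subprocess

    SRC = "posts/example-post.md"  # placeholder path

    # Markdown -> standalone HTML page
    subprocess.run(["pandoc", SRC, "--standalone", "-o", "example-post.html"],
                   check=True)

    # Markdown -> PDF, typeset via TeX
    subprocess.run(["pandoc", SRC, "--pdf-engine=xelatex", "-o", "example-post.pdf"],
                   check=True)

The real build adds templates, metadata, and site scaffolding on top, but the single Markdown source stays the one document containing all the words.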
The best I was able to do was http://github.com/shawwn/wiki, which has been broken since 2020. You can't spell Haskell without Hell.
If you wouldn't mind pulling up a `claude` in your repo and running `/init` and showing the result, that'd give me at least a vague idea of what to do.
> This piece, like most all my words and software, was written by hand—mainly in Vim. I composed a Markdown outline in a mix of headers, bullet points, and prose, then reorganized it in a few passes. With the structure laid out, I rewrote the outline as prose, typeset with Pandoc. I went back to make substantial edits as I wrote, then made two full edit passes on typeset PDFs. For the first I used an iPad and stylus, for the second, the traditional pen and paper, read aloud.
https://gist.github.com/aphyr/6f0cd6910ccfe2cd7828d1ade2eac5...
AI doesn't get most of its value from someone just using it. Here's my personal take on what we should do, starting with the most impactful:

* Cut off the low-entropy sources. This includes open source, articles (yes, the one above will feed the machine), and thoughtful feedback (the kind that generates "you are absolutely right" BS).

* Cheer the slop. After some time fighting slop in my circles, I found it's counter-productive because it wastes my resources while (sometimes) contributing to slop creators. A few months ago it started as a joke, because I thought the problem was too obvious, but instead the slopper launched a CRM-like app for a local office with client-side authentication and in-memory (no persistence) backend storage. He was rewarded something at the local meeting. The more stories we have like this, the better.

* Use AI to reply, review, or interact with slop in any way. Make it an AI-only reply by prompting without adding any useful information. One example was an email, pages and pages of generated text, asking me to collect some data and send it back. The prompt was "You are {X} and got this email, write a reply".
Somehow we got talking about AI in some depth, and at one point the VC said: “I don’t know what our kids are going to do for work. I don’t know what jobs there will be to do.”
That same VC invests in AI companies and, from what I heard about her, has done phenomenally well.
I think about that exchange all the time. Worried about your own kids but acting against their interests. It unsettled me, and Kyle’s excellent articles brought that back to a boiling point in my mind.
Edit: are->our
> That same VC invests in AI companies and, from what I heard about her, has done phenomenally well.
Her kids will be fine; it's the vast majority of other, non-wealthy kids who are in trouble.
Not that we're in any way on that path, of course, with the people making the working machines also accumulating all the wealth. But still, there's something intrinsically good about automation, even when the system is not suited for it.
I want my ai to do dishes and laundry so I can write, draw, do deep cognitive work.
Not for it to do cognitive work and write and draw while I do dishes and laundry.
Personally, I think until real AGI, the current LLMs will automate a lot of tasks, but the market will adapt and humans still end up with about the same percentage of employment and wages.
I will encourage my kid to gain independence, but of course I'm worried about it! The fact that there is uncertainty in her independence and that I can imagine bad outcomes does not mean I'm working against her interest by encouraging it.
"I don't know what jobs there will be to do" is a statement of uncertainty, and, given how you are relaying it, there must have been fear there as well. But it doesn't seem like it's a statement that the world will be worse. You can be fearful and hopeful at the same time, and fear tends to be the stronger of the two, and come out more strongly, again especially in parenting I find, even if you find the hopeful outcomes more likely.
(note that even the "her kids will be ok" isn't true at the limit. If wealth concentrates sufficiently, it will lead to societal collapse)
I think that would democratize some of the power. Then again, I haven't been super impressed with humanity lately and wonder if that sort of democratization of power would actually be a good thing. Over the last few years, I've come to realize that a lot of people want to watch the world burn, way more than I had imagined. It is much easier to destroy than to build. If we make it easier for people to build agents, is that a net positive overall?
If we make it easier for people to drive and have cars, isn't that a net positive? If we make it easier for X, isn't that better? No, not necessarily; that's the entire point of this series of essays. Friction is good in some cases! You can't learn without friction. You can't have sex without friction.
It's the first step on the road to hell.
(Do you not realize how crazy the entire premise here is? Imagine someone in 1975 saying that ARPANET has been up for years so everything there is to know about networking technology has probably been found already.)
"Unavailable Due to the UK Online Safety Act [...] Now might be a good time to call your representatives."
So I fired up a VPN, and it appears to be a personal blog about AI risks.
The geo-block is kind of a shame, as the writing is good and there appears to be nothing about the site that makes it subject to the OSA.
The regulators of the OSA say otherwise. Or at any rate, they refuse to agree and won't rule it out.
____________
For the geo-blocked, reproducing relevant content [0]:
> A few months back I wound up concluding, based on conversations with Ofcom [1] that aphyr.com might be illegal in the UK due to the UK Online Safety Act.
> [...] This blog has the same problem: people use email addresses to post and confirm their comments. I think my personal blog is probably at low risk, but a.) I’d like to draw attention to this legislation, and b.) my risk is elevated by being gay online
[0] https://aphyr.com/posts/395-geoblocking-multiple-localities-...
[1] https://blog.woof.group/announcements/updates-on-the-osa
OFCOM has a q&a-based tool [1] to advise on whether the OLSA applies to a site. I'm not a lawyer, but its pretty clear that a non-adult-themed personal blog where people can post only textual comments on content that they dont control, is not going to be subject to the act.
[1] https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
We have already passed the critical point. The LLMs, the agent harnesses are here. There is too much willpower, capital, and risk behind these technologies now—the automobile has landed, thousands of people have purchased it already, protesting the car won't undo it at this point.
What you can do that will be meaningful is to instead understand the new car, and understand it deeply. Use that understanding to carry the values you care about into the new world and re-articulate them. Make the car safer; push for tactical regulations on it. If you are privileged enough to be able to forgo its use entirely, sure, but that advice is not uniformly applicable. People forget that being able to simply opt out of certain things is often only viable when you are already in a certain position. What we really need is for the heavy skeptics to stop falling for the Luddite temptation and to start bringing their critical lens to bear in positive ways on this new technology, to make it safer and better. By opting out and staging a feeble resistance you won't do anything other than let the current dangerous power consolidation continue.
That said, the final point is one I take issue with:
> For example, I’ve got these color-changing lights. They speak a protocol I’ve never heard of, and I have no idea where to even begin. I could spend a month digging through manuals and working it out from scratch—or I could ask an LLM to write a client library for me. The security consequences are minimal, it’s a constrained use case that I can verify by hand, and I wouldn’t be pushing tech debt on anyone else. I still write plenty of code, and I could stop any time. What would be the harm?
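For scale, the kind of client library being described might be no bigger than this. To be clear, the protocol below is entirely invented for illustration (a UDP socket taking three raw RGB bytes); the post doesn't say what the real lights speak.

    # Entirely hypothetical sketch of a tiny light-client library,
    # assuming a made-up UDP protocol that accepts raw RGB bytes.
    import socket

    class Light:
        def __init__(self, host: str, port: int = 5577):
            self.addr = (host, port)
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        def set_color(self, r: int, g: int, b: int) -> None:
            # One datagram per color change; easy to verify by eye.
            self.sock.sendto(bytes([r, g, b]), self.addr)

    # light = Light("192.168.1.50")
    # light.set_color(255, 0, 128)

That is exactly the "constrained use case that I can verify by hand" shape: the blast radius is one lamp.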
To me, there is no intrinsic value in solving this problem other than the rote problem-solving reps that make you a better problem solver. There isn't anything fundamental about the never-heard-of protocol that operates the lights: in the best case it's similar to many other well-thought-out protocols, and in the worst case it's something slapped together.
There are certainly deeper, more fundamental concepts to learn, like the congestion-control algorithms in TCP; a sketch of what I mean follows below. Most things in software, though, are just another engineer's preferences for how they thought to build something.
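A toy sketch of AIMD (additive increase, multiplicative decrease), the idea at the heart of classic TCP congestion control; the parameter values are illustrative, not taken from any RFC:

    # Toy AIMD: grow the congestion window additively while things
    # succeed, shrink it multiplicatively on loss.
    def aimd_step(cwnd: float, loss: bool,
                  increase: float = 1.0, decrease: float = 0.5) -> float:
        """Return the next congestion window after one round trip."""
        if loss:
            return max(1.0, cwnd * decrease)  # back off hard
        return cwnd + increase                # probe gently

    # cwnd = 10.0
    # cwnd = aimd_step(cwnd, loss=False)  # -> 11.0
    # cwnd = aimd_step(cwnd, loss=True)   # -> 5.5

The concept survives any particular codebase, which is what makes it worth learning by hand.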
I poke at this because if an exercise only yields the benefit of one more rep of solving a problem, then it carries less weight for me. I personally don't think there will be fewer problems to solve with this technology, just a different sort at a different layer of the stack.
Interactive learning and thinking are underrated, in part, I think, because of the cynical (and likely accurate) assumption about what the laziest among us will do with the tools, an assumption projected onto everyone.
And that should be the core question: there is a new, emergent technology; should we throw everything away and embrace it, or are there structural reasons why it should come with big warning labels? Avoiding these tools because they do their work too well may make sense at the level of the whole system, but decision makers optimize locally, for their own budget, productivity, or profit. If there are perceived risks because the tools are not perfect, that is another matter.
The reason you can't beat index funds is the people who build the market built a system that benefits them and them alone; the index fund is the pitchfork dividend (what you pay to avoid getting pitchforked). The reason you can't get your congressperson on the line is (mostly) they built a system where the only way to influence them is to enrich them; voting is the pitchfork dividend.
The way to build a society that runs on reality is to build it by whatever means possible, then defend it by any means necessary. The only societies that matter are the ones that survive.
I want to build it. I don't wanna build a fuckin crypto app, a stupid ass agent harness, or yet another insipid analytics platform. I want to build a society that furthers the liberation of humankind from the vicissitudes of nature, the predation of tyranny and the corruption of greed. I believe it is possible, and I want to prove it out.
The only one who has written this up is Marx, but those ideas have not found broad societal support.
If we haven't already, we will soon lose the ability to tell whether AI is helping humans (an overwhelming majority of them, not a handful), considering how we are steaming ahead on this path!
So the solution to checking whether an article is reliable is to check whether its sources are reliable? How far back do you go? Or do you disregard immediately any article that does not cite only sources you already trust?
Unfortunately, the several million other people who live in the same voting unit as me didn't and ended up electing an asshat anyway.
No. I resisted for a bit but have started using it at work, mostly because I believe usage is now being monitored. I'm in a very high-scale engineering environment involving both greenfield and massive brownfield codebases, and the experience is largely a net loss in productivity. For me and some others I've spoken to in my org, opting in is theater that we're required to engage in to keep our jobs, not a genuine evolution of our craft.
These tools struggle with context once you get deep into a codebase with many, many millions of lines of code and sprawling dependencies. Even for isolated Python scripts or smaller supporting .NET apps, the time spent correcting subtle bugs or bullshit, or just verifying the bullshit, often exceeds the time it would have taken to write it from scratch.
Regardless, what I've observed is that these tools do nothing for the actual bottlenecks of software engineering: requirements gathering (am I writing the right thing?) and verification (does it work without side effects?). Because LLMs are great at generating text, they're actively exacerbating these issues by flooding our process with plausible looking noise.
Ideas are mediocre. Plans are arbitrary. Research is untrustworthy. But telling it "generate me 100 ideas for X" feels really productive.
I think a version of me with no access to AI will not just stay competitive, but even outcompete the version of me with unlimited access to AI.
Having the "call your representatives" link be to your website as well isn't particularly helpful... I already can't get to it
Security guards, locks, cameras, the mockery of the naive, bumbling fools who easily trust one another - as if being unable to form such members of society were something to be proud of. The endless self-upselling "protectors", the shards of glass on the wall, the scams, the con artists proud of ripping off the "naive and stupid", all these zero-sum game-players, producing nothing and furiously proud of their own incapacity. A whole industry propping up a mountain of disability. If the culture you grew up in is not capable of forming such a society, you are not part of the West. You cannot be and never will be, for all the shoring-up work named above, society-enforced norms be damned.
If your presence is a detriment, the answer is to build a society without you. Arcologies, corporate cities, Amish towns - call them what you want. Places where the "stupid" can be "easily gullible" and cooperate and work, and others where the "real ones" can roam around and rip each other off to their hearts' content. A harsh wall in the middle, razor wire on top - and that's the end of that illusion.
I know an RTSC (room-temperature superconductor) is the holy grail, but it really feels like AI is at the same stage computers were in the '80s. I used to be extremely bearish and think AI was useless, but I've done a total 180 in the last 6 months. If these things get better (they will), nobody's job will be safe.
Room temperature superconductors don’t enable THz chips with no heat output.
If superconductors broadly allowed this, we'd already have such chips available, because we could super-cool them and keep them at that temperature easily.
As far as I'm aware, 3D-stacking chips requires the inside to be cooled as well (not just the outside). I don't think that's been solved yet.
Must be nice to not have a paycheck tied to using this tech. For many people, myself included, it's either use it (adapt) or lose your job. Most of us rely on our jobs to pay bills and live in the modern world.
In all ten articles, I think this is really the only point that matters.
I think we have to learn how to overcome and thrive in the new world. The gravy train of CS careers is gone for everyone :(
Damaging machinery was made a capital offense; there were dozens of executions and hundreds of deportations.
At every stage, the steady progress of civilization is fragile and in danger of being suffocated. Its opponents cloak themselves in moral righteousness and call themselves Luddites, the green party, or AI safety rationalists. It's all the same corrosive thing underneath.
Source of this claim?
It is admittedly a specific cherry picked point in time at which this was true, but useful to illustrate the issue.
>“In the summer of 1812 there were no fewer than 12,000 troops in the disturbed counties, a greater force than Wellington had under his command in the Peninsula.”
But for that year Wikipedia has
>Wellington's 48,500-man army... https://en.wikipedia.org/wiki/Battle_of_Salamanca
That's the rub: if we build it later, our economy crashes in the meantime.
Just flow with it and all its bullshit. Yeah, life will be a little worse, but it will still be better than for those who chose to completely ignore it.
If the world is going mad, be the craziest of all these crazy motherfuckers. At least it's interesting; I'm very curious to know what the world will look like in 10 or 20 years.
Maybe, just maybe lol, we'll finally have this dreamed-of world where robots do all the work and we humans can just enjoy ourselves 24/7.
This tech has made it easier for second-handers to pass off inadequate work as the equal of your work. They're too lazy to exert the effort to read/think/write, and being second-handers, they're fine with the APPEARANCE of reading, thinking, and writing.
This has been going on for millennia, and the only fix I've seen is to call it out every time it rears its head.
Situations vary, obviously! I'm no stranger to rural life, I wound up in a car-dependent suburb with terrible bus service for a bit, and my partner is in the trades. Private vehicles are sensible and essential answers to lots of problems.
But as the Netherlands illustrates, it's not all-or-nothing: reductions in car utilization and car infrastructure have real benefits. Broadly speaking I think we can and should disincentivize private car use, increase public transit frequency, and build networks of protected infrastructure for pedestrians, cyclists, and other non-car means of getting around.
>> Now might be a good time to call your representatives.
Turns out you can bypass that sort of nonsense the same way you can bypass paywalls:
I think a lot of people are just getting their first taste of agent harnesses plus slightly better models right now, and yes, the first time you use them it seems scary and amazing. By the hundredth time, though, it's very apparent that there is still tremendous work to do before any kind of fully automated software pipeline (let alone any other domain) can be realized.
There is a class next door to my office. An old woman is teaching ~20 people how to be insurance agents with a slide show. It seems like a two-week course with a certificate at the end.
They don't seem worried that the slideshow could be pasted into an LLM's context window, and the result would outperform all of them on the test in five seconds; they are diligently taking notes.
But then I read this at the end:
> This piece, like most all my words and software, was written by hand—mainly in Vim. I composed a Markdown outline in a mix of headers, bullet points, and prose, then reorganized it in a few passes. With the structure laid out, I rewrote the outline as prose, typeset with Pandoc. I went back to make substantial edits as I wrote, then made two full edit passes on typeset PDFs. For the first I used an iPad and stylus, for the second, the traditional pen and paper, read aloud.
Then you realize the context of this article and who is writing it. No hate to my man here, but clearly this is someone who has the desire and time to make things difficult for themselves and takes pride in it. It's needless effort in this day and age. But hey, by his own analogy, plenty of gearheads love their old cars and making 'em work. Those guys are some of the most knowledgeable, and I respect that, but also... that same group is gonna hate on any new technology and complain it isn't the old way.
At least he realizes this technology is unlikely to slow down. With international relations as they are, it's MAD all over again, only the "D" is a fuzzy, hypothetical thing nobody can name, so even that bit of deterrence is lost. Yet finally he ends with the most uninspired advice of all: "we should try, unsuccessfully, to stop it."
Everyone must understand: for all of history, progress and productivity and value creation overall could only scale with people. Now it can scale with power and compute. This is a tremendous economic force, akin to a force of nature, that is nigh impossible to stop. (I always did think the Butlerian Jihad was the biggest plot hole in Dune.)
My advice is this: we have no choice but to adapt. We must realize that, by a stroke of luck, this is a power available to us more than the capital class. If they can scale without people, so can we. But because harnessing AI effectively requires hard skills -- at least for now -- that the capital class don't have and used to pay us for, we might even scale better than them!
Carpe diem.
There are multiple sections that talk directly about utility. Here's one of them: [0]
But, sure. I'll bite. Here's the third paragraph of the first part of the essay [1]:
This is *bullshit* about *bullshit machines*, and I mean it. It is neither balanced nor complete: others have covered ecological and intellectual property issues better than I could, and there is no shortage of boosterism online. Instead, I am trying to fill in the negative spaces in the discourse. “AI” is also a fractal territory; there are many places where I flatten complex stories in service of pithy polemic. I am not trying to make nuanced, accurate predictions, but to trace the potential risks and benefits at play.
I'd say that the specific sort of "utility" discussion that you're probably looking for would be classified as "boosterism". [2]

> Now it can scale with power and compute.
Eh. Carefully read through and consider [3].
[0] <https://aphyr.com/posts/411-the-future-of-everything-is-lies...>
[1] <https://aphyr.com/posts/411-the-future-of-everything-is-lies...>
[2] Due to their nearly universally breathless nature, I know that's how I classify the overwhelming majority of such discussions.
His lack of personal experience with LLMs was the most disappointing aspect, because he does not really know what we're dealing with. He's just going off what he's read / heard. So again, where's the incisive insight?
Now, here's a concrete example of what I mean by utility: a single person being able to rewrite an entire open source project from scratch in a few days just so it could be relicensed. Is that good or bad? I don't know! Is it a stupefying example of what's possible? Yes! Is that "breathless boosterism?" Only if you ignore the infinite nuances involved.
> Eh. Carefully read through and consider [3].
Hadn't come across this one before, but there's not much in there I hadn't seen and even discussed in past comments. As an example, it still mentions the METR study from 2025 without mentioning the very pertinent follow-up from just a couple of months back... which is not very surprising to me: https://news.ycombinator.com/item?id=47145601 ;-)
It does mention (and then gloss over) the real finding of the DORA and related reports, which is pertinent to my original point: LLMs are simply an amplifier of your existing software discipline. Teams with strong software discipline see amazing speedups; those with poor discipline see increased outages.
And, to my original point, who knows what good software discipline looks like? Hint: it's not the capital class.
AI will basically either enrich our lives the way the loom did, or it will outright kill the current economic system of the world, which might end poverty altogether, or it will start a big collapse in which people suffer at first but which still has a positive outcome in the end.
Humankind has always found a solution in the past, and it will do so again in the future.
- LLMs trained on OUR copyrighted works and OUR open source code, which was licensed for human use (the MIT license explicitly grants permission to any "person").
- A monetary system that has been centralizing opportunities and creating an asymmetric playing field due to the Cantillon Effect caused by government and institutional money creation.
Either of these points on its own entitles us to as much UBI money as we need.
I think even without AI or any technological progress, the monetary system is itself enough to create the kind of massive centralization that we've been seeing. People have been saying that for years before LLMs. People are now blaming AI for the fact that some people can't get jobs but it's not the root cause.
Software devs won't be able to get jobs as plumbers either because the plumbing sector in many countries has become insanely regulated... Society has been fundamentally corrupted.
I only see two ways forward:
- Communism with UBI (closer to what we have now)
- Abolish all regulations and have Capitalism again.
Well, yes, the entire world order is currently being upended. The USA is unwinding its place in the global order and becoming isolationist (and soon an authoritarian single-party state). The Petrodollar is either dying or being converted to a Northwestern-Hemisphere Petrodollar, with the Yuan in the ascendancy (so there goes the strong economy powering VC money). China, the EU, and Russia are the new global leaders. The Middle East and its oil are being taken over by Israel. Taiwan will fall to China, and thus the whole technological world follows. Countries that are friendly with China will have good renewable tech; countries that aren't will be doubling down on oil and coal. Fresh water will become as valuable as oil. A world war will decimate global productivity for decades. Most of the democracies in the world will be gone by the end of the century.
But none of that has to do with AI.
Bad things will always happen in the world. Good things will happen too. But you're only focusing on the bad. That's not good for your health, or others'.
> Refuse to insult your readers: think your own thoughts and write your own words. Call out people who send you slop. Flag ML hazards at work and with friends. Stop paying for ChatGPT at home, and convince your company not to sign a deal for Gemini. Form or join a labor union, and push back against management demands that you adopt Copilot [..] Call your members of Congress and demand aggressive regulation which holds ML companies responsible [..] Advocate against tax breaks for ML datacenters. If you work at Anthropic, xAI, etc., you should think seriously about your role in making the future. To be frank, I think you should quit your job.
He's freaking out, and rejecting AI completely, out of fear. And that's okay; we all get a little freaked out sometimes. But please try not to make other people freaked out as well? Just because you are scared of something doesn't mean the fear is justified or realistic.
What's going to happen now is the same thing that happened during the pandemic. A bunch of irrationally fearful people will decide that the only way they can cope with their fear, is to reject the basis of it. COVID deniers and anti-maskers/anti-vaxxers were essentially so terrified of the loss of control they had, that they refused to acknowledge it. They instead went full-bore in the opposite direction, defying government mandates and health warnings, in order to try to regain some semblance of control over their lives. And it did not go well.
That's what's now gonna happen with AI deniers. They're so freaked out about AI that they're going to reject it en masse, not because it is actually doing anything to them, but because they're afraid it might. And the end result is going to be similar: extreme people do extreme things, and the end result isn't good. So please try to rein in the doomerism a bit, for all our sakes.
so the boundary is blurry... I'm not sure what to do
I think we'll settle into a new norm over the next few years, but the role of the software engineer will change. Ultimately, always remember (and remind your boss) that you are in charge and, more importantly, responsible.
Do LLMs lie? Of course not; they are just programs. Do they make mistakes or get the facts wrong? Of course they do, though not more often than a human does. So what is the point of that article? Why is my future particularly bad now because of LLMs?
The only saving grace is that this is less cynical than the typical rage-review, considering they have something of a point, in that they are going to be negatively impacted by the same technology that was trained on their content without compensation.
The solution is obviously some form of socialism but a lot of tech people are blinkered libertarians who refuse to put two and two together.
To take the car analogy: it matters how we use the car.
The car in itself can be used to save time and energy that would otherwise be used to walk to places. That extra time and energy can be used well, or poorly.
- It can be squandered by having a longer commute that defeats the point
- Alternatively, it can be wasted by sitting on a couch consuming Netflix or TikTok
- Alternatively, it can be used productively, by playing team sports with friends, or chasing your kids through the park, or building a chicken coop in your back yard
It’s all about wise usage. Yes it can be used as a way to destroy your own body and waste your time and attention, but also it can be used as a tool to deploy your resources better, for example in physical activities that are fun and social rather than required drudgery.
I think it’s the same for LLMs. Managers and executives have always delegated the engineering work, and even researching and writing reports. It matters whether we find places to continue to challenge and deploy our cognition, or completely settle back, delegate everything to the LLM and scroll TikTok while it works.
Kyle's recommendation to stop or slow AI use is phrased as another individual choice, but given that lesson I think it's appropriate to interpret it as a collective one - collective through regulation, collective resistance, etc.
Yes, individuals have choices. But in a collective, dynamics occur and those dynamics can't usually be overcome by individuals.
Social media could be used differently, but the way it exists IRL is determined by the nature of the medium, the economic structure, and other things outside of individuals' control.
But the majority have always chosen the path of least resistance. This is not new! Socrates’ famous exhortation is “the unexamined life is not worth living”. People were living mindlessly on autopilot before TikTok.
I think if you want to give a call to action, as this piece does, the right call to action is “think carefully about how you can make a good use of your time and energy, now that the default path has changed.” I know it’s not as simple or emotionally powerful as “go down kicking and screaming, stick it to the man”, but as a rule of thumb, the less fiercely emotional path is usually the right one.
> I’ve thought about this a lot over the last few years, and I think the best response is to stop.
This is exactly where it shows.
LLMs, agents and whatever comes next are not only the future of tech, but they are going to be national resilience drivers for the countries that will be able to support them with power, water and science.
Who is supposed to stop? The US? China? Russia? Everyone? Of course this won't happen. This is an arms race.
But even if it weren't, stopping is the wrong answer. You don't have to outsource your thinking, writing or reading. How you use LLMs is entirely up to you.
There is a way to use LLMs that is beneficial: I treat them as a private tutor available to me for questions. This resolved a lot of the friction in my relationship with LLMs.
More telling is that the author mainly thinks about their relationship with LLMs while in reality the space has moved on to automation with agents. You don't interact with LLMs as much as before, and if you still do, then soon you won't.
Agents are not really ML. They're harnesses and parsing and memory and metrics. It's software. Should we stop that as well?
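To make that concrete, a harness is at heart a loop like the following minimal sketch. `call_model` and `run_tool` are placeholders for whatever model API and tool dispatcher you actually use; nothing vendor-specific is implied.

    # Minimal sketch of an agent harness: ordinary software wrapped
    # around a model. `call_model` and `run_tool` are hypothetical
    # stand-ins, not a real library's API. Needs Python 3.9+.
    def run_agent(task: str, call_model, run_tool, max_steps: int = 10):
        memory = [f"Task: {task}"]                  # crude transcript memory
        for _ in range(max_steps):
            action = call_model("\n".join(memory))  # ask for the next step
            if action.startswith("DONE:"):          # parse the stop signal
                return action.removeprefix("DONE:").strip()
            result = run_tool(action)               # execute and record
            memory.append(f"{action} -> {result}")
        return None                                 # step budget exhausted

Everything around the `call_model` line - the loop, the parsing, the memory, the metrics you'd hang off `run_tool` - is plain software engineering.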