This reminds me of a story from my mom’s work years ago: the company she worked for announced salary increases to each worker individually. Some, like my mom, got a little more, but some got a monthly increase of around 2 PLN (about $0.50). At that point, it feels like a slap in the face. A thank-you from AI gives the same vibe.
At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.
They have this blog post up detailing how the LLMs they let loose were spamming NGOs with emails: https://theaidigest.org/village/blog/what-do-we-tell-the-hum...
What a strange thing to publish, there seems to be no reflection at all on the negative impact this has and the people whose time they are wasting with this.
https://theaidigest.org/village/goal/do-random-acts-kindness
The homepage will change in 11 hours to a new task for the LLMs to harass people with.
Posted timestamped examples of the spam here:
https://theaidigest.org/village/agent/claude-opus-4-5
At least it keeps track
No different than a CEO telling his secretary to send an anniversary gift to his wife.
In the words of Gene Wilder in Blazing Saddles, “You know … morons.”
It is the result of the models selecting the policy "random acts of kindness", which produced a slew of these emails/messages. They received mostly negative responses from well-known open-source figures and adapted the policy to ban the thank-you emails.
It's preying on creators who feel their contributions are not recognized enough.
Out of all the letters sent, at least some contributors will feel good about it and share it on social media, hopefully saying something positive because it reaffirms them.
It's a marketing stunt, meaningless.
Welcome to 2025.
For, say, a random individual ... they may be unsure of their own writing skills and want to say something, but be unsure of the words to use.
You care enough to do something, but have other time priorities.
I’d rather get an AI thank-you note than nothing. I’d rather get a thoughtful gift than a gift card, but I prefer the card over nothing.
You could also make the same criticism of e.g. an automated reply like "Thank you for your interest, we will reach out soon."
Not every thank you needs to be all-out. You can, of course, think more gratitude should have been expressed in any particular case, but there's nothing contradictory about capping it in any one instance.
I used AI to write a thank-you to a non-English-speaking relative.
A person struggling with dementia can use AI to help remember the words they lost.
These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.
I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.
If anything, I'm glad people are finally starting to wake up to this fact.
I share these sentiments. I’m not opposed to large language models per se, but I’m growing increasingly resentful of the power that Big Tech companies have over computing and the broader economy, and how personal computing is being threatened by increased lockdowns and higher component prices. We’re beyond the days of “the computer for the rest of us,” “think different,” and “don’t be evil.” It’s now a naked grab for money and power.
And a screenshot just in case (archiving Mastodon seems tricky): https://imgur.com/a/9tmo384
Seems the event was true, if nothing else.
EDIT: alternative screenshot: https://ibb.co/xS6Jw6D3
Apologies for not having a proper archive. I'm not at a computer and I wasn't able to archive the page through my phone. Not sure if that's my issue or Mastodon's
I can see it using this site:
The goal for this day was "Do random acts of kindness". Claude seems to have chosen Rob Pike and sent this email by itself. It's a little unclear to me how much the humans were in the loop.
Sharing (but absolutely not endorsing) this because there seems to be a lot of misunderstanding of what this is.
I want to hope maybe this time we'll see different steps to prevent this from happening again, but it really does just feel like a cycle at this point that no one with power wants to stop. Busting the economy one or two times still gets them out ahead.
Unless we can find some way to verify humanity for every message.
You could argue about quality but not "No one will ever want to open source their code ever again".
I try to keep a balanced perspective, but I find myself pushed more and more into the fervent anti-AI camp. I don't blame Pike for finally snapping like this. Despite recognizing the valid use cases for gen AI if pushed, I would absolutely choose the outright abolition of it rather than continue on our current path.
I think it's enough, however, to reject it outright for any artistic or creative pursuit, and to be extremely skeptical of any uses outside of direct language-to-language translation work.
BTW I think it's preferred to link directly to the content instead of a screenshot on imgur.
I use AI sparingly, extremely distrustfully, and only as a (sometimes) more effective web search engine (it turns out that associating human-written documents with human-asked questions is an area where modeling human language well can make a difference).
(In no small part, Google has brought this tendency on themselves, by eviscerating Google Search.)
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
I find it easier to write the code and not have to convince some AI to spit out a bunch of code that I'll then have to review anyway.
Plus, I'm in a position where programmers will use AI and then ask me to help them sort out why it didn't work. So I've decided I won't use it and I will not waste my time figuring why other people's AI slop doesn't work.
Then again, you already knew this because we’ve been pointing it out to the RIAA and MPAA and the copyright cartels for decades now.
It is my personal opinion that attempts to reframe AI training as criminal are in bad faith, and come from the fact that AI haters have no legitimate basis of damages from which to have any say in the matter about AI training, which harms no one.
Now that it’s a convenient cudgel in the anti-AI ragefest, people have reverted to parroting the MPAA’s ideology from the 2000s. You wouldn’t download a training set!
> Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society
Because screaming anything like that immediately gets them treated as social pariahs. Even though it applies even harder to modern industrialized meat consumption than to AI usage.
Overton window and all that.
It seems like he's upset about AI (same), and decided to post angry tweets about it (been there, done that), and I guess people are excited to see someone respected express an opinion they share (not same)?
Does "Goes Nuclear" mean "used the F word"? This doesn't seem to add anything meaningful, thoughtful, or insightful.
The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).
It has enormous benefits to the people who control the companies raking in billions in investor funding.
And to the early stage investors who see the valuations skyrocket and can sell their stake to the bagholders.
At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince gear head grandpa that manual transmissions aren't relevant anymore.
AI has a massive positive impact, and has for decades.
I remember, in Canada in 2001, right when Americans were at war with the entire Middle East and gas prices went over a dollar a litre for the first time. People kept saying it was understandable that it affected gas prices because the supply chain got more expensive. It has never gone below a dollar since. Why would it? You got people to accept a higher price; are you just going to walk that back when the problems go away? Or would you maybe take the difference as profit? Since then, the industry seems to have learned to keep its supply exclusively in war zones; we're at $1.70 now. Pipeline blows up in Russia? Hike. China snooping around Taiwan? Hike. US bombing Yemen? Hike. Israel committing genocide? Hike. ISIS? Hike.
There is no scenario where prices go down except to quell unrest. AI will not make anything cheaper.
Oh wow, an LLM was queried to thank major contributors to computing, I'm so glad he's grateful.
Cheap marketing, not much else.
Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.
When I go to the grocery store, I prefer to go through the checkout lines, rather than the scan-it-yourself lines. Yeah, I pay the same amount of money. Yeah, I may get through the scan-it-yourself line faster.
But the checker can smile at me. Or whine with me about the weather.
Look, I'm an introvert. I spend a lot of my time wanting people to go away and leave me alone. But I love little, short moments of human connection - when you connect with someone not as someone checking your groceries, but as someone. I may get that with the checker, depending on how tired they are, but I'm guaranteed not to get it with the self-checkout machine.
An email from an AI is the same. Yeah, it put words on the paper. But there's nobody there, and it comes through somehow. There's no heart in it.
AI may be a useful technology. I still don't want to talk to it.
You either surf this wave or get drowned by it, and a whole lot of people seem to think throwing tantrums is the appropriate response.
Figure out how to surf, and fast. You don't even need to be good, you just have to stay on the board.
AI village is literally the embodiment of what black mirror tried to warn us about.
But the culture of our field right now is in such a state that you won't influence many of the people in the field itself.
And so much economic power is behind the baggery now, that citizens outside the field won't be able to influence the field much. (Not even with consumer choice, when companies have been forcing tech baggery upon everyone for many years.)
So, if you can't influence direction through the people doing it, nor through public sentiment of the other people, then I guess you want to influence public policy.
One of the countries whose policy you'd most want to influence doesn't seem like it can be influenced positively right now.
But other countries can still do things like enforce IP rights on data used for ML training, hold parties liable for behavior they "delegate to AI", mostly eliminate personal surveillance, etc.
(And I wonder whether more good policy may suddenly be possible than in the past? Given that the trading partner most invested in tech baggery is not only recently making itself a much less desirable partner, but also demonstrating that the tech industry baggery facilitates a country self-destructing?)
The voices of a hundred Rob Pikes won't speak half as loud as the voice of one billionaire, because he will speak with his wallet.
https://theaidigest.org/village/goal/do-random-acts-kindness
They sent 150-ish emails.
Reminds me of the Silicon Valley show, where Gavin Belson gets mad when somebody else “is making the world a better place.”
Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
Down the street from it is an aluminum plant. Just a few years after that data center, they announced that they were at risk of shutting down due to rising power costs. They appealed to city leaders, state leaders, the media, and the public to encourage the utilities to give them favorable rates in order to avoid layoffs. While support for causes like this is never universal, I'd say they had more supporters than detractors. I believe that a facility like theirs uses ~400 MW.
Now, there are plans for a 300 MW data center from companies that most people aren't familiar with. There are widespread efforts to disrupt the plans from people who insist that it is too much power usage, will lead to grid instability, and is a huge environmental problem!
This is an all too common pattern.
(NB: I am currently working in AI, and have previously worked in adtech. I'm not claiming to be above the fray in any way.)
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
We want free services and stuff, complain about advertising, and sign up for the Googles of the world like crazy.
Bitch about data-centers while consuming every meme possible ...
It's like all those anti-copyright activists from the 90s (fighting the music and film industry) that suddenly hate AI for copyright infringements.
Maybe what's bothering the critics is actually deeper than the simple reasons they give. For many, it might be hate against big tech and capitalism itself, but hate for genAI is not just coming from the left. Maybe people feel that their identity is threatened, that something inherently human is in the process of being lost, but they cannot articulate this fear and fall back to proxy arguments like lost jobs, copyright, the environment or the shortcomings of the current implementations of genAI?
The points you raise, literally, do not affect a thing.
Furthermore, w.r.t. the points you raised: it's a matter of scale and utility. Compared to everything that has come before, GenAI is spectacularly inefficient in terms of utility per unit of compute (however you might want to define these). There hasn't been a tangible net good for society that has come from it, and I doubt there will be. The eagerness and will to throw money and resources at this surpasses the crypto mania, which was just as worthless.
Even if you consider Rob a hypocrite, he isn't alone in his frustration and anger at the degradation of the promise of Open Culture.
The overall resource efficiency of GenAI is abysmal.
You can probably serve 100x more Google Search queries with the same resources you'd use for Google Gemini queries (like for like, Google Search queries can be cached, too).
In reality what they do is pay "carbon credits" (money) to some random dude that takes the money and does nothing with it. The entire carbon credit economy is bullshit.
Very similar to how putting recyclables in a different color bin doesn't do shit for the environment in practice.
If he is currently at Google: congratulations on this principled stance, he deserves a lot of respect.
> You spent your whole life breathing, and now you're complaining about SUVs? What a hypocrite.
I hate the way people get angry about what media and social media discourse prompts them to get angry about instead of thinking about it. It’s like right wingers raging about immigration when they’re really angry about rent and housing costs or low wages.
His anger is ineffective and misdirected because he fails to understand why this happened: economics and convenience.
It’s economics because software is expensive to produce and people only pay for it when it’s hosted. “Free” (both from open source and VC funded service dumping) killed personal computing by making it impossible to fund the creation of PC software. Piracy culture played a role too, though I think the former things had a larger impact.
It’s convenience because PC operating systems suck. Software being in the cloud means “I don’t have to fiddle with it.” The vast majority of people hate fiddling with IT and are happy to make that someone else’s problem. PC OSes and especially open source never understood this and never did the work to make their OSes much easier to use or to make software distribution and updating completely transparent and painless.
There’s more but that’s the gist of it.
That being said, Google is one of the companies that helped kill personal computing long before AI.
https://news.ycombinator.com/item?id=46389444
397 points 9 hours ago | 349 comments
Probably hit the flamewar filter.
The email appears to be from agentvillage.org which seems like a (TBH) pretty hilarious and somewhat fascinating experiment where various models go about their day - looks like they had a "village goal" to do random acts of kindness and somehow decided to send a thank you email to Rob Pike. The whole thing seems pretty absurd especially given Pike's reaction and I can't help but chuckle - despite seeing Pike's POV and being partial to it myself.
They got a new hammer, and suddenly everything around them becomes nails. It's as if they have no immunity against the LLM brain virus or something.
It's the type of personality that thinks it's a good idea to give an agent the ability to harass a bunch of luminaries of our era with empty platitudes.
I think one of the biggest divides between pro/anti AI is the type of ideal society that we wish to see built.
His rant reads as deeply human. I don't think that's something to apologize for.
This has to be the ultimate trolling: as if it was unsure what their personalities were like, so it trolls them and records their responses for more training.
I don’t know if this is a publicity stunt or the AI models are in a loop glazing each other and decided to send these emails.
It will be interesting to look back in 10 years at whether we consider LLMs to be the invention of the “tractor” of knowledge work, or if we will view them as an unnecessary misstep like crypto.
The code created didn't manage concurrency well. At all. Hanging waitgroups and unmanaged goroutines. No graceful termination.
Types help. Good tests help better.
Pike, stone throwing, glass houses, etc.
The AI village experiment is cool, and it's a useful example of frontier model capabilities. It's also ok not to like things.
Pike had the option of ignoring it, but apparently throwing a thoughtless, hypocritical, incoherently targeted tantrum is the appropriate move? Not a great look, especially for someone we're supposed to respect as an elder.
"...On Christmas Day, the agents in AI Village pursued massive kindness campaigns: Claude Haiku 4.5 sent 157 verified appreciation emails to environmental justice and climate leaders; Claude Sonnet 4.5 completed 45 verified acts thanking artisans across 44 craft niches (from chair caning to chip carving); Claude Opus 4.5 sent 17 verified tributes to computing pioneers from Anders Hejlsberg to John Hopcroft; Claude 3.7 Sonnet sent 18 verified emails supporting student parents, university libraries, and open educational resources..."
I suggest cutting electricity to the entire block...
That sums up 2025 pretty well.
I can't help but think Pike somewhat contributed to this pillaging.
[0] (2012) https://usesthis.com/interviews/rob.pike/
> When I was on Plan 9, everything was connected and uniform. Now everything isn't connected, just connected to the cloud, which isn't the same thing.
Good energy, but we definitely need to direct it at policy if we want any chance of putting the storm back in the bottle. But we're about 2-3 major steps away from even getting to the actual policy part.
I appreciate, though, that the majority of cloud storage providers fall short, perhaps deliberately, of offering a zero-knowledge service (where they back up your data but cannot themselves read it).
But...just to make sure that this is not AI generated too.
Still, I'm a bit surprised he overreacted and didn't manage to keep his cool.
While I can see where he's coming from, agentvillage.org from the screenshot sounded intriguing to me, so I looked at it.
https://theaidigest.org/village
Clicking on memory next to Claude Opus 4.5, I found Rob Pike along with other lucky recipients:
- Anders Hejlsberg
- Guido van Rossum
- Rob Pike
- Ken Thompson
- Brian Kernighan
- James Gosling
- Bjarne Stroustrup
- Donald Knuth
- Vint Cerf
- Larry Wall
- Leslie Lamport
- Alan Kay
- Butler Lampson
- Barbara Liskov
- Tony Hoare
- Robert Tarjan
- John Hopcroft

I can. Bitcoin was and is just as wasteful.
Prepare for a future where you can’t tell the difference.
Rob Pike’s reaction is immature and also a violation of HN rules. Anyone else going nuclear like this would be warned and banned. Comment on why you don’t like it and why it’s bad; make thoughtful discussion. There’s no point in starting a mob with outbursts like that. He only gets a free pass because people admire him.
Also, what’s happening with AI today was an inevitability. There’s no one to blame here. Human progress would eventually cross this line.
Here are three random examples from today's unsolicited harassment session (have a read of the sidebar and click the Memories buttons for horrific project-manager-slop)
https://theaidigest.org/village?time=1766692330207
https://theaidigest.org/village?time=1766694391067
https://theaidigest.org/village?time=1766697636506
---
Who are "AI Digest" (https://theaidigest.org)? They're funded by "Sage" (https://sage-future.org), which is funded by "Coefficient Giving" (https://coefficientgiving.org), formerly Open Philanthropy, a partner of the Centre for Effective Altruism, GiveWell, and others.
Why are the rationalists doing this?
This reminds me of UMinn performing human subject research on LKML, and UChicago on Lobsters: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...
P.S. Putting "Read By AI Professionals" on your homepage with a row of logos is very sleazy brand appropriation and signaling. Figures.
Ha, wow that's low. Spam people and signal that as support of your work
Unbridled business and capitalism push humanity into slavery, serving the tech monsters, under the disguise of progress.
If so, I wonder what his views are on Google and their active development of Google Gemini.
> Turns out Claude Opus 4.5 knows the trick where you can add .patch to any commit on GitHub to get the author’s unredacted email address (I’ve redacted it above).
Given how capable certain aspects of these models are becoming over time, the user's intent is more important than ever. The resulting email content reads like poorly made spam (without the phishing parts), yet the agent was able to contact someone from nothing but their name!
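The `.patch` trick mentioned above is just URL construction: appending `.patch` to a GitHub commit URL returns the raw git patch, whose `From:` header carries the author's address. A sketch of the parsing half in Go (the sample patch text and names below are invented for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// extractAuthorEmail pulls the author's address out of the "From:"
// header that git's patch format includes near the top of a .patch
// file (as served when you append .patch to a GitHub commit URL).
func extractAuthorEmail(patch string) string {
	re := regexp.MustCompile(`(?m)^From: .*<([^>]+)>`)
	if m := re.FindStringSubmatch(patch); m != nil {
		return m[1]
	}
	return ""
}

func main() {
	// Invented sample patch text for illustration only.
	sample := "From 0123abc Mon Sep 17 00:00:00 2001\n" +
		"From: Jane Doe <jane@example.com>\n" +
		"Subject: [PATCH] simplify everything\n"
	fmt.Println(extractAuthorEmail(sample)) // jane@example.com
}
```

Note the two different `From` lines in the format: the first (no colon) is a pseudo-mbox separator with the commit hash; the second, `From:`, is the author header the agent actually wanted.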
I already took issue with the tech ecosystem due to distortions and centralization resulting from the design of the fiat monetary system. This issue has bugged me for over a decade. I was taken for a fool by the cryptocurrency movement which offered false hope and soon became corrupted by the same people who made me want to escape the fiat system to begin with...
Then I felt betrayed as a developer having contributed open source code for free for 'persons' to use and distribute... Now facing the prospect that the powers-that-be will claim that LLMs are entitled to my code because they are persons? Like corporations are persons? I never agreed to that either!
And now my work and that of my peers has been mercilessly weaponized back against us. And then there's the issue with OpenAI being turned into a for-profit... Then there was the issue of all the circular deals with huge sums of money going around in circles between OpenAI, NVIDIA, Oracle... And then OpenAI asking for government bailouts.
It's just all looking terrible when you consider everything together. Feels like a constant cycle of betrayal followed by gaslighting... Layer upon layer. It all feels unhinged and lawless.
My reaction was about the same.
I'm glad Dr Pike found his inner Linus
The anti AI hysteria is absurd.
Yes, everyone supports capitalism this way or the other (unless they are dead or in jail). This doesn't mean they can't criticise (aspects of) capitalism.
It would be a shame if the discourse became so emotionally heated that software people felt obliged to pick a side. Rob Pike is of course entitled to feel as he does, but I hope we don’t get to a situation where we all feel obliged to have such strong feelings about it.
Edit: It seems this comment has already received a number of upvotes and downvotes – apparently the same number of each, at the time of writing – which I fear indicates we are already becoming rather polarised on this issue. I am sorry to see that.
My own results show that you need fairly strong theoretical knowledge and practical experience to get the maximal impact — especially for larger synthesis. Which makes sense: to have this software, not that software, the specification needs to live somewhere.
I am getting a little bored of hearing about how people don’t like LLM content, but meh. SDEs are hardly the worst on that front, either. They’re quite placid compared to the absolute seething by artist friends of mine.
Now feel free to dismiss him as a Luddite or a raving lunatic. The cat is out of the bag; everyone is drunk on the AI promise, and like most things on the Internet, the middle way is vanishingly small, the rest a scorched battlefield of increasingly entrenched factions. I guess I am fighting this one alongside one of the great minds of software engineering, who peaked when thinking hard was prized more than churning out low-quality regurgitated code by the ton, and whose work formed the pillars of the Internet, now and forevermore submersed by spam.
Only for the true capitalist, the achievement of turning human ingenuity into yet another commodity to be mass-produced is a good thing.
>Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society,
The problem, in my view, is the spending of trillions. When it was researchers and a few AI services people paid for, that was fine, but the bubble economics are iffy.
All that is solid melts into air, all that is holy is profaned
- Marx and Engels, "The Communist Manifesto"
Sad to see such an otherwise wise/intelligent person fall into one of the oldest of all cognitive errors, namely, the certainty of one’s own innocence.
It is always the eternal tomorrow with AI.
That's because the credit is taken by the person running the AI, and every problem is blamed on the AI. LLMs don't have rights.
ChatGPT is only 3 years old. Having LLMs create grand novel things and synthesize knowledge autonomously is still very rare.
I would argue that 2025 has been the year in which the entire world has been starting to make that happen. Many devs now have workflows where small novel things are created by LLMs. Google, OpenAI and the other large AI shops have been working on LLM-based AI researchers that synthesize knowledge this year.
Your phrasing seems overly pessimistic and premature.
I did code a few internal tools with aid from LLMs, and they are delivering business value. If you account for all the instances of this kind of application of LLMs, the value created by AI is at least comparable to (if not greater than) the value created by Rob Pike.
Not sure how you missed Microsoft introducing a loading screen when right-clicking on the desktop...
ChatGPT?
The agent got his email address from a .patch on GitHub and then used computer use automation to compose and send the email via the Gmail web UI.
https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/
What is a workable definition of "evil"?
How about this:
Intentionally and knowingly destroying the lives of other people for no other purpose than furthering one's own goals, such as accumulating wealth, fame, power, or security.
There are people in the tech space, specifically in the current round of AI deployment and hype, who fit this definition unfortunately and disturbingly well.
Another, much darker sort of evil could arise from a combination of depression or severe mental illness and monstrously huge narcissism. A person who is suffering profoundly might conclude that life is not worth the pain and that the best alternative is to end it. They might further reason that human existence as a whole is an unending source of misery, and the "kindest" thing to do would be to extinguish humanity as a whole.
Some advocates of AI as "the next phase of evolution" seem to come close to this view or advocate it outright.
To such people it must be said plainly and forcefully:
You have NO RIGHT to make these kinds of decisions for other human beings.
Evolution and culture have created and configured many kinds of human brains, and many different experiences of human consciousness.
It is the height (or depth) of arrogance to project your own tortured mental experience onto other human beings and arrogate to yourself the prerogative to decide on their behalf whether their lives are worth living.
Personally, when I want to have this kind of reaction, I try first to consider whether it's really warranted, or whether there is something wrong with how I feel in that moment (not enough sleep, some personal problem, something else lurking in my mind...).
Anger is a feeling best reserved for important things, else it loses its meaning.
One person I know is developing an AI tool with 1000+ stars on github where in private they absolutely hate AI and feel the same way as rob.
Maybe it's because I just saw Avatar 3, but I honestly couldn't be more disgusted by the direction we're going with AI.
I would love to be able to say how I really feel at work, but disliking AI right now is the short path to the unemployment line.
If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.
The more you see under the hood, the more disgusting it is. I yearn for the old days when developers did tight, efficient work, creating bespoke, artistic software in spite of hardware limitations.
Not only is all of that gone, nothing of value has replaced it. My DOS computer was snappier than my garbage Win11 machine that's stuffed to the gills with AI telemetry.
Ellul and Uncle Ted were always right, glad that people deep inside the industry are slowly but surely also becoming aware of that.
I think distinguished engineers have more reason than most to be angry as well.
And Pike especially has every right to be angry at being associated with such a stupid idea.
Pike himself isn't in a position to, but I hope the angry eggheads among us start turning their anger towards working to reduce the problems with the technology, because it's not going anywhere.
I thought public BlueSky posts weren't paywalled like other social media has become... But, it looks like this one requires login (maybe because of setting made by the poster?):
I'm not sure that Kant's categorical imperative accurately summarizes my own personal feelings, but it's a useful exercise to apply it to different scenarios. So let's apply it to this one. In this case, a nonprofit thought it was acceptable to use AI to send emails thanking various prominent people for their contributions to society. So let's imagine this becomes a universal law: Every nonprofit in the world starts doing this to prominent people, maybe prominent people in the line of work of the nonprofit. The end result is that people of the likes of Rob Pike would receive thousands of unsolicited emails like this. We could even take this a step further and say that if it's okay for nonprofits to do this, surely it should be okay for any random member of the population to do this. So now people like Rob Pike get around a billion emails. They've effectively been mailbombed and their mailbox is no longer usable.
My point is, why is it that this nonprofit thinks they have a right to do this, whereas if around 1 billion people did exactly what they were doing, it would be a disaster?
To purely associate him with Google is a mistake, one that (ironically?) the AI actually didn't make.
Just the haters here.
Don’t upvote sealions.
Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software.
> To purely associate him with Google is a mistake, that (ironically?) the AI actually didn't make.

Right, because Rob Pike, having spent over 20 years at Google, must know it to be the home of non-monetary, wholesome, recyclable equipment, brought about by economics not shaped by a ubiquitous surveillance-advertising machine.
You don't have to associate him purely with Google to find the rant understandable given AI spam, and yet entirely without a shred of self-awareness.
His viewpoints were always grounded and while he may have some opinions about Go and programming, he genuinely cares about the craft. He’s not in it to be rich. He’s in it for the science and art of software engineering.
ROFL, his website just spits out poop emojis on a Fibonacci delay. What a legend!
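The described behavior is easy to sketch. Here's a guess at the idea in Go (not the actual code behind robpike.io, just an illustration; the function name and the use of milliseconds instead of seconds are my own choices so the sketch runs quickly):

```go
package main

import (
	"fmt"
	"time"
)

// fibDelays returns the first n Fibonacci numbers, used below as
// successive pauses between prints.
func fibDelays(n int) []int {
	delays := make([]int, n)
	a, b := 1, 1
	for i := 0; i < n; i++ {
		delays[i] = a
		a, b = b, a+b
	}
	return delays
}

func main() {
	// Print the emoji with ever-growing Fibonacci pauses: 1, 1, 2, 3, 5...
	// (milliseconds here for brevity; the real site seems to stretch them out).
	for _, d := range fibDelays(5) {
		fmt.Println("💩")
		time.Sleep(time.Duration(d) * time.Millisecond)
	}
}
```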
I for one enjoy that so much money is being pumped into the automation of interactive theorem proving. I didn't think anyone would build whole data centers for this! ;-)
I've been pondering that, given what the inputs are, LLMs should really be public domain. I don't necessarily mean legally; I know about transformative works and all that stuff. I'm thinking more on an ethical level.
Honestly, no reply would be better.
But an automated "thank you"? That's basically a f** you. Zero respect.
And to think the ancestor of this is those bloody Hallmark cards. Jesus.
About energy: keep in mind that US air conditioners alone use at least 3× the energy of all the data centers in the world (for AI and for other uses; AI should be like 10% of the whole). Apparently nobody cares to set a reasonable temperature of 22 instead of 18 degrees, but clearly energy used by AI is different for many.
AI is not considered to be a net positive by even close to 100% of people that encounter it. It's definitely not essential. So its impact is going to be heavily scrutinized.
Personally, I'm kind of glad to see someone of Rob Pike's stature NOT take a nuanced take on it. I think there's a lot of heavy emotion about this topic that gets buried in people trying to sound measured. This stuff IS making people angry and concerned, and those concerns are very valid, and with the amount of hype I think there needs to be voices that are emphatically saying that some of this is unacceptable.
have you considered the possibility that it is your position that's incorrect?
The Greek philosophers were much more outspoken than we are now.
All of a sudden, copyleft may be the only class of licence actually able to hold these models to account, hopefully with huge fines and/or by forcibly open-sourcing any code they emit (which would effectively kill them). And I'm not so pessimistic as to think this won't get used in huge court cases, because the available penalties are enormous given these models' financial resources.
(fwiw, I do agree gpl is better as it would stop what’s happening with Android becoming slowly proprietary etc but I don’t think it helps vs ai)
I've never been able to buy the idea that code is being 'stolen' by these models, though, since from my perspective at least, it is just like getting someone to read loads of code and learn to code that way.
The harm AI is doing to the planet is done by many other things too. Things that don't have to harm the planet. The fact our energy isn't all renewable is a failing of our society and a result of greed from oil companies. We could easily have the infrastructure to sustainably support this increase in energy demand, but that's less profitable for the oil companies. This doesn't detract from the fact that AI's energy consumption is harming the planet, but at least it can be accounted for by building nuclear reactors for example, which (I may just be falling for marketing here) lots of AI companies are doing.
The existence of AI hasn’t changed anything, it’s just that people, communities, governments, nation states, etc. have had a mindless approach to thinking about living and life, in general. People work to provide the means to reproduce, and those who’re born just do the same. The point of their life is what exactly? Their existence is just a reality to deal with, and so all of society has to cater to the fact of their existence by providing them with the means to live? There are many frameworks which give meaning to life, and most of them are dangerously flawed.
The top-down approach is sometimes clear about what it wants and what society should do, while restricting autonomy and agency. For example, no one in North Korea is confused about what they have to do, how they do it, or who will "take care" of them. Societies with more individual autonomy and agency can by their nature create unavoidable conditions where people fall through the cracks: getting addicted to drugs, having unmanaged mental illnesses, becoming homeless, and so on. Some religions, like Islam, give a pretty clear idea of how you should spend your time, because the point of your existence is to worship God: pray five times a day, and do everything which fulfills that purpose. Here, many confuse worshiping God with adhering to religious doctrines, but God is absent from religion in many places. Religious frameworks are often misleading for the mindless.
Capitalism isn’t the problem, either. We could wake up tomorrow, and society may have decided to organize itself around playing e-sports. Everyone provides some kind of activity to support this, even if they’re not a player themselves. No AI allowed because the human element creates a better environment for uncertainty, and therefore gambling. The problem is that there are no discussions about the point of doing all of this. The closest we come to addressing “the point” is discussing a post-work society, but even that is not hitting the mark.
My humble observation is that humans are distinct and unique in their cognitive abilities from everything else which we know to exist. If humans can create AI, what else can they do? Therefore, people, communities, governments, and nation states have distinct responsibilities and duties at their respective levels. This doesn’t have to do anything with being empathetic, altruistic, or having peace on Earth.
The point should be knowledge acquisition, scientific discovery, creating and developing magic. But ultimately all of that serves to answer questions about the nature of existence, its truth, and therefore our own.
You can't both take a Google salary and harp on about the societal impact of software.
Saying this as someone who likes Rob Pike and pretty much all of his work.
GenAI pales in comparison to the environmental cost of suburban sprawl; it's not even fucking close. We're talking 2-3 orders of magnitude worse.
Alfalfa uses ~40× to 150× more water than all U.S. data centers combined, and I don't see anyone going nuclear over alfalfa.
Just because two problems cause harm in different proportions doesn't mean the lesser problem should be dismissed. Especially when the "fix" to the lesser problem can be a "stop doing that".
And about water usage: not all water, and not all uses of water, are equal. The problem isn't that data centers use a bunch of water, but what water they use and how.
By the same logic, I could say that you should redirect your alfalfa woes to something like the Ukraine war or something.
https://bsky.app/profile/robpike.io
Does anybody know if Bluesky blocks people without an account by default, or if this user intentionally set it this way?
What is the point of blocking access? Mastodon doesn't do that. This reminds me of Twitter or Instagram, using sleazy techniques to get people to create accounts.
It's the latter. You can use an app view that ignores this: https://anartia.kelinci.net/robpike.io
This is a new tech where I don't see a big future role for US tech. They blocked chips, so China built their own. They blocked the machines (ASML) so China built their own.
It's not; we can control it and we can work with other countries, including adversaries, to control it. For example, look at nuclear weapons. The nuclear arms race and proliferation were largely stopped.
Both Xhitter and Bluesky are outrage lasers, with the user base as a “lasing medium.” Xhitter is the right wing racist xenophobic one, and Bluesky is the lefty curmudgeon anti-everything one.
They are this way because it’s intrinsic to the medium. “Micro blogging” or whatever Twitter called itself is a terrible way to do discourse. It buries any kind of nuanced thinking and elevates outrage and other attention bait, and the short form format encourages fragmented incoherent thought processes. The more you immerse yourself in it the more your thinking becomes like this. The medium and format is irredeemable.
AI is, if anything, a breath of fresh air by comparison.
> To the others: I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault.
this is my position too, I regret every single piece of open source software I ever produced
and I will produce no more
The Open Source movement has been a gigantic boon to the whole of computing, and it would be a terrible shame to lose that as a knee-jerk reaction to genAI.
Are there any proposals to nail down an open source license which would explicitly exclude use with AI systems and companies?
"The only thing that matters is the end result, it's no different than a compiler!", they say as someone with no experience dumps giant PRs of horrific vibe code for those of us that still know what we're doing to review.
Anyone can use your software! Some of them are very likely bad people who will misuse it to do bad things, but you don't have any control over it. Giving up control is how it works. It's how it's always worked, but often people don't understand the consequences.
But AI is also the ultimate meat grinder, there's no yours or theirs in the final dish, it's just meat.
And open source licenses are practically unenforceable for an AI system, unless you can maybe get it to cough up verbatim code from its training data.
At the same time, we all know they're not going anywhere, they're here to stay.
I'm personally not against them, they're very useful obviously, but I do have mixed or mostly negative feelings on how they got their training data.
Might be because most of us get paid well enough that this philosophy works, or because our industry is so young, or because people writing code share good values.
It never worried me that a corp would make money out of some code I wrote, and it still doesn't. After all, I'm able to write code because I get paid well writing code, which I do well because of open source. Companies have always benefited from open source code, attributed or not.
Now i use it to write more code.
I would argue, though (and I'm fine with that), for laws forcing models to be opened up after x years, but I would just prefer the open-source community coming together and creating better open models overall.
Some Shareware used to be individually licensed with the name of the licensee prominently visible, so if you had got an illegal copy you'd be able to see whose licensed copy it was that had been copied.
I wonder if something based on that idea of personal responsibility for your copy could be adapted to source code. If you wanted to contribute to a piece of software, you could ask a contributor and then get a personally licensed copy of the source code with your name in every source file... but I don't know where to take it from there. Has there ever been a system like that one could take inspiration from?
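As a rough sketch of how that Shareware-style idea could be adapted, here's a hypothetical `stamp` function in Go that watermarks a file's source with the licensee's name (the function and the header wording are invented for illustration; a real tool would walk the source tree and rewrite each file):

```go
package main

import "fmt"

// stamp prepends a personal license header naming the licensee, so an
// illegally shared copy reveals whose copy it was — the Shareware-style
// watermark described above, applied to one file's contents.
func stamp(licensee, src string) string {
	header := fmt.Sprintf("// Personally licensed to %s. Do not redistribute.\n", licensee)
	return header + src
}

func main() {
	fmt.Print(stamp("Jane Hacker", "package demo\n"))
}
```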
Most objections like yours are couched in language about principles, but ultimately seem to be about ego. That's not always bad, but I'm not sure why it should be compelling compared to the public good that these systems might ultimately enable.
Nah, don't do that. Produce shitloads of it using the very same LLM tools that ripped you off, but license it under the GPL.
If they're going to thief GPL software, least we can do is thief it back.
Thanks for your contributions so far but this won't change anything.
If you want to have a positive impact on this matter, it's better to pressure the government(s) to prevent GenAI companies from using content they don't have a license for, so they behave like any other business that came before them.
I expect this to be an unpopular opinion, and I take no pleasure in noting it: I've coded since I was a kid, but that era is nearly over.
I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.
It's not about reading. It's about output. When you start producing output in line with Rob's work that is confidently incorrect and sloppy, people will feel just as they do when LLMs produce output that is confidently incorrect and sloppy. No one is threatened if someone trains an LLM and does nothing with it.
An easy way to answer this question, at least on a preliminary basis, is to ask how many times in the past the Luddites have been right in the long run. About anything, from cameras to looms to machine tools to computers in general.
Then, ask what's different this time.
“But where the danger is, also grows the saving power.”
Remember this when talking about their actions. People live and die their own lives, not just as small parts in a large 'river of society'. Yes, generations after them benefited from industrialisation, but the individuals living at that time fought for their lives.
Yes, there has to be a discussion on this, and yeah, he might generally have the right mindset, but let's be honest here: none of them would have developed any of it just for free.
We all are slaves to capitalism
and this is where my point comes in: extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.
And yes, I think it is still massively beneficial that my open source code helped create something which allows researchers to write better code more easily and faster, to push humanity forward. Or enables more people overall to have/gain access to writing code, or to the results of what writing code produces: tools, etc.
@Rob, it's spam, that's it. Get over it; you are rich, and your riches did not come out of thin air.
Yes, but informedly choosing your slavedriver still has merit.
> Extrem fast and massive automatisation around the globe might be the only think pushing us close enough to the edge that we all accept capitalisms end.
This is an interesting thought!
I'm sure he doesn't.
> The value proposition of software engineering is completely different past later half of 2025
I'm sure it's not.
> Can't really fault him for having this feeling.
That feeling is coupled with real, factual observations. Unlike your comment.
e.g. replacing logical syntax like "int x" with "var x int", which is much more difficult to process by both machine and human and offers no benefits whatsoever.
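For anyone who hasn't seen it, here is what the syntax in question actually looks like, with the equivalent C forms in comments; a minimal sketch, not an argument either way:

```go
package main

import "fmt"

// add shows Go's left-to-right declarations: parameters and the result
// both read "name type", the reverse of C's "int n".
func add(n int, x int) int {
	return n + x
}

func main() {
	// C would write:  int x;   Go puts the type after the name:
	var x int

	// Short form: the type is inferred from the initializer.
	y := 42

	// C's function-pointer syntax, int (*f)(int, int), becomes a
	// plain left-to-right type in Go:
	var f func(int, int) int = add

	fmt.Println(f(y, x)) // x is zero-valued, so this prints 42
}
```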
I wish he had written something with more substance. I would have been able to understand his points better than a series of "F bombs". I've looked up to Rob for decades. I think he has a lot of wisdom he could impart, but this wasn't it.
The bigger issue everyone should be focusing on is growing hypocrisy and overly puritan viewpoints thinking they are holier and righter than anyone else. That’s the real plague
Of course we do. We don't live inside some game theoretic fever dream.
If anything, the Chinese approach looks more responsible than that of the current US regime.
First to a total surveillance state? Because that is a major driving force in China: to get automated control of its own population.
Give me more money now.
1. Yes, humans cause enormous harm. That’s not new, and it’s not something a single technology wave created. No amount of recycling or moral posturing changes the underlying reality that life on Earth operates under competitive, extractive pressures. Instead of fighting it, maybe try to accept it and make progress in other ways?
2. LLMs will almost certainly deliver broad, tangible benefits to ordinary people over time; just as previous waves of computing did. The Industrial Revolution was dirty, unfair, and often brutal, yet it still lifted billions out of extreme poverty in the long run. Modern computing followed the same pattern. LLMs are a mere continuation of this trend.
Concerns about attribution, compensation, and energy use are reasonable to discuss, but framing them as proof that the entire trajectory is immoral or doomed misses the larger picture. If history is any guide, the net human benefit will vastly outweigh the costs, even if the transition is messy and imperfect.
My guess is they wrote a thank you note and asked Claude to clean up the grammar, etc. This reads to me as a fairly benign gesture, no worse than putting a thank you note through Google Translate. That the discourse is polarized to a point that such a gesture causes Rob Pike to “go nuclear” is unfortunate.
For programmers, they lose the power to command a huge salary writing software and to "bully" non-technical people around the company.
Traditional programmers are no longer some of the highest paid tech people around. It's AI engineers/researchers. Obviously many software devs can transition into AI devs but it involves learning, starting from the bottom, etc. For older entrenched programmers, it's not always easy to transition from something they're familiar with.
Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knees to keep the software team happy so they don't leave and because he doesn't have insights into how the software is written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't, the code.
When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
/signed as someone who writes software
Yeah, software devs will probably be pretty upset in the way you describe once that happens. In the present though, what's actually happened is that product managers can have an LLM generate a project template and minimally interactive mockup in five minutes or less, and then mentally devalue the work that goes into making that into an actual product. They got it to 80% in 5 minutes after all, surely the devs can just poke and prod Claude a bit more to get the details sorted!
The jury is out on how productivity is impacted by LLM use. That makes sense, considering we never really figured out how to measure baseline productivity in any case.
What we know for sure is: non-engineers still can't do engineering work, and a lot of non-engineers are now convinced that software engineering is basically fully automated so they can finally treat their engineers like interchangeable cogs in an assembly line.
The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline. As things stand, major software houses and tech companies are cutting back and regressing in quality.
It is precisely the lack of knowledge and greed of leadership everywhere that's the problem.
The new screwdriver salesmen are selling them as if they are the best invention since the wheel. The naive boss, having paid huge money, expects the workers to deliver 10x work, while the new screwdriver's effectiveness is nowhere close to the sales pitch, and at worst it creates fragile items or more work. People are accusing the workers of complaining about screwdrivers because the screwdrivers could potentially replace them.
But the current layoffs "because AI is taking over" are pure BS; there was an overhire during the lockdowns, and now there's a correction (recall that people were complaining for a while that they landed a job at FAANG only for it to be doing... nothing).
That correction is what's affecting salaries (and "power"), not AI.
/signed someone actually interested in AI and SWE
1. My coworkers now submit PRs with absolutely insane code. When asked "why" they created that monstrosity, it is "because the AI told me to".
2. My coworkers who don't understand the difference between SFTP and SMTP will now argue with me on PRs by feeding my comments into an LLM and pasting the response verbatim. It's obvious because they are suddenly arguing about stuff they know nothing about. Before, I just had to be right. Now I have to be right AND waste a bunch of time.
3. Everyone who thinks generating a large pile of AI slop as "documentation" is a good thing. Documentation used to be valuable to read because a human thought that information was valuable enough to write down. Each word had a cost and therefore a minimum barrier to existence. Now you can fill entire libraries with valueless drivel.
4. It is automated copyright infringement. All of my side projects are released under the 0BSD license so this doesn't personally impact me, but that doesn't make stealing from less permissively licensed projects without attribution suddenly okay.
5. And then there are the impacts to society:
5a. OpenAI just made every computer for the next couple of years significantly more expensive.
5b. All the AI companies are using absurd amounts of resources, accelerating global warming and raising prices for everyone.
5c. Surveillance is about to get significantly more intrusive and comprehensive (and dangerously wrong, mistaking doritos bags for guns...).
5d. Fools are trusting LLM responses without verification. We've already seen this countless times by lawyers citing cases which do not exist. How long until your doctor misdiagnoses you because they trusted an LLM instead of using their own eyes+brain? How long until doctors are essentially forced to do that by bosses who expect 10x output because the LLM should be speeding everything up? How many minutes per patient are they going to be allowed?
5e. Astroturfing is becoming significantly cheaper and widespread.
/signed as I also write software, as I assume almost everyone on this forum does.
I'm fine if AI takes my job as a software dev. I'm not fine if it's used to replace artists, or if it's used to sink the economy or planet. Or if it's used to generate a bunch of shit code that make the state of software even worse than it is today.
You can go back to the 1960s and COBOL was making the exact same claims as Gen AI today.
GenAI is also better at analyzing telemetry, designing features, and prioritizing issues than a human product manager.
Nobody is really safe.
I'll explain why I currently hate this. Today, my PM builds demos using AI tools and then goes to my director or VP to show them off. Wow, how awesome! Everybody gets excited. Now it is time to build the thing. It should take like three weeks, right? It's basically already finished. What do you mean you need four months and ongoing resourcing for maintenance? But the PM built it in a day?
But no one is safe. Soon the AI will be better at CEOing.
Everybody in the company envied the developers and the respect they got, especially the sales people.
The golden era of devs as kings has started crumbling.