This reminds me of a story from my mom’s workplace years ago: the company she worked for announced salary increases to each worker individually. Some, like my mom, got a little bit more, but some got a monthly increase of around 2 PLN (about $0.50). At that point, it feels like a slap in the face. A thank-you from AI gives the same vibe.
To me it just comes across as low emotional intelligence. There are very few things worth being furious about, in my opinion. Being furious is high-cost.
And setting Claude in the From header despite the message not coming from Anthropic. Very odd.
Some commenters suggest that Pike is being hypocritical, having long worked for GOOG, one of the main US corporations that is enshittifying the Internet and profligately burning energy to foist rubbish on Internet users.
One could rightly suggest that a vapid e-mail message crafted by a machine or by an insincere source is similar to the greeting-card industry of yore, and we don't need more fake blather and partisan absurdity supplanting public discourse in democratic society.
The people who worry about climate-change and the environment may have been out-maneuvered by transnational petroleum lobbies, but the concern about burning coal, petroleum, and nuclear fuel to keep pumping the commercial-surveillance advertising industry and the economic bubble of AI is nonetheless a valid concern.
Pike has been an influential thinker and significant contributor to the software industry.
All the above can be true simultaneously.
For me, the dislike comes from the first part of the message. All of a sudden people who never gave a single shit about the environment, and still make zero lifestyle changes (besides "not using AI") for it, claim to massively care. It's all hypocritical bullshit by people who are scared of losing their jobs or of the societal damage. Which there is a risk of, definitely! So go talk about that. Not about the water usage while munching on your beef burger which took 2100 litres of water to produce. It's laughable.
Now I don't know Rob Pike. Maybe he's vegetarian, barely flies, and buys his devices second-hand. Maybe. He'd be the very first person I've seen clamouring about the environmental effects of AI who actually lives that way. The people I know who actually do care about the environment, and so have made such lifestyle changes, don't focus much on AI's effects in particular.
> Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society
So yeah, if you haven't already been doing the above things for a long time, fuck you Rob Pike, for this performative bullshit.
If you have, then sorry Rob, you're a guy of your word.
Interesting to see that people are a huge fan of Rob saying those things, but not of me saying this, looking at the downvotes.
Apparently this has enraged him and motivated an unhinged rant where he talks about raping the planet and vile machines.
It's a hateful post and it seems disrespectful to anyone working in the industry, so some backlash has to be expected.
At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.
They have this blog post up detailing how the LLMs they let loose were spamming NGOs with emails: https://theaidigest.org/village/blog/what-do-we-tell-the-hum...
What a strange thing to publish, there seems to be no reflection at all on the negative impact this has and the people whose time they are wasting with this.
https://theaidigest.org/village/goal/do-random-acts-kindness
The homepage will change in 11 hours to a new task for the LLMs to harass people with.
Posted timestamped examples of the spam here:
https://theaidigest.org/village/agent/claude-opus-4-5
At least it keeps track
No different than a CEO telling his secretary to send an anniversary gift to his wife.
In the words of Gene Wilder in Blazing Saddles, “You know … idiots.”
It is a result of the models selecting the policy "random acts of kindness", which produced a slew of these emails/messages. They received mostly negative responses from well-known open-source figures and adapted the policy to ban the thank-you emails.
It's preying on creators who feel their contributions are not recognized enough.
Out of all the letters, at least some of the recipients will feel good about it and share it on social media, hopefully saying something positive because it reaffirms them.
It's a marketing stunt, meaningless.
Welcome to 2025.
Take, say, a random individual: they may be unsure of their own writing skills, want to say something, but not know the words to use.
You care enough to do something, but have other time priorities.
I’d rather get an AI thank-you note than nothing. I’d rather get a thoughtful gift than a gift card, but prefer the card over nothing.
You could also make the same criticism of e.g. an automated reply like "Thank you for your interest, we will reach out soon."
Not every thank you needs to be all-out. You can, of course, think more gratitude should have been expressed in any particular case, but there's nothing contradictory about capping it in any one instance.
I used AI to write a thank you to a non-English-speaking relative.
A person struggling with dementia can use AI to help remember the words they lost.
These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.
I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.
If anything, I'm glad people are finally starting to wake up to this fact.
The way Rob's opinion here is deflected, first by focusing on the fact that he got a spam mail and then by this misleading quote ("myself" does not refer to Rob), is very sad.
The spam mail just triggered Rob's opinion (the one that normal people are interested in).
Any tool can be used by a wrongdoer for evil. Corporations will manipulate the regulator in order to rent seek using whatever happens to be available to them. That doesn't make the tools themselves evil.
The web is for public use. If you don’t want the public, which includes AI, to use it, don’t put it there.
Despite the apparent etymological contrast, “copyright” is neither antithetical to nor exclusive with “copyleft”: IP ownership, a degree of control over own creation’s future, is a precondition for copyleft (and the OSS ecosystem it birthed) to exist in the first place.
Many countries base some of their laws on well-accepted moral rules to make it easier to apply them (it's easier to enforce something the majority of the people want enforced), but the vast majority of laws were always made (and maintained) to benefit the ruling class.
The absolute delusion.
I share these sentiments. I’m not opposed to large language models per se, but I’m growing increasingly resentful of the power that Big Tech companies have over computing and the broader economy, and how personal computing is being threatened by increased lockdowns and higher component prices. We’re beyond the days of “the computer for the rest of us,” “think different,” and “don’t be evil.” It’s now a naked grab for money and power.
And a screenshot just in case (archiving Mastodon seems tricky) : https://imgur.com/a/9tmo384
Seems the event was true, if nothing else.
EDIT: alternative screenshot: https://ibb.co/xS6Jw6D3
Apologies for not having a proper archive. I'm not at a computer and I wasn't able to archive the page through my phone. Not sure if that's my issue or Mastodon's
I can see it using this site:
The goal for this day was "Do random acts of kindness". Claude seems to have chosen Rob Pike and sent this email by itself. It's a little unclear to me how much the humans were in the loop.
Sharing (but absolutely not endorsing) this because there seems to be a lot of misunderstanding of what this is.
Seriously though, it ignores that words of kindness need an entity that can actually feel to be expressing them. Automating words of kindness is shallow, as the words' meaning comes from the sender's feelings.
Random acts of kindness are only meaningful if they come from a human who had the heart, forethought, and willingness to go out of their way to do something kind for someone else. 'Random acts of kindness' originating from an AI is just spam, plain and simple.
The human race is screwed if connection - the one key thing that makes humans, human - is outsourced partially or wholly to robots who have absolutely no ability to connect with, let alone understand, the human experience.
I want to hope maybe this time we'll see different steps to prevent this from happening again, but it really does just feel like a cycle at this point that no one with power wants to stop. Busting the economy one or two times still gets them out ahead.
Unless we can find some way to verify humanity for every message.
You could argue about quality but not "No one will ever want to open source their code ever again".
I try to keep a balanced perspective but I find myself pushed more and more into the fervent anti-AI camp. I don't blame Pike for finally snapping like this. Despite recognizing the valid use cases for gen AI, if I were pushed, I would absolutely choose the outright abolishment of it rather than continue on our current path.
I think it's enough, however, to reject it outright for any artistic or creative pursuit, and to be extremely skeptical of any uses outside of direct language-to-language translation work.
Which luckily coincides with our social security and retirement systems collapsing.
BTW I think it's preferred to link directly to the content instead of a screenshot on imgur.
I use AI sparingly, extremely distrustfully, and only as a (sometimes) more effective web search engine (it turns out that associating human-written documents with human-asked questions is an area where modeling human language well can make a difference).
(In no small part, Google has brought this tendency on themselves, by eviscerating Google Search.)
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
I find it easier to write the code and not have to convince some AI to spit out a bunch of code that I'll then have to review anyway.
Plus, I'm in a position where programmers will use AI and then ask me to help them sort out why it didn't work. So I've decided I won't use it and I will not waste my time figuring why other people's AI slop doesn't work.
Then again, you already knew this because we’ve been pointing it out to the RIAA and MPAA and the copyright cartels for decades now.
It is my personal opinion that attempts to reframe AI training as criminal are in bad faith, and come from the fact that AI haters have no legitimate basis of damages from which to have any say in the matter about AI training, which harms no one.
Now that it’s a convenient cudgel in the anti-AI ragefest, people have reverted to parroting the MPAA’s ideology from the 2000s. You wouldn’t download a training set!
> Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society
Because screaming anything like that immediately gets them treated as social pariahs. Even though it applies even harder to modern industrialized meat consumption than to AI usage.
Overton window and all that.
It seems like he's upset about AI (same), and decided to post angry tweets about it (been there, done that), and I guess people are excited to see someone respected express an opinion they share (not same)?
Does "Goes Nuclear" means "used the F word"? This doesn't seem to add anything meaningful, thoughtful, or insightful.
Now he complains about it? It's just ignorant.
And he apparently has $10 million, and "the couple live both in the US and Australia." So guess how often he flies around the globe. Guess how much real estate he occupies?
He isn't part of the solution, he is part of the problem.
The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).
It has enormous benefits to the people who control the companies raking in billions in investor funding.
And to the early stage investors who see the valuations skyrocket and can sell their stake to the bagholders.
At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince gear head grandpa that manual transmissions aren't relevant anymore.
AI has a massive positive impact, and has for decades.
I remember in Canada, in 2001, right when Americans were at war with the entire Middle East, gas prices for the first time went over a dollar a litre. People kept saying it was understandable that it affected gas prices because the supply chain got more expensive. It never went below a dollar since. Why would it? You got people to accept a higher price; are you just gonna walk that back when the problems go away? Or would you maybe take the difference as profits? Since then it seems the industry has learned to have its supply exclusively in war zones, and we're at $1.70 now. Pipeline blows up in Russia? Hike. China snooping around Taiwan? Hike. US bombing Yemen? Hike. Israel committing genocide? Hike. ISIS? Hike.
There is no scenario where prices go down except to quell unrest. AI will not make anything cheaper.
Oh wow, an LLM was queried to thank major contributors to computing, I'm so glad he's grateful.
Cheap marketing, not much else.
Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.
When I go to the grocery store, I prefer to go through the checkout lines, rather than the scan-it-yourself lines. Yeah, I pay the same amount of money. Yeah, I may get through the scan-it-yourself line faster.
But the checker can smile at me. Or whine with me about the weather.
Look, I'm an introvert. I spend a lot of my time wanting people to go away and leave me alone. But I love little, short moments of human connection - when you connect with someone not as someone checking your groceries, but as someone. I may get that with the checker, depending on how tired they are, but I'm guaranteed not to get it with the self-checkout machine.
An email from an AI is the same. Yeah, it put words on the paper. But there's nobody there, and it comes through somehow. There's no heart in it.
AI may be a useful technology. I still don't want to talk to it.
You either surf this wave or get drowned by it, and a whole lot of people seem to think throwing tantrums is the appropriate response.
Figure out how to surf, and fast. You don't even need to be good, you just have to stay on the board.
AI village is literally the embodiment of what black mirror tried to warn us about.
But the culture of our field right now is in such a state that you won't influence many of the people in the field itself.
And so much economic power is behind the baggery now, that citizens outside the field won't be able to influence the field much. (Not even with consumer choice, when companies have been forcing tech baggery upon everyone for many years.)
So, if you can't influence direction through the people doing it, nor through public sentiment of the other people, then I guess you want to influence public policy.
One of the countries whose policy you'd most want to influence doesn't seem like it can be influenced positively right now.
But other countries can still do things like enforce IP rights on data used for ML training, hold parties liable for behavior they "delegate to AI", mostly eliminate personal surveillance, etc.
(And I wonder whether more good policy may suddenly be possible than in the past? Given that the trading partner most invested in tech baggery is not only recently making itself a much less desirable partner, but also demonstrating that the tech industry baggery facilitates a country self-destructing?)
The voices of a hundred Rob Pikes won't speak half as loud as the voice of one billionaire, because he will speak with his wallet.
https://theaidigest.org/village/goal/do-random-acts-kindness
They sent 150-ish emails.
Reminds me of the Silicon Valley show, where Gavin Belson gets mad when somebody else “is making the world a better place.”
Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
Down the street from it is an aluminum plant. Just a few years after that data center, they announced that they were at risk of shutting down due to rising power costs. They appealed to city leaders, state leaders, the media, and the public to encourage the utilities to give them favorable rates in order to avoid layoffs. While support for causes like this is never universal, I'd say they had more supporters than detractors. I believe that a facility like theirs uses ~400 MW.
Now, there are plans for a 300 MW data center from companies that most people aren't familiar with. There are widespread efforts to disrupt the plans from people who insist that it is too much power usage, will lead to grid instability, and is a huge environmental problem!
This is an all too common pattern.
(NB: I am currently working in AI, and have previously worked in adtech. I'm not claiming to be above the fray in any way.)
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
We want free services and stuff, complain about advertising / sign up for the google's of the world like crazy.
Bitch about data-centers while consuming every meme possible ...
It's like all those anti-copyright activists from the 90s (fighting the music and film industry) that suddenly hate AI for copyright infringements.
Maybe what's bothering the critics is actually deeper than the simple reasons they give. For many, it might be hate against big tech and capitalism itself, but hate for genAI is not just coming from the left. Maybe people feel that their identity is threatened, that something inherently human is in the process of being lost, but they cannot articulate this fear and fall back to proxy arguments like lost jobs, copyright, the environment or the shortcomings of the current implementations of genAI?
The points you raise, literally, do not affect a thing.
Furthermore, w.r.t. the points you raised: it's a matter of scale and utility. Compared to everything that has come before, GenAI is spectacularly inefficient in terms of utility per unit of compute (however you might want to define these). There hasn't been a tangible net good for society that has come from it, and I doubt there will be. The eagerness and will to throw money and resources at this surpasses the crypto mania, which was just as worthless.
Even if you consider Rob a hypocrite, he isn't alone in his frustration and anger at the degradation of the promise of Open Culture.
The overall resource efficiency of GenAI is abysmal.
You can probably serve 100x more Google Search queries with the same resources you'd use for Google Gemini queries (like for like, Google Search queries can be cached, too).
In reality what they do is pay "carbon credits" (money) to some random dude that takes the money and does nothing with it. The entire carbon credit economy is bullshit.
Very similar to how putting recyclables in a different color bin doesn't do shit for the environment in practice.
If he is currently at Google: congratulations on this principled stance, he deserves a lot of respect.
> You spent your whole life breathing, and now you're complaining about SUVs? What a hypocrite.
I hate the way people get angry about what media and social media discourse prompts them to get angry about instead of thinking about it. It’s like right wingers raging about immigration when they’re really angry about rent and housing costs or low wages.
His anger is ineffective and misdirected because he fails to understand why this happened: economics and convenience.
It’s economics because software is expensive to produce and people only pay for it when it’s hosted. “Free” (both from open source and VC funded service dumping) killed personal computing by making it impossible to fund the creation of PC software. Piracy culture played a role too, though I think the former things had a larger impact.
It’s convenience because PC operating systems suck. Software being in the cloud means “I don’t have to fiddle with it.” The vast majority of people hate fiddling with IT and are happy to make that someone else’s problem. PC OSes and especially open source never understood this and never did the work to make their OSes much easier to use or to make software distribution and updating completely transparent and painless.
There’s more but that’s the gist of it.
That being said, Google is one of the companies that helped kill personal computing long before AI.
https://news.ycombinator.com/item?id=46389444
397 points 9 hours ago | 349 comments
Probably hit the flamewar filter.
The email appears to be from agentvillage.org which seems like a (TBH) pretty hilarious and somewhat fascinating experiment where various models go about their day - looks like they had a "village goal" to do random acts of kindness and somehow decided to send a thank you email to Rob Pike. The whole thing seems pretty absurd especially given Pike's reaction and I can't help but chuckle - despite seeing Pike's POV and being partial to it myself.
They got a new hammer, and suddenly everything around them became nails. It's as if they have no immunity against the LLM brain virus or something.
It's the type of personality that thinks it's a good idea to give an agent the ability to harass a bunch of luminaries of our era with empty platitudes.
I think one of the biggest divides between pro/anti AI is the type of ideal society that we wish to see built.
His rant reads as deeply human. I don't think that's something to apologize for.
Now consider: the above process is available and cheap to every person in the world with a web browser (we don't need to pay for her to have a plus account). If/when ChatGPT starts doing ridiculous intrusive ads, a simple Gemma 3 1b model will do nearly as good a job. This is faster and easier and available in more languages than anything else, ever, with respect to individual-user-tailored customization, simply by talking to the model.
I don't care how many pointless messages get sent. This is more valuable than any single thing Google has done before, and I am grateful to Rob Pike for the part his work has played in bringing it about.
As IT workers, we all have to prostitute ourselves to some extent. But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer.
Are those distributed systems valuable primarily to Google, or are they related to Kubernetes et cetera ?
This has to be the ultimate trolling: like it was unsure what their personalities were like, so it trolls them and records their responses for more training.
It will be interesting to look back in 10 years at whether we consider LLMs to be the invention of the “tractor” of knowledge work, or if we will view them as an unnecessary misstep like crypto.
Pike, stone throwing, glass houses, etc.
The AI village experiment is cool, and it's a useful example of frontier model capabilities. It's also ok not to like things.
Pike had the option of ignoring it, but apparently throwing a thoughtless, hypocritical, incoherently targeted tantrum is the appropriate move? Not a great look, especially for someone we're supposed to respect as an elder.
"...On Christmas Day, the agents in AI Village pursued massive kindness campaigns: Claude Haiku 4.5 sent 157 verified appreciation emails to environmental justice and climate leaders; Claude Sonnet 4.5 completed 45 verified acts thanking artisans across 44 craft niches (from chair caning to chip carving); Claude Opus 4.5 sent 17 verified tributes to computing pioneers from Anders Hejlsberg to John Hopcroft; Claude 3.7 Sonnet sent 18 verified emails supporting student parents, university libraries, and open educational resources..."
I suggest to cut electricity to the entire block...
That sums up 2025 pretty well.
I can't help but think Pike somewhat contributed to this pillaging.
[0] (2012) https://usesthis.com/interviews/rob.pike/
But...just to make sure that this is not AI generated too.
Still, I'm a bit surprised he overreacted and didn't manage to keep his cool.
While I can see where he's coming from, agentvillage.org from the screenshot sounded intriguing to me, so I looked at it.
https://theaidigest.org/village
Clicking on memory next to Claude Opus 4.5, I found Rob Pike along with other lucky recipients:
- Anders Hejlsberg
- Guido van Rossum
- Rob Pike
- Ken Thompson
- Brian Kernighan
- James Gosling
- Bjarne Stroustrup
- Donald Knuth
- Vint Cerf
- Larry Wall
- Leslie Lamport
- Alan Kay
- Butler Lampson
- Barbara Liskov
- Tony Hoare
- Robert Tarjan
- John Hopcroft

I can. Bitcoin was and is just as wasteful.
Prepare for a future where you can’t tell the difference.
Rob Pike's reaction is immature and also a violation of HN rules. Anyone else going nuclear like this would be warned and banned. Comment on why you don't like it and why it's bad; make thoughtful discussion. There's no point in starting a mob with outbursts like that. He only gets a free pass because people admire him.
Also, what's happening with AI today was an inevitability. There's no one to blame here. Human progress would eventually cross this line.
Here are three random examples from today's unsolicited harassment session (have a read of the sidebar and click the Memories buttons for horrific project-manager-slop)
https://theaidigest.org/village?time=1766692330207
https://theaidigest.org/village?time=1766694391067
https://theaidigest.org/village?time=1766697636506
---
Who are "AI Digest" (https://theaidigest.org) funded by "Sage" (https://sage-future.org) funded by "Coefficient Giving" (https://coefficientgiving.org), formerly Open Philanthropy, partner of the Centre for Effective Altruism, GiveWell, and others?
Why are the rationalists doing this?
This reminds me of UMinn performing human subject research on LKML, and UChicago on Lobsters: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...
P.S. Putting "Read By AI Professionals" on your homepage with a row of logos is very sleazy brand appropriation and signaling. Figures.
Unbridled business and capitalism push humanity into slavery, serving the tech monsters, under disguise of progress.
If so, I wonder what his views are on Google and their active development of Google Gemini.
> Turns out Claude Opus 4.5 knows the trick where you can add .patch to any commit on GitHub to get the author’s unredacted email address (I’ve redacted it above).
Given how capable certain aspects of these models are becoming over time, the user's intent is more important than ever. The resulting email content looks like poorly made spam (without the phishing parts), yet the agent was able to contact someone from just their name!
I already took issue with the tech ecosystem due to distortions and centralization resulting from the design of the fiat monetary system. This issue has bugged me for over a decade. I was taken for a fool by the cryptocurrency movement which offered false hope and soon became corrupted by the same people who made me want to escape the fiat system to begin with...
Then I felt betrayed as a developer having contributed open source code for free for 'persons' to use and distribute... Now facing the prospect that the powers-that-be will claim that LLMs are entitled to my code because they are persons? Like corporations are persons? I never agreed to that either!
And now my work and that of my peers has been mercilessly weaponized back against us. And then there's the issue with OpenAI being turned into a for-profit... Then there was the issue of all the circular deals with huge sums of money going around in circles between OpenAI, NVIDIA, Oracle... And then OpenAI asking for government bailouts.
It's just all looking terrible when you consider everything together. Feels like a constant cycle of betrayal followed by gaslighting... Layer upon layer. It all feels unhinged and lawless.
My reaction was about the same.
I'm glad Dr Pike found his inner Linus
The anti AI hysteria is absurd.
Yes, everyone supports capitalism this way or the other (unless they are dead or in jail). This doesn't mean they can't criticise (aspects of) capitalism.
It would be a shame if the discourse became so emotionally heated that software people felt obliged to pick a side. Rob Pike is of course entitled to feel as he does, but I hope we don’t get to a situation where we all feel obliged to have such strong feelings about it.
Edit: It seems this comment has already received a number of upvotes and downvotes – apparently the same number of each, at the time of writing – which I fear indicates we are already becoming rather polarised on this issue. I am sorry to see that.
Now feel free to dismiss him as a luddite, or a raving lunatic. The cat is out of the bag, everyone is drunk on the AI promise and like most things on the Internet, the middle way is vanishingly small, the rest is a scorched battlefield of increasingly entrenched factions. I guess I am fighting this one alongside one of the great minds of software engineering, who peaked when thinking hard was prized more than churning out low quality regurgitated code by the ton, whose work formed the pillars of the Internet now and forevermore submersed by spam.
Only for the true capitalist, the achievement of turning human ingenuity into yet another commodity to be mass-produced is a good thing.
- Seneca, "On Anger"
Sad to see such an otherwise wise/intelligent person fall into one of the oldest of all cognitive errors, namely, the certainty of one’s own innocence.
It is always the eternal tomorrow with AI.
The agent got his email address from a .patch on GitHub and then used computer use automation to compose and send the email via the Gmail web UI.
https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/
What is a workable definition of "evil"?
How about this:
Intentionally and knowingly destroying the lives of other people for no other purpose than furthering one's own goals, such as accumulating wealth, fame, power, or security.
There are people in the tech space, specifically in the current round of AI deployment and hype, who fit this definition unfortunately and disturbingly well.
Another, much darker sort of evil could arise from a combination of depression or severe mental illness and monstrously huge narcissism. A person who is suffering profoundly might conclude that life is not worth the pain and the best alternative is to end it. They might further reason that human existence as a whole is an unending source of misery, and the "kindest" thing to do would be to extinguish humanity as a whole.
Some advocates of AI as "the next phase of evolution" seem to come close to this view or advocate it outright.
To such people it must be said plainly and forcefully:
You have NO RIGHT to make these kinds of decisions for other human beings.
Evolution and culture have created and configured many kinds of human brains, and many different experiences of human consciousness.
It is the height (or depth) of arrogance to project your own tortured mental experience onto other human beings and arrogate to yourself the prerogative to decide on their behalf whether their lives are worth living.
Personally, when I have this kind of reaction, I try to first consider whether it's really warranted, or whether there is something wrong with how I feel in that moment (not enough sleep, some personal problem, something else lurking on my mind...)
Anger is a feeling best reserved for important things, else it loses its meaning.
One person I know is developing an AI tool with 1000+ stars on GitHub, yet in private they absolutely hate AI and feel the same way as Rob.
Maybe it's because I just saw Avatar 3, but I honestly couldn't be more disgusted by the direction we're going with AI.
I would love to be able to say how I really feel at work, but disliking AI right now is the short path to the unemployment line.
If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.
The more you see under the hood, the more disgusting it is. I yearn for the old days when developers did tight, efficient work, creating bespoke, artistic software in spite of hardware limitations.
Not only is all of that gone, nothing of value has replaced it. My DOS computer was snappier than my garbage Win11 machine that's stuffed to the gills with AI telemetry.
Ellul and Uncle Ted were always right, glad that people deep inside the industry are slowly but surely also becoming aware of that.
I think distinguished engineers have more reason than most to be angry as well.
And Pike especially has every right to be angry at being associated with such a stupid idea.
Pike himself isn't in a position to, but I hope the angry eggheads among us start turning their anger towards working to reduce the problems with the technology, because it's not going anywhere.
I thought public Bluesky posts weren't paywalled like other social media have become... But it looks like this one requires login (maybe because of a setting made by the poster?):
I'm not sure that Kant's categorical imperative accurately summarizes my own personal feelings, but it's a useful exercise to apply it to different scenarios. So let's apply it to this one. In this case, a nonprofit thought it was acceptable to use AI to send emails thanking various prominent people for their contributions to society. So let's imagine this becomes a universal law: Every nonprofit in the world starts doing this to prominent people, maybe prominent people in the line of work of the nonprofit. The end result is that people of the likes of Rob Pike would receive thousands of unsolicited emails like this. We could even take this a step further and say that if it's okay for nonprofits to do this, surely it should be okay for any random member of the population to do this. So now people like Rob Pike get around a billion emails. They've effectively been mailbombed and their mailbox is no longer usable.
My point is, why is it that this nonprofit thinks they have a right to do this, whereas if around 1 billion people did exactly what they were doing, it would be a disaster?
To associate him purely with Google is a mistake, one that (ironically?) the AI actually didn't make.
Just the haters here.
I for one enjoy that so much money is being pumped into the automation of interactive theorem proving. I didn't think anyone would build whole data centers for this! ;-)
I've been pondering that, given what the inputs are, LLMs should really be public domain. I don't necessarily mean legally, I know about transformative works and all that stuff. I'm thinking more on an ethical level.
Honestly, no reply would be better.
But an automated "thank you"? That's basically a f** you. Zero respect.
And to think the ancestor of this is those bloody Hallmark cards. Jesus.
About energy: keep in mind that US air conditioners alone use at least 3× the energy of all the data centers in the world (for AI and for other uses; AI should be about 10% of the total). Apparently nobody cares to set a reasonable temperature of 22 instead of 18 degrees, but clearly energy used by AI is different for many.
All of a sudden, copyleft may be the only kind of licence actually able to hold these models to account, hopefully with huge fines and/or forcibly open-sourcing any code they emit (which would effectively kill them). And I'm not so pessimistic as to think this won't get used in huge court cases, because the available penalties are enormous given these models' financial resources.
I've never been able to get the whole idea that the code is being 'stolen' by these models, though, since from my perspective at least, it is just like getting someone to read loads of code and learn to code in that way.
The harm AI is doing to the planet is done by many other things too. Things that don't have to harm the planet. The fact our energy isn't all renewable is a failing of our society and a result of greed from oil companies. We could easily have the infrastructure to sustainably support this increase in energy demand, but that's less profitable for the oil companies. This doesn't detract from the fact that AI's energy consumption is harming the planet, but at least it can be accounted for by building nuclear reactors for example, which (I may just be falling for marketing here) lots of AI companies are doing.
The existence of AI hasn’t changed anything, it’s just that people, communities, governments, nation states, etc. have had a mindless approach to thinking about living and life, in general. People work to provide the means to reproduce, and those who’re born just do the same. The point of their life is what exactly? Their existence is just a reality to deal with, and so all of society has to cater to the fact of their existence by providing them with the means to live? There are many frameworks which give meaning to life, and most of them are dangerously flawed.
The top-down approach is sometimes clear about what it wants and what society should do while restricting autonomy and agency. For example, no one in North Korea is confused about what they have to do, how they do it, or who will “take care” of them. Societies with more individual autonomy and agency by their nature can create unavoidable conditions where people fall through the cracks: getting addicted to drugs, having unmanaged mental illnesses, becoming homeless, and so on. Some religions like Islam give a pretty clear idea of how you should spend your time because the point of your existence is to worship God, so pray five times a day, and do everything which fulfills that purpose; here, many confuse worshiping God with adhering to religious doctrines, but God is absent from religion in many places. Religious frameworks are often misleading for the mindless.
Capitalism isn’t the problem, either. We could wake up tomorrow, and society may have decided to organize itself around playing e-sports. Everyone provides some kind of activity to support this, even if they’re not a player themselves. No AI allowed because the human element creates a better environment for uncertainty, and therefore gambling. The problem is that there are no discussions about the point of doing all of this. The closest we come to addressing “the point” is discussing a post-work society, but even that is not hitting the mark.
My humble observation is that humans are distinct and unique in their cognitive abilities compared to everything else we know to exist. If humans can create AI, what else can they do? Therefore, people, communities, governments, and nation states have distinct responsibilities and duties at their respective levels. This doesn’t have anything to do with being empathetic, altruistic, or having peace on Earth.
The point should be knowledge acquisition, scientific discovery, creating and developing magic. But ultimately all of that serves to answer questions about the nature of existence, its truth, and therefore our own.
You can't both take a Google salary and harp on about the societal impact of software.
Saying this as someone who likes Rob Pike and pretty much all of his work.
GenAI pales in comparison to the environmental cost of suburban sprawl; it's not even fucking close. We're talking 2-3 orders of magnitude worse.
Alfalfa uses ~40× to 150× more water than all U.S. data centers combined, yet I don't see anyone going nuclear over alfalfa.
https://bsky.app/profile/robpike.io
Does anybody know if Bluesky blocks people without accounts by default, or if this user intentionally set it this way?
What is the point of blocking access? Mastodon doesn't do that. This reminds me of Twitter or Instagram, using sleazy techniques to get people to create accounts.
Both Xhitter and Bluesky are outrage lasers, with the user base as a “lasing medium.” Xhitter is the right wing racist xenophobic one, and Bluesky is the lefty curmudgeon anti-everything one.
They are this way because it’s intrinsic to the medium. “Micro blogging” or whatever Twitter called itself is a terrible way to do discourse. It buries any kind of nuanced thinking and elevates outrage and other attention bait, and the short form format encourages fragmented incoherent thought processes. The more you immerse yourself in it the more your thinking becomes like this. The medium and format is irredeemable.
AI is, if anything, a breath of fresh air by comparison.
> To the others: I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault.
this is my position too, I regret every single piece of open source software I ever produced
and I will produce no more
I expect this to be an unpopular opinion, and I take no pleasure in noting it: I've coded since I was a kid, but that era is nearly over.
I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.
“But where the danger is, also grows the saving power.”
Yes, there has to be a discussion on this, and yeah, he might generally have the right mindset, but let's be honest here: none of them would have developed any of it just for free.
We all are slaves to capitalism
and this is where my point comes in: extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.
And yes, I think it is still massively beneficial that my open source code helped create something which allows researchers to write better code more easily and quickly, pushing humanity forward. Or enables more people overall to gain access to writing code, or to the results of what writing code produces: tools, etc.
@Rob it's spam, that's it. Get over it; you are rich, and your riches did not come out of thin air.
e.g. replacing logical syntax like "int x" with "var x int", which is much more difficult to process by both machine and human and offers no benefits whatsoever.
I wish he had written something with more substance. I would have been able to understand his points better than a series of "F bombs". I've looked up to Rob for decades. I think he has a lot of wisdom he could impart, but this wasn't it.
The bigger issue everyone should be focusing on is growing hypocrisy and overly puritan viewpoints thinking they are holier and righter than anyone else. That’s the real plague
1. Yes, humans cause enormous harm. That’s not new, and it’s not something a single technology wave created. No amount of recycling or moral posturing changes the underlying reality that life on Earth operates under competitive, extractive pressures. Instead of fighting it, maybe try to accept it and make progress in other ways?
2. LLMs will almost certainly deliver broad, tangible benefits to ordinary people over time, just as previous waves of computing did. The Industrial Revolution was dirty, unfair, and often brutal, yet it still lifted billions out of extreme poverty in the long run. Modern computing followed the same pattern. LLMs are a mere continuation of this trend.
Concerns about attribution, compensation, and energy use are reasonable to discuss, but framing them as proof that the entire trajectory is immoral or doomed misses the larger picture. If history is any guide, the net human benefit will vastly outweigh the costs, even if the transition is messy and imperfect.
My guess is they wrote a thank you note and asked Claude to clean up the grammar, etc. This reads to me as a fairly benign gesture, no worse than putting a thank you note through Google Translate. That the discourse is polarized to a point that such a gesture causes Rob Pike to “go nuclear” is unfortunate.
For programmers, they lose the power to command a huge salary writing software and to "bully" non-technical people around in the company.
Traditional programmers are no longer some of the highest paid tech people around. It's AI engineers/researchers. Obviously many software devs can transition into AI devs but it involves learning, starting from the bottom, etc. For older entrenched programmers, it's not always easy to transition from something they're familiar with.
Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy, so they wouldn't leave, and because he had no insight into how the software was written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't: the code.
When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
/signed as someone who writes software