Rarely do I read something that starts off with such promise!
>i'm going to change your diaper and burp you
>Carlito is a very good boy Go piss in your diaper you big baby
>He doesn't care. He is a big baby who filled up his diaper with pee pee and poo poo
>you are a big baby and i am going to change your diaper and burp you
>To be clear I call executives of multi trillion dollar companies scumbags and if you can't deal with that I'm not sure what to do. Burp you? Change your diaper?
>I am going to change your diaper and burp you
>Yeah man it's real authoritarian to say your second name is doodoo. Go change your diaper you big baby
>Yeah because you're a big baby with a big full diaper
>Hello sir this is your uber outside. I have your order from the diaper store
[0] https://lovable.dev/blog/how-a-startup-replaced-a-salesforce...
[1] https://seekingalpha.com/news/4144652-klarna-shuts-down-sale...
It’s easy to assume the conditions in software companies are generalizable to everyone else, but they’re really not. For the majority of companies, which have no software development expertise, replacing Salesforce with something homegrown would be a catastrophe. Hiring someone to build it, and managing the initiative, would cost more than Salesforce does.
It doesn't have any material effect on this article, but it says something about his ethics.
> Darkfall lead developer Tasos Flambouras claims that game server logs show that the Eurogamer reviewer played the game for under three hours, a claim denied by the writer.
Even if we take the lead developer's word for it, what you are describing is simply false.
HN discussion: https://news.ycombinator.com/item?id=47114579
All I know is that whenever I read testimonies from people whose companies suddenly decided to force LLM usage for productivity and go "AI first", with colleagues opening PRs that are only machine-reviewed and contain implementations they cannot justify beyond "Claude wrote it", I burn out just reading them. And it's only going to get worse before it gets better, but not for the developers.
Honestly, the one thing I could see justifying all the investment companies are making in LLM-assisted coding is the full automation of software production. I can only see the current state of things as their "end game" if they eventually jack up pricing to tap directly into corporate budgets rather than individual developers' budgets.
The unit cost is going down, and has gone down by 20-30x or more over the years. Sure, the fixed cost of training is going up, but that's because of the implied returns. If the returns to training stop materializing, that spend would simply shrink, modulo cutoff-date updates. The companies always have the option to stop training and focus on inference cost reduction.
What am I missing here? Unless consumers decide they're no longer willing to pay what they paid before, their expectations rising even as prices fall, what else is there?
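To make that concrete, here's a toy model of the argument; every number in it is invented, not a real figure from any provider:

    # Toy model: inference margin vs. fixed training spend.
    # All numbers are made up for illustration.

    def annual_profit(tokens_served, price_per_mtok, cost_per_mtok, training_capex):
        # Profit = margin on tokens served minus the fixed training bill.
        margin = (price_per_mtok - cost_per_mtok) * tokens_served / 1e6
        return margin - training_capex

    tokens = 1e15  # tokens served per year (assumed)
    price = 10.0   # dollars charged per million tokens (assumed)

    # Heavy training investment with thin inference margin: a loss.
    print(annual_profit(tokens, price, cost_per_mtok=8.0, training_capex=5e9))  # -3e9

    # Unit cost down 20x and training halted: the same demand is profitable.
    print(annual_profit(tokens, price, cost_per_mtok=0.4, training_capex=0.0))  # 9.6e9

If serving margins are healthy and training is optional, the loss-making line item can simply be switched off; the open question is whether demand holds.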
He has been a perpetual bear
His argument is not "this tech doesn't work", but rather "these businesses aren't economically viable"
And that the smoke-and-mirrors accounting and the perpetual thirst for more billions indicate just how unviable it is
Whilst he does dunk on LLM capabilities, the framing is the business angle - can Anysphere etc. actually form a moat and make a profit?
Why? Because of cost?
> You cannot "fix" hallucinations (the times when a model authoritatively tells you something that isn't true, or creates a picture of something that isn't right), because these models are predicting things based off of tags in a dataset, which it might be able to do well but can never do so flawlessly or reliably.
ChatGPT is fairly reliable.
>Deep Research has the same problem as every other generative AI product. These models don't know anything, and thus everything they do — even "reading" and "browsing" the web — is limited by their training data and probabilistic models that can say "this is an article about a subject" and posit their relevance, but not truly understand their contents. Deep Research repeatedly citing SEO-bait as a primary source proves that these models, even when grinding their gears as hard as humanely possible, are exceedingly mediocre, deeply untrustworthy, and ultimately useless.
This is untrue in spirit.
> You can fight with me on semantics, on claiming valuations are high and how many users ChatGPT has, but look at the products and tell me any of this is really the future.
Imagine if they’d done something else.
Imagine if they’d done anything else.
Imagine if they’d have decided to unite around something other than the idea that they needed to continue growing.
Imagine, because right now that’s the closest you’re going to fucking get.
This is what he said in 2024. He really thought ChatGPT was not the future.
There are so many examples, and it's clear that he's not arguing in good faith and has consistently gotten the spirit wrong.
I'm lucky to have worked in the field for a long time, and be able to spend a lot of tokens. In the last month it's become clear to me that the tech works. The science is done, and what's left is engineering.
There are a lot of risks and mitigations and theory to build, but it's all solvable. The tech isn't mature, but neither was the Internet 30 years ago. And we built transatlantic cables and ran new wires to everyone's house.
People I care about, engineers with 20 years of experience, are having mental health breakdowns, caused by Zitron's work. They insist the tech will never work, and avoid learning about it, becoming progressively more paranoid and isolated. I'm trying to be supportive and help them start to recover, but it's slow going.
If someone is having a crisis about this, I hope they start talking to a therapist. I don't need them to agree with me, but I do need them to not harm themselves.
They can always learn the technology later, when and if it proves itself to be useful :) I personally don't understand the hype, even after using Claude and other AI tools - but perhaps that will change in the future.
(And it is already useful, just not as much as some people sell it)
And let it be clear that nobody is being "actively hurt" by legitimate economic/business grievances. This is victim-blaming and disgusting rhetoric.
If you are right, and the tech works, both you and they will be continuing this conversation in a soup kitchen.
lmfao
Despite the vulgarity, it's exceptionally illuminating as to how many of these slop pieces are mere pretense at rhetoric. I see this pretty consistently with a lot of the material I come across on the job that's gone through the LLM meat-grinder.
Also, the comment made me giggle like a little kid.
It is a problem when your doomsday timeline for obsolescence is already out of date the minute you publish. The memo itself was fantasy doomer porn on day 1.
Well, AI partisans have applied grandiose terms like "thinking," "intelligence," and "soul" to these machines. It's not wrong to push back and remind people what they really are.
So I guess we all agree, except that some people think "just statistics" is derogatory phrasing!
Salesforce, SAP, etc exist for a reason.
An AI doomsday report shook US markets
Here goes:
The Epoch data everyone keeps citing measures the price per token charged to API customers. That's the sticker price. It tells you nothing about whether the business is viable, because the existential risk for AI companies isn't the marginal cost of running a query. It's the upfront capital expenditure on chips and datacenters, committed years before you know what demand looks like.
Anthropic CEO Dario Amodei spelled this out in his Dwarkesh interview [0]. Here's the short version:
1. Data centers take 1-2 years to build out.
2. Each gigawatt costs roughly $10-15B per year.
3. The industry is currently at ~10-15 GW, scaling roughly 3x annually.
4. By 2028, ~100 GW. By 2029, ~300 GW.
5. We're talking multiple trillions per year in committed infrastructure spend across the industry.
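A quick back-of-the-envelope on those figures; the capacities and cost per GW are the rough numbers above, not authoritative data:

    # Annual spend implied by the buildout path above, using midpoints
    # of the quoted ranges. Rough by construction.

    cost_per_gw = 12.5e9  # midpoint of ~$10-15B per GW per year

    for year, gw in {2025: 12, 2028: 100, 2029: 300}.items():
        print(f"{year}: ~{gw} GW -> ~${gw * cost_per_gw / 1e12:.2f}T/year")

    # 2025: ~12 GW -> ~$0.15T/year
    # 2028: ~100 GW -> ~$1.25T/year
    # 2029: ~300 GW -> ~$3.75T/year

That's where "multiple trillions per year" comes from.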
Now NVIDIA's Q4 earnings [1], which printed today:
1. $68.1B in quarterly revenue, $62.3B from data center alone.
2. Full-year: $215.9B, up 65% YoY. Guiding $78B next quarter.
3. Someone is writing those checks. Those checks are not refundable.
Dario, who believes we're 1-3 years from a "country of geniuses in a data center," described his own demand prediction as a "hellish" problem.
His exact framing: If this revenue comes in at $800B instead of $1T, "there's no force on earth, there's no hedge on earth" that could stop him from going bankrupt if he'd bought compute at the higher projection.
He's at ~$10B annualized revenue today, and he won't commit to buying at the scale his own thesis demands, because being off by a single year is fatal.
This is the actual argument (I'm not saying this is Ed's argument, but this is the argument against these companies). Not "inference tokens are expensive."
The argument is structural: these companies must pre-commit billions in non-recoverable CAPEX based on demand projections that are, by the CEO's own admission, a coin flip.
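A stylized version of that structural bet; all numbers here are invented for illustration:

    # Pre-commit compute against projected demand; if demand lands a year
    # late, the committed spend doesn't wait with it. Invented numbers.

    revenue_now = 10e9      # annualized revenue today (assumed)
    projected_growth = 3.0  # the demand projection compute is bought against
    capex_ratio = 0.7       # committed spend per dollar of projected revenue (assumed)

    committed = revenue_now * projected_growth * capex_ratio  # $21B locked in

    for actual_growth in (3.0, 1.0):  # projection hits vs. demand a year late
        revenue = revenue_now * actual_growth
        net = (revenue - committed) / 1e9
        print(f"growth {actual_growth}x: ${revenue / 1e9:.0f}B revenue vs "
              f"${committed / 1e9:.0f}B committed -> net {net:+.0f}B")

    # growth 3.0x: $30B revenue vs $21B committed -> net +9B
    # growth 1.0x: $10B revenue vs $21B committed -> net -11B

Miss by one year and the checks still clear; that's the "no hedge on earth" point.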
The gross margins on serving tokens might be great. But the training spend for next-gen models grows exponentially, and it has to be funded before that model earns a dollar.
The Epoch chart measures what customers pay per token. It doesn't measure the $215.9B NVIDIA invoice those customers collectively funded this year, or that these chip purchases are one-way bets against future demand that may or may not materialize.
Inference costs going down 20x is wonderful for consumers. It tells you almost nothing about whether the companies making those chips, or the companies buying them, will survive the demand prediction gauntlet.
And if we're being honest, the Epoch data showing 9x to 900x price drops per year should make you more nervous, not less, because it means the asset you bought last year is depreciating at a rate that makes used cars look like gold bars.
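Taking that analogy at face value (hypothetical purchase price; the k values are just the endpoints of the Epoch range above):

    # If the price your hardware's output commands falls k-fold in a year,
    # the revenue value of last year's purchase falls with it. Illustrative
    # only: equating token price with asset value is the analogy above.

    purchase = 40_000  # assumed cost of one accelerator, hypothetical

    for k in (9, 900):
        print(f"{k}x/year price drop -> ~${purchase / k:,.0f} of year-one "
              f"pricing power left per ${purchase:,} of hardware")

    # 9x/year price drop -> ~$4,444 of year-one pricing power left per $40,000 of hardware
    # 900x/year price drop -> ~$44 of year-one pricing power left per $40,000 of hardware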
[0] https://www.youtube.com/watch?v=n1E9IZfvGMA&t=2298s
[1] https://nvidianews.nvidia.com/news/nvidia-announces-financia...
What is this document?
What is the context?
https://bsky.app/profile/edzitron.com/post/3mfkc63h6222l
> "Here is an annotated version of the Citrini Memo with my own intro. It is analyslop - scare-fiction written to ingratiate AI boosters and analysts/traders with tales of ultra-automation and socialist data center policies. Shameful that the markets reacted at all."
I wish everyone would just calm down a bit.
"AI fake, AI poo poo, AI going away!" is the only argument he ever had. Nothing more.
OpenAI will collapse, almost certainly. Anthropic might get by if they can make it to IPO before it all comes tumbling down. Google will buy up all the datacenters in a fire sale like they did with dark fiber after the .com bubble popped and continue building out stuff like NotebookLM.
Amazon and Microsoft will still be there selling server time to model providers and doing custom enterprise solutions like always. They already host the major proprietary models and sell API access.[0]
The top open models are already good enough. At this point prompting and coordination are the big bottlenecks. It would be nice if the bubble lasts long enough for open models to match at least the latest Opus.
His problem is the focus on the bubble and not on what usually happens after. People will bandy his pieces about, insisting it's all short-lived and they can just wait it out. Kimi K2.5, GLM 5, and MiniMax 2.5 aren't going away.
[0] For example: https://azure.microsoft.com/en-us/blog/claude-opus-4-6-anthr...
https://aws.amazon.com/about-aws/whats-new/2026/2/claude-opu...
It's one thing to dislike or even detest something, but to constantly claim it is worthless and without use, when people are already benefiting from it every day, is nothing short of delusion.
That's an interesting way to start criticism about ignorance