Some people call this sort of thing a "circular deal", but perhaps a better way to think of it is as a very large-scale version of vendor financing? The simple version of vendor financing is when a vendor gives a retailer time to pay for goods they purchased for resale. This is effectively a loan that's backed by the retailer's ability to resell the goods. There's a possibility that the retailer goes broke and doesn't pay, but the vendor has insight into how well the retailer is doing, so they know if they're a good risk.
Similarly, Google likely knows quite a lot about Anthropic because Anthropic buys computing services from Google for resale. They're making an equity investment rather than a loan, but the money will be coming back to Google, assuming Anthropic's sales continue to rise as fast as they have been.
Also, if you own Google stock, some small part of that is an investment in Anthropic?
[1] https://www.anthropic.com/news/google-broadcom-partnership-c...
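None of the internals of these deals are public, but the round-trip mechanics described above can be made concrete with a toy sketch (every number below is invented for illustration):

```python
# Toy sketch of the vendor-financing round trip: an investor puts equity
# into a customer, and part of the cash comes back as compute purchases.
# Every number here is invented for illustration.

def net_cash_out(investment, share_spent_with_investor, gross_margin):
    """Cash the investor is actually out after one round trip."""
    cash_back = investment * share_spent_with_investor  # returns as cloud revenue
    cost_to_serve = cash_back * (1 - gross_margin)      # serving that revenue isn't free
    return investment - cash_back + cost_to_serve

# e.g. 70% of a $40B investment spent back on the investor's cloud,
# which runs at an assumed 40% gross margin:
print(f"${net_cash_out(40e9, 0.7, 0.4) / 1e9:.1f}B net, plus the equity stake")
```

With those made-up inputs the investor's true cash exposure is well under the headline figure, which is the whole point of the vendor-financing framing.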
Vendors may be positioned to know how a customer is doing, but they're also incentivized to overestimate how well a customer is going to perform.
GE Capital (edit: and GMAC) is a great example of how seemingly reasonable vendor financing can cause the lender serious problems.
To the extent that Google and Anthropic are competing for AI business, Google is somewhat hedged against Anthropic winning market share. They still get data center revenue and they own equity, so that’s a consolation prize.
On the other hand, it’s increasing Google’s investment in AI, in general.
So far both of these companies have shown they suck at support, so we know that's not it. It might help Anthropic to leverage Gemini in their competition with OpenAI, with Google taking compute commitments.
Anecdata: I'm finding a lot of my "type random question in URL/search bar" has decent top Gemini answers where I don't scroll to results unless I need to dive deeper.
Google crippling search to bolster AI is a dangerous game. But without people going to competitors, what's the recourse?
OpenAI led the game while they were the best. Anthropic followed and got better. Now OpenAI is catching up again, and so is Google with Gemini(?) ... and the open-weight models are 2 years behind.
Any win here seems only temporary, even if a new breakthrough to a strong AI happens somehow.
Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?
But they also have access to an unimaginably large data set plus reach into people’s daily lives.
Seems more like partners for world domination.
I actually mentioned to a Google friend the other week that I wouldn't be surprised to see Google tipping the hat towards Anthropic soon so as to put a little more heat on OAI.
What if AI is never good or cheap enough to reach significant profitability?
Maybe a little bit of both.
Obviously it's not a perfect comparison, but you have to wonder how much of NVIDIA's income (for instance) is ultimately funded by its own money.
~ TK
That kind of insane growth & demand is unprecedented at that scale.
https://www.anthropic.com/news/google-broadcom-partnership-c...
- Development velocity is very noticeably much higher across the board. Quality is not obviously worse, but it's LLM assisted, not vibe coding (except for experiments and internal tools).
- Things that would have been tactically built with TypeScript are now Rust apps.
- Things that would have been small Python scripts are full web apps and dashboards.
- Vibe coding (with Claude Desktop, nobody is using Replit or any of the others) is the new Excel for non-tech people.
- Every time someone has any idea it's accompanied by a multi page "Clauded" memo explaining why it's a great idea and what exactly should be done (about 20% of which is useful).
- 80% of what were web searches now go to Claude instead (for at least a significant minority of people, could easily be over 50%).
- Nobody talks about ChatGPT any more. It's Claude or (sometimes) Gemini.
- My main job isn't writing code but I try to keep Claude Code (both my personal and corpo accounts) and OpenCode (also almost always Claude, via Copilot) busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities.
We (~20 people) are probably using 2 orders of magnitude more inference than we were at the start of the year and it's consolidated away from cursor, ChatGPT and Claude to just be almost all Claude (plus a little Gemini as that's part of our Google Whateverspace plan and some people like it, mostly for non-engineering tasks).
No idea if any of this will make things better, exactly, but I think we'd be at a severe competitive disadvantage if we dropped it all and went back how things were.
I presume I'm not the only one.
Coding velocity doesn't matter if the net result is software that sucks massive schlong. The real world doesn't care if programmers can write code faster.
My hypothesis is that companies don't want to offer cheaper or better services. They only want to cut costs and keep the revenue for investors.
In other news, TQQQ is pretty high!
And also because the Plan agent generates a huge plan, asks me a couple yes/no questions with an obvious answer, and then regenerates the entire plan again. Then the Build agent gets confused anyway and does something else, and I have to round-trip about 5 times with that full context each time.
Now there are pockets of people who are extremely productive, and maybe 80-90% of the rest who will never adapt. When I say extremely, I mean people producing weeks of marketing-team effort and hundreds of unbillable hours of senior-level professional work with only a few minutes of human involvement. Paid software extensions for (awful) design software we are required by clients to use can all be duplicated in house. Technical leadership is now aware of this, and our spend on software licenses is going to drop fast. I think every project in my own portfolio has some kind of custom automation supporting it, which was unthinkable 5 years ago.
It's going to take years for practical knowledge of how to use these systems to spread and even more for market discipline to expel those who cannot or will not learn. The Industrial Revolution took nearly a century, depending on how you're counting. LLMs have only been producing coherent output in the last 5 years. They've only been as good or better than people at some things you would have done on a computer for about a year or so. Be patient. These are massive changes.
I'm at least 5x faster, if not more. With tooling I might be able to get to 10-15x.
But yeah, it's not gonna make Facebook 20% better tomorrow; it's just that you need 5 people instead of 40 to build the next Facebook.
And yet.. building shit is no longer the sole domain of the software engineer.
That's the sea change.
I've literally had finance and GTM stand things up for themselves in the last few weeks. A few tweaks (obviously around security and access), and they are good to go.
They've gone from wrangling spreadsheets to smooth automated workflows that allow them to work at a higher level in a matter of months.
That's what all this AI is doing. The shit we could never get the time to get around to doing.
That "more expensive" is someone's revenue. Maybe AI is the kind of technology that makes it easier to grow revenue by making things more expensive and worse than by making them better and cheaper.
Another project I'm seeing in the same realm is taking an approved protocol and some study results and checking that the records of what was done match what they said they could do in the approved protocol. It can also make sure that surgical records have all the things they should have. This can help meet one of the requirements from the national accreditation organization to do "post approval monitoring".
Another way I've used it is to have it collate and compare a particular kind of policy across many institutions who transparently put their policies online. Seeing the commonality between the policies and where some excel helped me rewrite our policy.
This is work that just wasn't happening before or, more accurately, it was being spread over lots of people, and any improvement in efficiency or consistency is hard to measure.
Given the fact that both Altman and Amodei are pathological liars, there's absolutely no reason to believe that Anthropic has $30B ARR.
Can you explain how that’d work? What would the $30B figure be based on if they only have $100 in revenue?
I agree about the core motivation behind these deals, however I'm skeptical as to how "suddenly" we'll see substantial improvements. Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.
They're already over-subscribed and waiting for new data centers (and power plants) to come online. I suspect Anthropic will get a modest amount of new capacity right away with more added over coming quarters. These two deals don't change the total amount of AI compute available on planet Earth over the next 18 months. Anthropic parting with high-value equity has now made them the new highest bidder for an already over-bid resource. I suspect the net impact will be Amazon & Google pushing prices even higher on everyone else as they reallocate compute to their new top whale.
I doubt it was idle capacity. But for a chunk of equity in Anthropic I imagine they are willing to deprioritize other, possibly internal, uses. Certainly anything that's not contractually obligated could be on the chopping block.
AI is in such desperate need to adopt software-hardware co-development practices, it's infuriating watching the industry drag its feet about it. We are wasting so much electricity and absolutely wrecking the "free" market just because these companies are incentivized to work at an unsustainable breakneck speed in getting shit to market.
Is that not down to this? https://www.anthropic.com/engineering/april-23-postmortem
The software will only improve for so long before it hits a wall. The best models were just a proxy for early mainstream market adoption, keeping your head above the water … plus some useful marketing hype about longshots for developing something bigger than LLMs (“AGI”).
People who work in tech are biased to obsess about the technical side and short term uptime/performance outrage. Despite that being mostly just standard immature market issues.
Anthropic (all ex-OpenAI) knew the negatives of the deal, so they made a slightly better deal with AWS, not a full lock-in. They also grounded it in hardware from the start, i.e. being the flagship customer for Trainium and the flagship external customer for TPUs.
If anything you ought to expect them to be behind, since they took the position of making all the mistakes first so others (who already had the same or better tech) didn’t have to.
OpenAI was Anthropic. Anyone involved in actually developing GPT jumped ship when Altman performed his coup.
But all progress points to a commodification of foundation models--Google first named it as "we have no moat, neither does anyone else." So there must be some secondary play driving this, right? Hardware sales? Hedging for search ad revenue?
Still feels mispriced. I think asset inflation leaves too much money desperate for the Next Big Thing.
This is why SpaceX could be a dark horse in this race. Putting compute in space is expensive but so is building a data center in the US.
Although I doubt this will stop them if they think it’s advantageous…
US law here is nuanced. Good quick primer https://www.ftc.gov/advice-guidance/competition-guidance/gui...
Now, that’s a name I haven’t heard in a long time.
Couldn't this just be framed / spun as just using search data for training? I don't see it being bundled enough to run afoul of anti-trust.
Running at a loss long enough to kill the competition is basically the name of the game these days.
When Uber started, they were basically setting VC money on fire by selling rides at a loss to destroy the taxi market.
Buwahahahahahahahhahah
They drop a little cash on some shitcoin the president controls and those problems go away.
As a user and a consumer, I don't want them to have a moat. Moat means pseudo-monopoly. That is the exact opposite of what we want.
Only the investors and owners want a moat, to keep others out.
So what they're doing? They're competing. Good.
Because they are investors, VCs, or startup founders who hope to establish their own moats.
Users and consumers can get a lot of useful information from HN, but it's important to keep the local demographics in mind.
That's why it's interesting to try to pick apart where the moat is.
> In September 2025, Google is in talks with several "neoclouds," including Crusoe and CoreWeave, about deploying TPU in their datacenter. In November 2025, Meta is in talks with Google to deploy TPUs in its AI datacenters.
Also those personalities, quirks and choices accumulate. A lot of people talk about using Claude Code and Codex for different things. This is 100% my experience. Some people make better models, but on the top 3, there are often differences that are fixed only by switching between them. If I feel the need to switch between them, then there are significant enough differences and those differences will accumulate.
The integration of LLMs with tools and data via agent harnesses has created the opportunity for a real moat. As these products start differentiating, the moats will develop to be significant.
No YouTube competitor can rise because ad blocking is so pervasive and celebrated. People think (selfishly) that ad blocking is some niche thing, but the reality is that it's around 30% on average, and up to 70% if you have a tech-literate audience.
So we have paid competitors, like nebula and curiosity stream, but those are essentially dead because they have to compete with "free" YouTube.
The internet is, pretty predictably, totally unwilling to look in the mirror.
So from that point of view you can indeed look at it as the entire value of the economy should be invested into AI companies.
The question is when will we get there.
If the answer is tomorrow, money means nothing and none of these investments matter. If the answer is 30 years, well lots of money to be made up until the inflection point of machines being able to design, build, and repair themselves.
https://en.wikipedia.org/wiki/Panic_of_1873#Factors
"In the United States, the panic was known as the "Great Depression" until the events of 1929 and the early 1930s set a new standard.[2]"
What are you counting in this category?
My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.
How much of that 60K does Ford actually keep? And how much will it be once BYD is allowed in the US? The forecast for Ford is pretty much only downwards, the possible upside on AI is huge.
If every company in the F500 starts spending $2000+ on AI credits per employee, then every consumer product will indirectly be funding AI companies. I think it's already the case that companies small enough to avoid/skip getting O365 or Google Suite subscriptions will pay for AI first.
AI company revenues aren't driven by consumer subscriptions.
The people doing $20 or even $200 per month plans for their side projects aren't driving the demand. It's going to be business customers spending $1000/month or more per developer and all of the companies feeding their business processes through the API like call centers, document processing, and everything else.
If you're thinking of AI companies as consumer plays you're only seeing the tip of the iceberg. We get cheap access to Claude because they want us playing with it so when it comes time for our employers to choose something we can all lobby for Anthropic.
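As a rough back-of-envelope for that split, with every figure below an invented placeholder rather than a real number:

```python
# Back-of-envelope for "consumer subscriptions aren't the driver".
# Every figure here is an invented placeholder, not a real number.

consumer = 10_000_000 * 20 * 12            # 10M subscribers at $20/month
enterprise_seats = 2_000_000 * 1_000 * 12  # 2M devs at $1,000/month via employers
api_workloads = 5_000 * 2_000_000          # 5k companies at ~$2M/year on the API

total = consumer + enterprise_seats + api_workloads
print(f"consumer share of revenue: {consumer / total:.0%}")
```

Even with generous consumer numbers, the subscription line ends up a small single-digit share of the made-up total; the point is the shape of the split, not the specific values.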
How many businesses are paying Ford $10 million per annum?
The amount of new revenue that I am personally able to create for my clients, using Claude models for dev, and Claude models inside the insanely agile products delivered, is astounding.
If I was not currently experiencing this myself, and someone told me that this was possible, I would be calling them names.
If we get to an end-state of monopoly/duopoly at this game, then we are truly screwed.
I was just stating my current use and revenue path. Anthropic has insane velocity, in April of 2026.
Will you have the hardware to run them? Perhaps. Will enough of Anthropic's/OpenAI's large enterprise customers have the hardware to run them and the money/desire to have their own internal teams set up and maintain them?
I think Deepseek is already there.
The math is pretty simple: it's easy to justify paying the price even if it goes up 10-fold, because compared to hiring more people it's still cheap.
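A minimal sketch of that math, with purely illustrative numbers (both the plan price and the loaded cost of a hire are assumptions):

```python
# "Even at 10x the price it's still cheap": illustrative numbers only.

seat_per_year = 200 * 12   # a $200/month AI plan
loaded_hire = 180_000      # assumed fully loaded annual cost of one hire

ratio_at_10x = seat_per_year * 10 / loaded_hire
print(f"10x-priced seat vs one hire: {ratio_at_10x:.0%}")
```

Even after a 10x price hike, the seat costs a small fraction of one additional hire under these assumptions.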
So I guess having multiple players and competition in the market is the key?
100% agree. I have been trying to tell everyone to build their ideas, and exploit this environment where 100B of VC money into OpenAI/Anthropic = some percentage of money invested into your idea. This is the golden era of building! The music is gonna stop soon. Build now ffs!
Chinese models like Deepseek v4 are as good and 10 times cheaper. You can even run Deepseek locally. So no, cheap AI won't be over. Just the US investors won't be able to profit off of the artificial bubble that is there now but won't be in the future.
It is likely that 99% of the value created by Anthropic / OpenAI / friends will go to the end user. Which is great news.
It's like insane hype marketing speak. "insanely agile products delivered" like huh?
I believe that I am more of an AI realist. The agentic dev tools are really helping me out, but if I could wave a magic wand to make AI go away for a hundred years, I would do it.
I really hope that we can all laugh at how wrong I was.
However, I believe that the horrors will likely outweigh the benefits. Our global society/political systems are not ready for Stasi as a Service, mass unemployment, or any of this impending crap storm.
Who could call me a starry-eyed idealist? I have invested in bunkers.
To the GP: I'd like some details of these "insanely agile products". Is this insane agility reflected by your customers saying that they have a better, faster, more reliable product? How are you measuring this?
I get that it's tedious to sit on tech forums listening to an endless stream of people insisting that suchandsuch technology is world-changing. Many people and probably most people who say that are wrong. But sometimes the world really does change.
If you’re using it for personal work, why is $100 worth it?
You'll notice that all the really big deals have fallen through, because they're based on promises and meeting objectives that can't be met. So it's likely that there will be really big writeoffs but not a huge implosion like 2001/2008. The real losers will be the retail investors who put all their money in a handful of stocks at ridiculous valuations.
"Disney cancels $1B deal with OpenAI after video platform Sora is shut down: 'The future is human'" https://finance.yahoo.com/sectors/technology/articles/disney...
And if I recall correctly, the AI datacenter deal isn't doing Oracle stock any favours.
We need to run a SotA coding agent basically 24/7 uninterrupted and so far we didn’t find an easy solution for this (you can get provisioned TPUs for Gemini on GCP but it costs a fortune).
Surely that’s possible for under $5k a month? $10k?
Why should anyone feed the SV AI bubble if they can just use cheap Chinese models, even locally if they want to...
Or, more controversially, say the EU green deal, which decimated the EU car industry and has lost/will lose us a few million jobs. Losses up to a trillion and nothing to show for it.
Was this meaningful for humanity? Keeping 20% of the population in bondage because we didn't want to upset the productivity gains of slavers in the south? All progress IS progress after all right?
Government welfare programs have done more to decrease poverty than anything else in the history of human existence. Also one doesn't have to go back in time 200+ years either to see the massive failures of neoliberal economics. Who thinks their lives are better because they have slightly faster phone while they continue to not afford healthcare, can't educate themselves, provide for children, or own homes.
You think 50% of the population seeing their lives materially decrease is meaningful? Good grief, do you honestly care more about trinkets than children? Actually don't answer that for the sake of your soul.
This money could be invested in universal healthcare, or into AI research for medicine. But hey, I guess replacing developers and generating slop is more beneficial to our society.
Anthropic is the anchor external customer for TPUs, and Nvidia is worth more than all of Google. If TPUs actually break out as a viable alternative for multiple clients over the next few years, the business could easily be worth as much as search, maybe more.
Why haven't they broken out yet, I wonder, if they're more efficient for inference and LLM costs are now weighted towards inference over training?
But as far as I know it currently supports just that + TensorFlow (which nobody uses anymore, at least here). And last we tried, so many of our kernels needed rework that it's not worth the effort.
This may change since ironwood but we haven’t tried that generation.
Microsoft is in the same boat with Azure.
If only Apple could pass the favor forward. But no, they can't be bothered to invest even a single million in Asahi Linux to benefit their own hardware.
The tech is great but valuations are out of control. It's cheaper to keep valuations high through these circular financing deals, rather than to allow for any deflation.
Especially in those days Microsoft was both a platform for software to run on, and a maker of software, and being flexible to emphasize one or the other aspect depending on the way the market is... has been good for them.
What's the explanation behind this? I am sure they use AI in their ad network (matching web sites with ad offerings, maybe generating ads automatically), but is there more to it?
Still rooting for AMD to catch up too, especially if they can continue improving their software stack. They seem to be moving in the right direction.. though, they could benefit from speeding up a bit more.
Google now has its fingers in all the pies.. is successfully fully vertically integrated and now expanding horizontally.
And it may very well be bad news for OpenAI.
including the option to acquire Anthropic.
Not possible anymore unless Anthropic collapses and goes on a multi-year decline. They're worth $1 trillion in the private market. If they IPO today, I'm willing to bet my house that the hype will drive them to a $2 trillion market cap, or 50% of Google's market cap.
OpenAI and Anthropic will be the biggest IPOs ever - bigger than SpaceX. That's my prediction.
I have feeling that Dario is not the type of man who would want to be acquired and then have Google's CEO telling him what to do.
If milquetoast Sundar ever told anyone what to do, we would have heard about it by now.
OpenAI crashing would be good news and bad news for Anthropic investors.
The drama on HN alone would last for days. Twitter would implode in on itself.
They are the forces of capitalism embodied.
"All that is solid melts into air"
This makes 19th century mechanization look like the paleolithic, but it's the same fundamental force.
The money does leave the room — it gets converted into physical infrastructure that's increasingly rationed. Capital is not the constraint anymore.
Dominion Energy can't serve new data center load in Loudoun County until 2028. NERC flagged grid emergency risk across most of CONUS. Phoenix is in active water rationing while hosting 47 hyperscale facilities. Helium supply (45% Qatar, used in EUV cooling) took real hits in 2026.
Even if Google funds Anthropic, who funds the new transformers, the grid interconnect queue position, the water permit reviews? That part of the cycle isn't circular — it's hitting physics.
Capital is fungible. Substations aren't. It's why eating at your own restaurant is cheaper, but still not free.
It's more understandable for Amazon or Microsoft to make such an investment, because they're not as competitive in the model space.
Google buys Anthropic.
Microsoft buys Open AI (or vice versa depending on how things go).
SpaceGrok buys Cursor, limps along in 3rd place.
Meta is the last man standing, gets stuck with Oracle, dies.
And then hopefully some open source models save us from this nightmare before China commoditises everything. Edit: I forgot Amazon. Who knows what they will do. They're the wildcard anyway.
Anything to invigorate the desktop.
Microsoft buying OpenAI.. 10 minutes later it's rebranded Copilot.. and.. nothing much changes in the world. Oh, except all the AI improvements are around Enterprise governance.
Why the euphemism? What Anthropic did was an aggressive degradation of their model to save compute, and it's not just “perceived downtrend”, Anthropic themselves have acknowledged the quality of service degradation.
Great position to be in if you're Amazon and Google
I assume Anthropic said something like "We'll give you 3% of our company for $30B, since we're valued at $1T now! So cheap!", and Google immediately came back with "Hell no. We'll give you even more, $40B... but it's for 11% of the company. Take it or leave it." With all the issues they're having, what leverage does Anthropic have at that point?
Basically, Google made them an offer they couldn't refuse.
(If Anthropic didn't exist, OpenAI would suck up all the capital and talent in the room. Anthropic's existence has helped divide capital+talent that'd otherwise be gobbled up by the single fastest-growing player.)
~ TK
For example, you can buy Air France-KLM for less than $3B.
It is a profitable business that does $30B in sales and $1B in profit (and has been profitable for the past 4-5 years).
[PDF] https://www.airfranceklm.com/sites/default/files/2026-02/202...
This margin seems terrible.
That said, certain sectors like software (as in custom enterprise-grade software dev) pull margins that are much, much higher, sitting around 35%, but it's not that common.
Text-only, no CAPTCHA, no Javascript, no DDoS on blogger, no geo-blocking
# article URL; Bloomberg serves the text inside embedded JSON
x=https://www.bloomberg.com/news/articles/2026-04-24/google-plans-to-invest-up-to-40-billion-in-anthropic
# fetch the page (URL passed to curl as a config line on stdin), keep only
# paragraph markers and text values from the JSON, drop "Read More:" promos,
# strip the JSON key prefix, and prepend minimal HTML meta tags
echo url=$x|curl -K/dev/stdin \
|egrep -o "(type\":\"paragraph)|(type\":\"text\",\"value\":\"[^\"]+)" \
|sed '/Read More:/{N;d;};s/type\":\"paragraph/<p>/;s/.\{22\}//' \
|sed '1s/^/<meta charset=utf-8><meta name=viewport content=width=device-width>/' > 1.htm
firefox ./1.htm

I don't know what to make of it
"Attention Is All You Need" was a very very different thing and I also wonder if they are glad they published it. But I imagine if they hadn't, the motivation for researchers to leave Google would have been even larger.
Jeff Dean is asked this question by Geoffrey Hinton at 37:35 - might worth watching. Overall an interesting video.
I have an unusual set of metrics for evaluating AI. I am old and comfortably retired but I still like to experiment with AI tools for updating many of my old (or ancient) open source projects and creating new projects. I am blown away by how good my dedicated Hermes Agent setup on a VPS and also running Google AntiGravity with Claude and Gemini are. Both systems are unbelievably good.
I can only imagine how effective companies with a solid engineering process will be as appropriate roles for human and AI developers solidify. I can also imagine companies with a poor process and poor engineering taste will waste a lot of money.
If you were in AI for such a long time, of course you are biased and want to see it succeed.
Look at what has been written in open source since 2023. Very little. There are no efficiency gains and the incessant talk about prompts and AI just paralyzes the entire field. And people who love to talk have the ears of the managers.
If it runs out of cash, then it's bad for the whole industry.
Same as OpenAI. So all players will provide cash & compute to keep them going.
Why? I don’t think we would suffer if anthropic disappeared tomorrow
Didn't Amazon AWS do the same recently?
And with cashback through gcp usage!
How much of this goes back to Google as cloud spend?
This is insane. On the secondary market the valuation is 2-3x that. What gives?
Google's deal from prior rounds likely lets them buy in at the same valuation other investors get every round, so they're just getting the February valuation.
Amazon did almost the same thing last week, at the same valuation.
If you gave Anthropic $10B cash they couldn't get chips at scale in the 0-6 month timeframe. Anthropic is suffering reputational damage due to choices they have to make around capacity constraints.
Google, AWS, and Azure are the only people who can help them so they hold the cards, thus the good terms.
It is not uncommon to keep a round open for a bit after the formal announcement so that a few investors who could not close for whatever reason can be part of it. It can be hard to line up everyone at the same time, especially when they are public companies.
---
Specific to your point on why the valuation can be lower than the market's at the same time: goods (and stocks), while they feel homogeneous, divisible, and fungible, are not. Size has a value of its own.
A block of 10% of the shares may be worth more (or less) than the unit share price implies, because the shares being available together has a property of its own, making the block either more desirable when someone wants to acquire, or harder to sell because there is not enough demand if all of them get dumped at the same time [1].
In this deal's terms, just because a few tens of millions are trading at an $850B valuation, or some investors can put in say $1-2B, doesn't mean you can raise $40B at the same valuation.
There isn't depth in the market to raise $65B (including the AMZN deal) at an $850B valuation. There is always some demand at any price point on the demand-supply curve; you will probably find a few people who will buy a few shares at $10T, or $100T, or some ridiculous number, but that doesn't mean you can raise a large round at that.
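That depth argument can be sketched with a toy linear demand curve; the curve and all numbers are invented, only the mechanism is the point:

```python
# Toy linear demand curve: price falls as more stock is sold, so a huge
# raise clears at a lower average valuation than a small one. The curve
# and all numbers are invented; only the mechanism is the point.

def average_valuation(raise_bn, top=850.0, depth=2000.0):
    """Average $B valuation achieved selling `raise_bn` $B of equity into a
    demand curve that starts at `top` and declines linearly, hitting zero
    once `depth` $B has been absorbed."""
    # marginal price p(q) = top * (1 - q/depth); average it over [0, raise_bn]
    return top * (1 - raise_bn / (2 * depth))

print(average_valuation(2))   # a small raise clears near the headline number
print(average_valuation(65))  # a $65B raise clears meaningfully below it
```

The shape of the curve is a guess; the takeaway is only that block size and achievable valuation trade off against each other.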
Strictly speaking it is not even $850B per se, i.e. Google and AWS benefit from this as vendors. It is very much like vendor financing with convertible debt. Meaning it is worth that much to them, but not to you and me, because we are not getting some of the money back as sales that boost our own stock.
---
[1] In the same vein, price can also depend on what you are getting in return, hard immediate dollars is the highest value. However if you are getting shares in return, you can usually negotiate a premium depending on risk of the shares you are getting.
The recent SpaceX-Cursor deal is a good example: any founder would likely take, say, a $10B all-cash offer over the $60B from SpaceX, or the price would be closer to cash if it were GOOG, AMZN, or AAPL shares instead - a proven, deeply liquid market, etc.
Correct. But I think $5 to 10bn are sitting ready at a $700 to 800B valuation, which strongly implies Google is getting a solid deal on this.
Not sure if it’s going to be good enough to replace IDEs with neatly integrated superior models.
Who are you quoting?
Google may reckon they can't (yet) reconcile their vision of Gemini with the raw coding performance of Claude and Codex.
> My main job isn't writing code but I try to keep Claude Code and OpenCode busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities
I’ve seen many people say this in the past few weeks, i.e. that their daily job is no longer coding and has flipped to being a full-time Claude Feeder, making sure it's always churning.
As someone who uses Claude Code daily, I still find myself reading code and thinking more vs just shoveling coal as fast as I can into the Claude steam train. Am I doing things wrong?
It's just amazing that people talk about Anthropic and have never used it.
Nah, see Meta
I am still upset at these companies for driving up RAM prices. The "free market" has evident problems; a few companies are far too dominant here. The average Joe suffers from this price mafia, assuming he or she needs to purchase RAM now.
There have been far too many "plans" and "commitments" and an awful lot of nothing actually happening.
I don’t think that’s the ultimate cause of the turnaround in fortunes. But it strikes me, at least from the investor and potentially urban-consumer perspectives, as a pivotal moment in both companies’ fortunes.
Anthropic's recent rise has little to nothing to do with retail subscribers; it is Claude Code with Opus 4.5+, followed by their Mythos stunt.
I would say the flood of $20 Claude subscribers from the news cycle backfired on them: now everyone is getting worse outputs, and it exposed their compute shortage, which they can't fix anytime soon.
Pretty much everyone I know has both cc and codex now, just because of how unreliable cc has become.
This is a good hypothesis. I suspect we are both correct.
The PR boost from Anthropic standing its ground drove signups. That, in turn, drove investors. But the users also drove utilization, which degraded quality across the board.
My hypothesis rests on Anthropic’s user mix having shifted significantly toward consumers (versus enterprise) after the mix-up. Whenever we get public numbers, it would be interesting to test that.
I think it was psychological to a degree. For many consumers, OpenAI, or at least ChatGPT, simply was AI. The controversy was enough to introduce folks to competitors in the AI space, and suddenly OpenAI's success felt a lot less inevitable.
I agree with OP though that this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point.
This is true. OpenAI WAS the story of AI, now it is just 50% of it, at max. Losing the monopoly of imagination towards AGI is bad for them.
One thing I don't agree with, though: consumers aren't the important part of AI; they are a liability.
AI is too expensive for consumers to pay for. Instead, they will compete with enterprise for the same tokens, with less money.
This is my suspicion. Consumers hadn’t previously heard of Anthropic and Claude. Now they had, particularly in cities.
> this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point
Also agree. Hence why I said “I don’t think” the fight is “the ultimate cause.”
"Stunt", eh?
Sure. Neither OpenAI nor Anthropic does. Amazon and Google have followed institutional investors in bidding up Anthropic over OpenAI in private markets, all of which, I suspect, followed user-pattern shifts after the fiasco. (Well, fiascos. Altman is a host unto himself.)
lol, he's barely done anything, but sometimes that's all that's necessary when a bozo opponent is hell-bent on screwing things up. He didn't get fired the first time for no reason.
A former chess instructor told me most games are won not by brilliant maneuvers, but by not screwing up. Repeatedly making the boring play is a winning strategy far more often than any mastermind play.
Opposite of what you said. The "dig" wasn't retrenching into more use; rather, I evaluated what I saw them doing and have migrated our company to much better options.
Individually, yes. But Anthropic surging in private markets the weekend after the supply-chain risk designation, and raising from not only Google but also Amazon in such short order (following credible reports of it turning down cheques at an $800+ billion valuation from financial investors), all while OpenAI gets pilloried in the press and struggles to hold its $800B valuation in private markets, collectively paints a bigger picture to me.
I always wondered why Anthropic was not out there feverishly scrambling to procure compute like the other big players. While Altman was being laughed at as a "podcasting bro asking for trillions in investment" Dario was on Dwarkesh expounding on how tricky it is to predict the demand for capacity. Now Dario has to give equity to a competitor to get compute. (OpenAI does this too, of course, but I suspect the terms are much better.)
At this point, it's pretty clear that compute is the only moat in this business. Even as an outsider, the extreme demand curves and compute crunch were painfully obvious, so this seems like a serious strategic error on Dario's part.
It’s concerning that the only thing that seems to be keeping the AI bubble inflated at this point is money from the folks selling things to AI companies. That’s very much not a good sign no matter how you spin it.
I’m a fan of AI, and there’s clearly value to it… however, that value seems completely out of whack with the money pouring into the ecosystem, and at some point such irrational behavior breaks.