It is not; it's a terrible comparison. Qwen, DeepSeek, and other Chinese models are known for their 10x or even better efficiency compared to Anthropic's.
That's why the difference between OpenRouter prices and those official providers' prices isn't that large. Plus, who knows what OpenRouter providers do in terms of quantization. They may be getting 100x better efficiency, hence the competitive price.
That being said, not all users max out their plan, so it's not like each user costs Anthropic $5,000. The hemorrhage would be so brutal they would be out of business in months.
Opus isn't that expensive to host. Look at Amazon Bedrock's t/s numbers for Opus 4.5 vs the Chinese models. They're around the same order of magnitude, which means that Opus has roughly the same number of active params as the Chinese models.
Also, you can select BF16 or Q8 providers on OpenRouter.
They do have different infrastructure/electricity costs, and they might not run on Nvidia hardware.
It's not just the models.
If Opus were 10x larger than the Chinese models, then Google Vertex/Amazon Bedrock would serve it 10x slower than DeepSeek/Kimi/etc.
That's not the case. They're in the same order of magnitude of speed.
Their goal (similar to Uber, DoorDash, Robinhood, etc.) is to get mass adoption. Their business models only work at that kind of scale.
It's completely impossible to have consumers pay $20-60/mo and be a profitable business without mass adoption, where some use it far less than others... and, perhaps more importantly, where the masses put pressure on their employers to pay for their tooling. This is why pricing does not need to come down.
Quite literally I have engineers spending over $1,000/mo on Opus. That's the goal.
The quantisation is shown on the provider section.
I find it a good comparison because it is a good baseline since we have zero insider knowledge of Anthropic. They give me an idea that a certain size of a model has a certain cost associated.
I don't buy the 10x efficiency thing: they are just lagging behind the performance of current SOTA models. They perform much worse than the current models while also costing much less - exactly what I would expect. Current Qwen models perform about as well as Sonnet 3, I think. Two years from now, when Chinese models catch up with enough distillation attacks, they'll be as good as Sonnet 4.6 and still be profitable.
Define "much worse".
+--------------------------------------+-------------+-----------+------------------+
| Benchmark | Claude Opus | DeepSeek | DeepSeek vs Opus |
+--------------------------------------+-------------+-----------+------------------+
| SWE-Bench Verified (coding) | 80.9% | 73.1% | ~90% |
| MMLU (knowledge) | ~91 | ~88.5 | ~97% |
| GPQA (hard science reasoning) | ~79–80 | ~75–76 | ~95% |
| MATH-500 (math reasoning) | ~78 | ~90 | ~115% |
+--------------------------------------+-------------+-----------+------------------+

I find it really funny that anyone can call it this with a straight face when all the American models are based on heaps of illegally pirated books and TOS-breaking website scraping in the first place.
These are not cell phone plans that the average Joe signs up for; they are plans purchased with the explicit goal of software development.
I would guess that 99 out of every 100 plans are purchased with the explicit goal of maxing them out.
When I have a feeling that these tools will speed me up, I use them.
My client pays for a couple of these tools in an enterprise deal, and I suspect most of us on the team work like that.
If my goal was to max out every tool my client pays, I’d be working 24hrs a day and see no sunlight ever.
I guess it’s like the all you can eat buffet. Everybody eats a lot, but if you eat so much that you throw up and get sick, you are special.
Why? Because in my experience, the bottleneck is in shareholders approving new features, not my ability to dish out code.
If I hit the limit, usually I'm not using it well and just hunting around. If I'm using it right, I'm basically gassed out trying to max out the limit.
This sloppy Forbes article has polluted the epistemic environment, because now there's a source to point to as "evidence."
So yes this post author's estimation isn't perfect but it is far more rigorous than the original Forbes article which doesn't appear to even understand the difference between Anthropic's API costs and its compute costs.
The only thing these companies sell are tokens. That's their entire output. OpenAI is trying to build an ad business but it must be quite small still relative to selling tokens because I've not yet seen a single ad on ChatGPT. It's not like these firms have a huge side business selling Claude-themed baseball caps.
That means the cost of "inference" is all their costs combined. You can't just arbitrarily slice out anything inconvenient and say that's not a part of the cost of generating tokens. The research and training needed to create the models, the salaries of the people who do that, the salaries of the people who build all the serving infrastructure, the loss leader hardcore users - all of it is a part of the cost of generating each token served.
Some people look at the very different prices for serving open-weights models and say: see, inference in general is cheap. But those costs are distorted by companies trying to buy mindshare by giving models away for free, and on top of that, both of the top labs keep claiming the Chinese are distilling them like crazy, including using many tactics to evade blocks! So apparently the cost of a model like DeepSeek is still partly subsidized by OpenAI and Anthropic against their will. The cost of those tokens is higher than what's being charged; it's just being shifted onto someone else's books. Nice whilst it lasts, but this situation has been seen many times in the past, and eventually people get tired of having costs externalized onto them.
For as long as firms are losing money whilst only selling tokens, that means those tokens are selling at a loss. To not sell tokens at a loss the companies would have to be profitable.
When people say “selling at a loss” they mean negative unit economics. No one ever means this much more expansive definition you’ve invented.
But we talk about inference separately for a reason: largely inference cost is the scaling cost. Once you have a model the margin on your inference is how you get to profitability, as long as your margin is positive you can make the entire enterprise profitable by just selling more tokens. This is the same fundamental business that chip fabs work on. Yes it costs them a lot to get to the next node, but what's important is the margin they can get on the wafers they sell, because they sell tonnes of wafers.
It's pretty core to the concept of SaaS businesses that yes, you do consider all costs. But you want to focus on the margin of the bit that scales. This is why WeWork exploded: the thing they were scaling only scaled at negative margin.
The point is that if their inference margin is positive, they can "just" scale up and become profitable. If their inference margin is negative, then scaling up the business actually causes problems.
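To make that concrete, here's a toy unit-economics sketch; the fixed-cost and margin figures are invented purely for illustration:

```python
# Toy unit-economics model. All numbers are invented for illustration.
def breakeven_volume_m_tokens(fixed_costs: float, margin_per_m_tokens: float) -> float:
    """Millions of tokens needed per period to cover fixed costs.

    Positive margin: a finite break-even volume exists, so scaling helps.
    Negative margin: no volume works; scaling only deepens the loss.
    """
    if margin_per_m_tokens <= 0:
        return float("inf")
    return fixed_costs / margin_per_m_tokens

# e.g. $500M/year of R&D, salaries, etc. at an assumed $5 margin per M tokens:
print(f"{breakeven_volume_m_tokens(500e6, 5.0):,.0f}M tokens/year to break even")
# -> 100,000,000M tokens/year (100T tokens); with margin <= 0 it's infinite
```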
- Amortized training costs.
- SG&A.
- Capex depreciation.
All of the above impact profitability over various time horizons and have to be rolled into present and projected P&L and cash flow analysis.
You're right that all the other costs are critical to measuring the profitability of the business, but for such a young industry, that's the unknown. Does training get cheaper? Do we hit a theoretical limit on training? Are there further optimizations to be had?
You don't make large capex investments in an industrial business and then, in year one, argue that the business is doomed because you're selling the product above marginal cost but have not yet recouped the costs that were capitalized.
There's quite a lot of evidence. No proof, I'd agree, but then there's no absolute proof I'm aware of to the contrary either, so I don't know where you're getting this from.
The two pieces of evidence I'm aware of are that 1) Anthropic doesn't want their subsidised plans being used outside of CC, which would imply that the money they're making off them isn't enough, and 2) last time I checked, API spending is capped at $5,000 a month.
Like I say, neither of these is proof; you can come up with reasonable arguments against them, but once again, the same could be said for evidence to the contrary.
I don't think this logically follows. An unlimited buffet doesn't let you resell all of the food out the backdoor. At some level of usage any fixed price plan becomes unprofitable.
I agree the 5k cap is interesting as evidence although as you said I suspect there are other reasons for it.
As for evidence against it: The Information reported that OpenAI and Anthropic are 30%+ gross margins for the last few years. Sam Altman and Dario have both claimed inference is profitable in various scattered interviews. Other experts seem to generally agree too. A quick search found a tweet from former PyTorch team member Horace He: https://x.com/typedfemale/status/1961197802169798775 and a response to it in agreement from Anish Tondwalkar former researcher at OpenAI and Google Brain.
Claude Code use-cases also differ somewhat from general API use, where the former is engineered for high cache utilization. We know from overall API costs (both Anthropic and OpenRouter) that cached inputs cost an order of magnitude less than uncached inputs, but OpenCode/pi/OpenClaw don't necessarily have the same kind of aggressive cache-use optimizations.
Vertically integrated stacks might also be able to have a first layer of globally shared KV cache for the system prompts, if the preamble is not user specific and changes rarely.
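To put rough numbers on how much cache hit rate moves the blended input cost, a small sketch; the prices are placeholders that just follow the ~10x cached/uncached gap visible in public list pricing:

```python
# Blended $/M input tokens as a function of cache hit rate.
# Placeholder prices following the ~10x cached/uncached gap.
UNCACHED = 5.00  # $/M uncached input tokens (assumed)
CACHED = 0.50    # $/M cache-read tokens (assumed)

for hit in (0.0, 0.5, 0.9, 1.0):
    blended = hit * CACHED + (1 - hit) * UNCACHED
    print(f"hit rate {hit:4.0%}: ${blended:.2f}/M input tokens")
# 0% -> $5.00, 50% -> $2.75, 90% -> $0.95, 100% -> $0.50
```

A Claude Code session that keeps replaying a long, stable transcript lives near the top of that range; a generic API client with no cache-friendly prompt structure lives near the bottom.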
> 2) last time I checked, API spending is capped at $5000 a month
Per https://platform.claude.com/docs/en/api/rate-limits, that seems to only be true for general credit-funded accounts. If you contact Anthropic's sales team and set up monthly invoicing, there's evidently no fixed spending limit.
I think it’s fairly obvious that Anthropic is lighting cash on fire and focusing on whether or not they’re losing money per token on inference is missing the forest for the trees.
Tokens become less valuable when the models aren’t continuously trained and we have zero idea what Anthropic is paying for training.
We don't have clear evidence either way, but it leans heavily toward API pricing at least covering inference cost. Models these days have less and less differentiation, and for API use there must be some thought given to competing on cost, but it's not going to be winner-take-all. They leapfrog each other with each new model.
We have a way of determining if Anthropic is, or has the capability of being profitable, and what the levers to that may be. AI may be world-changing, but the accounting principles behind AI labs are no different than those behind a Pizza Hut.
Even if the cost of "inference + serving" is lower than what a token sells for, the relevant question is the depreciation schedule of the cost of training. I.e., if I spend $1 on training, how long do I have before I have to spend $1 again?
Almost certainly, any reasonable depreciation schedule of the cost of training will result in leading labs being presently wildly unprofitable. So the question is:
What can be done to make training depreciate more slowly? Perhaps users can be persuaded to stick around using non-frontier models for longer, although then there's a shift in the competitive landscape.
If users cannot be persuaded (forced?) to use legacy models, then the entire business model is thrown into question, because there's no reason why training frontier models would ever get cheaper: even if it gets cheaper on the margin, surely that will result in more compute used to generate an even "better" model, resulting in more spend in the aggregate.
This doesn't mean that the AI industry is "doomed". A couple of things could happen, and this is where the frontier labs should be focusing their attention:
1. They could find a way to climb up the value chain and capture more of the consumer surplus.
2. There could be a paradigm shift in compute architecture/compute cost.
3. We could reach a limit of marginal utility, shifting consumption to legacy models, thereby lengthening the depreciation/utility of training.
Edit: My assertion of "Almost certainly, any reasonable depreciation schedule of the cost of training will result in leading labs being presently wildly unprofitable." is made with no real information, just a gut feeling, and should not be taken seriously.
However, the GAAP P&L tells the opposite story. You book $200M revenue in the same year you spend $1B training the next model, so you report an $800M loss. Next year you book $2B against $10B in training spend, reporting an $8B loss. The business looks like it's dying when every individual model generation actually generates a healthy profit.
That's actually Dario's answer to your depreciation question. If each cohort earns back its training cost within its natural lifespan (however short that lifespan is), the depreciation schedule is already baked in. The model doesn't need to live forever, it just needs to return more than it cost before the next one replaces it. Whether that's actually happening at Anthropic is a different question, and one we can't answer without audited financials, but it's the claim Dario makes (and seems entirely reasonable from a distance).
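A minimal sketch of those two views side by side, using the same illustrative figures as above ($200M then $2B of revenue against $1B then $10B of training spend):

```python
# Cash-basis view: each year's revenue is netted against the *next*
# model's training bill, so the reported P&L shows widening losses.
years = [(1, 200e6, 1e9), (2, 2e9, 10e9)]  # (year, revenue, training spend)
for year, revenue, spend in years:
    print(f"year {year}: reported {revenue - spend:+,.0f}")
# year 1: reported -800,000,000
# year 2: reported -8,000,000,000

# Cohort view: match each model's lifetime revenue to the spend that
# created it. Model 1 cost $1B (spent in year 1) and earned $2B (year 2):
print(f"model 1 cohort: {2e9 - 1e9:+,.0f}")  # +1,000,000,000
```

Same numbers, opposite story; which one matters depends on whether revenue per cohort keeps outpacing training-cost growth.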
So what happens in year 10, when Anthropic spends $10B on training and it only returns $8B? They're cooked.
And I admit that I made that assertion from my gut without actually knowing if it's true or not.
Every single time a company comes around and says, "Actually, GAAP is wrong, look at my new math that says we're good," it's led to much wailing and gnashing of teeth down the line when it inevitably isn't.
To the GAAP point: $200M or $1B or $10B is not a loss but cash converted into an asset. It won't affect the bottom line at all, unless the company re-evaluates the asset and says it's now worth $1M instead of $200M. That would hit the bottom line.
Yes, this is exactly why OpenAI and Anthropic are hyping AGI. If LLMs ever become good enough to replace workers, the first sign will be frontier model companies launching competitor businesses. It doesn't make sense to sell the formula for gold when you can just use it yourself.
> There could be a paradigm shift in compute architecture/compute cost.
Possible, but no signs of this on the horizon. If it does happen, it's impossible to predict when it will.
> We could reach a limit of marginal utility, shifting consumption to legacy models, thereby lengthening the depreciation/utility of training.
I'm not sure market dynamics will allow this any time soon. We seem to have already achieved a marginal utility equilibrium in terms of model size, so training new models on trending use-cases (e.g. synthetic data targeting tool calls, agentic workflows, computer use, etc) is really the driving force behind product differentiation. Nobody wants to admit "training new models isn't profitable" because that deflates the AGI singularity narrative that all this investment hinges on.
Maybe not? This is an argument that has to be made using numbers. We can't do the estimate without the numbers.
Crazy that people can write sentences like this with a straight face these days.
This is what the elites of the gilded age called "ruinous competition", and the solution today will be the same as back then: monopoly power. This has been the business plan of the tech VC industry for 25+ years.
The models don't learn without training, and they have finite context windows. As software updates around the world, don't they have to be trained on the new information to stay up to date?
model            completions        read      write    cached_read  cache_write
claude-opus-4-6       11,000  16,900,000  5,840,000  1,312,000,000   66,120,000
Ask Opus to figure out how much it would cost. Lol.
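For what it's worth, here's that back-of-envelope calculation. The column interpretation ("read"/"write" as uncached input/output) and the prices are my assumptions, following the common pattern of cache reads at ~10% and cache writes at ~125% of the input rate; actual list prices for this model may differ:

```python
# Usage from the table above; the mapping of columns to billing
# categories is an assumption, as are all prices.
usage = {
    "input": 16_900_000,          # "read"
    "output": 5_840_000,          # "write"
    "cache_read": 1_312_000_000,
    "cache_write": 66_120_000,
}
price_per_m = {  # assumed $/M-token list prices, Opus-class pattern
    "input": 5.00,
    "output": 25.00,
    "cache_read": 0.50,    # ~10% of input
    "cache_write": 6.25,   # ~125% of input
}

total = sum(usage[k] / 1e6 * price_per_m[k] for k in usage)
print(f"API list-price equivalent: ${total:,.0f}")
# ~$85 input + ~$146 output + ~$656 cache reads + ~$413 cache writes ≈ $1,300
```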
So getting Claude Code subscriptions for developers should be permissible and not be against anything. However, if you created a REST endpoint to, e.g., run a preconfigured prompt as part of your platform, that'd be against it.
But I'm neither a lawyer nor do I work for Anthropic.
Claude Code has a Teams plan which includes Max tiers. Why would it be forbidden?
Could you quote the relevant part that you think forbids it for us?
This is the relevant quote from the original article.
Anthropic's models may be similar in parameter size to models on OpenRouter, but none of the others are in the headlines nearly as much (especially recently), so the comparison is extremely flawed.
The argument in this article is like comparing the cost of a Rolex to a random brand of mechanical watch based on gear count.
Are Anthropic currently unable to sell subscriptions because they don’t have capacity?
Absolutely! I'm currently paying $170 to Google to use Opus in Antigravity without limits in full agent mode, because I tried Anthropic's $20 subscription and busted my limit within a single prompt. I'm not going to pay them $200 only to find out I hit the limit after 20 or even 50 prompts.
And after 2 more months my price is going to double to over $300, and I still have no intention of even trying the 20x Max plan if it's really just 20x more prompts than Pro.
I think it's the other way around? Sparse use of GPU farms should be the more expensive thing. Full saturation means that we can exploit batching effects throughout.
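A toy version of the batching argument: the GPU's hourly cost is shared across every concurrent stream, so per-token cost collapses as the batch fills. Numbers are invented, and the model ignores the point where latency or memory limits kick in:

```python
GPU_DOLLARS_PER_HOUR = 3.0  # assumed rental cost
TPS_PER_STREAM = 40         # per-user decode speed, assumed roughly constant
                            # while serving stays memory-bandwidth-bound

for batch in (1, 8, 64):
    tokens_per_hour = batch * TPS_PER_STREAM * 3600
    cost_per_m = GPU_DOLLARS_PER_HOUR / tokens_per_hour * 1e6
    print(f"batch {batch:>2}: ${cost_per_m:.2f}/M tokens")
# batch  1: $20.83/M; batch  8: $2.60/M; batch 64: $0.33/M
```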
If you own equity in Anthropic you should care about that cost. Maybe you are willing to tolerate it to win market share, but for you to make the most profit you need that cost to shrink.
The entertainment industry. They still tell you about how much money they're leaving on the table because people pirate stuff.
What would happen in reality for entertainment is people would "consume" far less "content".
And what would happen in reality for Anthropic is people would start asking themselves if the unpredictability is worth the price. Or at best switch to pay as you go and use the API far less.
The only thing that matters is whether the users would have paid $5,000 if they didn't have the option to buy a subscription. And I highly doubt they would have.
I mean... Rolex is an overpriced brand whose cost to consumers is mainly just marketing. Its production cost is nowhere close to the selling price, and looking at gears is a fair way of evaluating that.
When has production cost had anything to do with selling price?
I'm sure Anthropic is making money off the API but I highly doubt it's 90% profit margins.
Unlikely. Amazon Bedrock serves Opus at 120tokens/sec.
If you want to estimate the actual price to serve Opus, a good rough estimate is to take the max of the DeepSeek, Qwen, Kimi, and GLM prices and multiply it by 2-3. That would be a pretty close guess at the actual inference cost for Opus.
It's impossible for Opus to have something like 10x the active params of the Chinese models. My guess is around 50-100B active params and 800-1600B total params. I could be off by a factor of ~2, but I know I'm not off by a factor of 10.
The trillions-of-parameters claim is about pretraining.
It's most efficient in pretraining to train the biggest models possible; you get a sample-efficiency increase with each parameter increase.
However, those models end up very sparse and incredibly distillable.
And it's way too expensive and slow to serve models that size, so they are distilled down a lot.
Since then, inference pricing for new models has come down a lot, despite increasing pressure to be profitable. Opus 4.6 costs 1/3 of what Opus 4.0 (and 3.5) cost, and GPT 5.4 costs 1/4 of what o1 cost. You could take that as an indication that inference costs have also come down by at least that degree.
My guess would have been that current frontier models like Opus are in the realm of 1T params with 32B active
42 tps for Claude Opus 4.6 https://openrouter.ai/anthropic/claude-opus-4.6
143 tps for GLM 4.7 (32B active parameters) https://openrouter.ai/z-ai/glm-4.7
70 tps for Llama 3.3 70B (dense model) https://openrouter.ai/meta-llama/llama-3.3-70b-instruct
For GLM 4.7, that makes 143 * 32B = 4576B parameters per second, and for Llama 3.3, we get 70 * 70B = 4900B, which makes sense since denser models are easier to optimize. As a lower bound, we get 4576B / 42 ≈ 109B active parameters for Opus 4.6. (This assumes all three models use the same number of bits per parameter and run on the same hardware.)

Of course, intense sparsification via MoE (and other techniques ;) ) lets total model size largely decouple from inference speed and cost (within the limits of world size via NVLink/TPU torus caps).
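Here's that estimate as a snippet, under the same assumptions (same precision, same hardware, throughput inversely proportional to active parameters). Averaging both calibration points gives ~113B rather than the 109B lower bound from GLM alone:

```python
# Calibrate "tokens/sec x active params" on models with known sizes,
# then invert for Opus. All the caveats stated above apply.
calibration = [("GLM 4.7", 143, 32e9), ("Llama 3.3 70B", 70, 70e9)]
k = sum(tps * params for _, tps, params in calibration) / len(calibration)

opus_tps = 42
print(f"implied Opus 4.6 active params ~ {k / opus_tps / 1e9:.0f}B")  # ~113B
```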
So the real mystery, as always, is the actual parameter count of the activated head(s). You can do various speed benchmarks and TPS tracking across likely hardware fleets, and while an exact number is hard to compute, let me tell you, it is not 17B or anywhere in that particular OOM :)
Comparing Opus 4.6 or GPT 5.4 Thinking or Gemini 3.1 Pro to any sort of Chinese model (on cost) is just totally disingenuous when China does NOT have Vera Rubin NVL72 GPUs or Ironwood V7 TPUs in any meaningful capacity and is forced to target 8-GPU Blackwell systems (and worse!) for deployment.
That said, for inference, the margins for OpenAI were estimated at 70% [1] [2], and the margins for Anthropic were estimated between 90% and 40% [3] [4], last year. They will not be profitable for years.
[1] https://phemex.com/news/article/openais-ai-profit-margin-cli...
[2] https://www.saastr.com/have-ai-gross-margins-really-turned-t...
[3] https://www.theinformation.com/articles/anthropic-projects-7...
[4] https://www.investing.com/news/stock-market-news/anthropic-t...
Profit implies a GAAP accrual of some sort. On any accrual schedule tied to reality, the companies are profitable now - that is, inference margin on each given model has more than paid for capital costs of training and deploying those models.
That the companies get to show a loss is a feature of cash-basis accounting: they made $100m net on that last model? Good news, we're spending $1b on the next! Infinite tax losses!
The companies will not be cashflow positive for years. Why does this persnickety difference matter? It matters to me because I care about the engineers here - and they seem collectively likely to either short every AI company IPOing, or quietly ignore AI's impact on their livelihood, or head off into a corner and go catatonic - all based on a worldview that "this is collective insanity and everything here is going to eventually go bankrupt". None of those are good outcomes. Shorting might be, but it should be done judiciously and with an understanding of the financial factors at play. So, long plea over - but allow me to plead: say "cashflow positive" if you want to make the point you were making.
Alibaba is the primary comparison point made by the author, but it's a completely unsuitable comparison. Alibaba is closer to AWS than to Anthropic in terms of its business model. They make money selling infrastructure, not inference. It's entirely possible they see inference as a loss leader and are willing to offer it at cost or below to drive people onto the platform.
We also have absolutely no idea if it's anywhere near comparable to Opus 4.6. The author is guessing.
So the article's primary argument is based on a comparison to a company with an entirely different business model, running a model that the author is just making wild guesses about.
If you remove the cached-token cost from the pricing, the overall API-equivalent usage drops from around $5,000 to $800 (or $200 per week) on the $200 Max subscription. Still 4x cheaper than the API, but not losing money either - if I had to guess, it's break-even, as the compute is most likely going idle otherwise.
The gamble with caching is to hold a KV cache in the hope that the user will (a) submit a prompt that can use it and (b) that will get routed to the right server which (c) won't be so busy at the time it can't handle the request. KV caches aren't small so if you lose that bet you've lost money (basically, the opportunity cost of using that RAM for something else).
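To make "KV caches aren't small" concrete, a rough sizing sketch; the architecture numbers are placeholders (nobody outside Anthropic knows the real config):

```python
# KV cache per token = 2 (K and V) x layers x kv_heads x head_dim x bytes.
layers, kv_heads, head_dim, dtype_bytes = 80, 8, 128, 2  # placeholder config, bf16

per_token = 2 * layers * kv_heads * head_dim * dtype_bytes  # 327,680 bytes
session_tokens = 200_000
print(f"{per_token / 1024:.0f} KiB/token -> "
      f"{per_token * session_tokens / 2**30:.0f} GiB per 200k-token session")
# -> 320 KiB/token, ~61 GiB of accelerator RAM held in the hope that
#    this particular user sends a follow-up prompt soon
```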
I'm incredibly salty about this - they're essentially monetizing intensely something that allows them to sell their inference at premium prices to more users - without any caching, they'd have much less capacity available.
Why would it go idle? It would go to their next best use. At least they could help with model training or let their researchers run experiments etc.
Training currently requires Nvidia's latest and greatest for the best models (they also use Google TPUs now, which are also technically the latest and greatest? However, those are more dual-purpose than anything AFAIK, so that would be a correct assessment in that case).
Inference can run on a hot potato if you really put your mind to it
Maybe the common factor here is not having deep/sufficient knowledge on the topic being discussed? For the article I mentioned, I feel like I was less focused on the strength of the writing and more on just understanding the content.
LLMs are very capable at simplifying concepts and meeting the reader at their level. Personally, I subscribe to the philosophy of - "if you couldn't be bothered to write it, I shouldn't bother to read it".
Popular content is popular because it is above the threshold for average detection.
In a better world, platforms would empower defenders, by granting skilled human noticers flagging priority, and by adopting basic classifiers like Pangram.
Unfortunately, mainstream platforms have thus far not demonstrated strong interest in banning AI slop. This site in particular has actually taken moderation actions to unflag AI slop on certain occasions...
And APIs are the on-demand-service equivalent.
Priority goes to APIs, and leftover compute is used by subscription plans.
When there is no capacity, subscriptions are routed to highly quantized, cheaper models behind the scenes.
Selling subscriptions makes it cheaper to run such inference at scale; otherwise, much of your capacity just sits there idle.
Also, these subscriptions help you train your model further on predictable workflows (because the model creators also control the client, like Qwen Code, Claude Code, Antigravity, etc...).
This is probably why they will ban you for violating the TOS if you use their subscription service with other tools.
They aren't just selling a subscription; the subscription also helps them become better at the thing they are selling, which for coding models like Qwen and Claude is coding.
I've used qwen code, codex and claude.
Codex is 2x better than Qwen code and Claude is 2x better than Codex.
So I'd hope Claude Opus is at least 4-5x more expensive to run than the flagship Qwen Code model hosted by Alibaba.
This hasn't been true in a long time.
In fact I'm more and more inclined to run my own benchmarks from now on, because I seriously distrust those I see online.
Even if the benchmarks are indeed valid, they just don't reflect my use cases, usages and ability to navigate my projects and my dependencies.
Maybe that's just CLAUDE.md and memory causing the difference of course.
As a matter of preference however I like the way Claude Code works just a lot better, instructing it to work with parallel subagents in work trees etc. just matches the way I think these things should work I guess.
1. It would be nice to define terms like RSI or at least link to a definition.
2. I found the graph difficult to read. It's a computer font that is made to look hand-drawn and it's a bit low resolution. With some googling I'm guessing the words in parentheses are the clouds the model is running on. You could make that a bit more clear.
[1] https://www.wheresyoured.at/anthropic-is-bleeding-out/ [2] https://www.wheresyoured.at/costs/
> this company is wilfully burning 200% to 3000% of each Pro or Max customer that interacts with Claude Code
There is of course this meme that "Anthropic would be profitable today if they stopped training new models and only focused on inference", but people on HN are smart enough to understand that this is not realistic due to model drift, and also due to competition from other models. So training is forever a part of the cost of doing business, until we have some fundamental changes in the underlying technology.
I can only interpret Ed Zitron as saying "the cost of doing business is 200% to 3000% of the price users are paying for their subscriptions", which sounds extremely plausible to me.
Like, I wish it was as simple as "if it wasn't viable, they wouldn't be in business," but alas, that argument is kinda the more naive one in this world. Right?
Or is there some intuition about energy/cost here that all the doom posters miss, that you could tell us about?
Please, anything, my company is dying.
> My LinkedIn and Twitter feeds are full of screenshots from the recent Forbes article on Cursor claiming that Anthropic's $200/month Claude Code Max plan can consume $5,000 in compute.
So the article's title is obviously sensationalized.
I thought there was no moat in AI? Even being 10x costlier, Anthropic still doesn't have enough compute to meet demand.
Those "AI has no moat" opinions are going to be so wrong so soon.
So no, Claude would not be getting NEARLY as much usage as it's currently getting if it weren't for the $100/$200 monthly subscription. You're comparing Kimi to the price that most people aren't paying.
So this turns into a death march.
If you are behind, the only thing you can do is make massive capital investments to catch up. Once you're ahead, you can sell tokens until someone else catches up. And, breaking from the normal model of places like chip fabrication, your billions of investment may only keep you ahead for 2 months. So you have a tiny window to sell those tokens.
…You could take the efficiency improvement rates from previous model releases (from x -> y) and assume they have already made similar "improvements" internally. This is likely closer to what their real costs are.
If they never go public, there's our answer as well.
Cursor seems to be in a tough spot. Just heard the swix podcast on their big new cloud agents thing, and it’s looking like a pretty small moat these days.
The more interesting question is where the margins go as inference costs keep dropping. At some point the pricing pressure flows to users.
$200 worth of actual computation is an awful lot of computation.
Yeah, I tried gasTown. Not using it extensively.
API inference access is naturally a lot more costly to provide than the Chat UI and Claude Code, as there is a lot more load to handle with lower latency. In the products, they can just smooth over load curves by handling some requests more slowly (which the majority of users in a background Code session won't even notice).
Now, the consensus of the commentards on this website, who don't have access to any of Anthropic's financial data, is that the monthly subscriptions are a money loser!
so either the leading AI company's business dev team is wrong or the Jacker News comment section is wrong, it is a mystery
I wonder if a better proxy would be comparing by capability level rather than size. The cost to go from "good" to "frontier" is probably exponential, not linear - so estimating Anthropic's real cost from what it takes to serve Qwen 397B seems off.
People in the comments assume that Anthropic's model is 10 times bigger than the Chinese models, so the calculated cost is 10 times more.
But from the perspective of Big O notation, only a few algorithms give you O(N). The majority of highly optimized things provide O(N log N).
So what is the big O for any open model for a single request?
However, I think it's fair to say the cost is roughly linear in the number of users other than that.
There may be some aspects which are not quite linear when you see multiple users submitting similar queries... but I don't think this would be significant.
As for LLMs, there is probably some constant cost added once the model can fit on a single GPU, but it should be almost linear.
In the real world...
Where I work, AI is used heavily, and we are already tipping into cost-management mode at a firm level. Users are being aggressively steered to cheaper models, usage is throttled, and cost attribution reports are sent out. This is already being done at the under-$1k/mo-per-user cost level. So there are already some indications of revenue per user leveling out.
Meanwhile, everyone I know who works anywhere near a computer has had AI shoved down their throat, with training, usage KPIs, annual goal setting, and mandated engagement. So we are already pretty saturated; it's not like there are giant new frontiers of new users.
Which is probably a lot more correct than other claims. However, it's also true that anybody who has to use the API might pay that much, creating a real cost-per-token moat for Anthropic's Claude Code vs other models as long as they are so far ahead in terms of productivity.
> Qwen 3.5 397B-A17B is a good comparison point. It's a large MoE model, broadly comparable in architecture size to what Opus 4.6 is likely to be.
I stopped reading here. Frontier models have been rumoured to be in the TRILLIONS of parameters since the days of GPT-4. Besides, with agents, I think they're using more specialized models under the hood for certain tasks like exploration and web searches.
So while their cost won’t be $5000 or anywhere close, I still think it would be in the hundreds for heavy users. They may very well be losing money to the top 5-10% MAX users. Their real margin likely comes from business API customers.
Here's an interesting bit: OpenAI filed a document with the SEC recently that gave us a peek into its finances. The cost of all infrastructure stood at just ~30% of all revenue generated. That is a phenomenal improvement. I fell off my chair when I first learned that.
> Anthropic is looking at approximately $500 in real compute cost for the heaviest users.
But $5 that I amortize over 7 years might end up being $1.70, maybe, if I don't rapidly combust (supply chain risk).
Everyone else pays them at API prices
In my case, the access logs alone from bots scanning for vulns grew so large that the server started creaking.
Fortunately I wasn't running anything vulnerable!
It’s worth it, but I know they aren’t making money on me. But, of course I’m marketing them constantly so…
Aren't they losing money on the retail API pricing, too?
> ... comparisons to artificially low priced Chinese providers...
Yeah, no this article does not pass the sniff test.
No, they aren't, and probably neither is anyone else offering API pricing. And Anthropic's API margins may be higher than anyone else.
For example, DeepSeek released numbers showing that R1 was served at approximately "a cost profit margin of 545%" (meaning 82% of revenue is profit), see my comment https://news.ycombinator.com/item?id=46663852
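The parenthetical checks out if you read "cost profit margin of 545%" as revenue being 545% of cost:

```python
# revenue = 5.45 x cost  =>  profit share of revenue = 1 - cost/revenue
print(f"{1 - 1 / 5.45:.1%}")  # -> 81.7%, i.e. ~82% of revenue is profit
```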