> This format replaces average per-message estimates with a direct mapping between token usage and credits.
It's to replace the opaque, per-message calculation, not the subscription plan.
https://help.openai.com/en/articles/12642688-using-credits-f...
Well, I know why. I just wanted to be snarky. It's just that trying to hide the actual price is getting a bit old. Just tell me that generating this much code will cost me $10.
- Not everyone uses dollars.
- The price of credits in some currency could change after you bought them.
- The price of credits could be different for different customers (commercial, educational, partners, etc)
- They can ban trading of credits or let them expire
> The price of credits in some currency could change after you bought them.
> The price of credits could be different for different customers (commercial, educational, partners, etc)
Maybe I'm missing something, but doesn't every other compute provider manage that without introducing their own token currency? Convert to the user's currency at the end of the month, when the invoice comes in. On the pricing page, have a table that lists different prices for different customers. I fail to see how tokens make it clearer. Compare:
"This action costs 1 token, and 1 token = $0.03 for educational in the US, or 0.05€ for commercial in the EU"
"This action costs $0.03 for educational in the US, or 0.05€ for commercial in the EU"
> They can ban trading of credits or let them expire
That sounds extremely user-hostile to me
in any case, better than what anthropic does
> user-hostile
credits do expire (I thought they always do?), apparently it's not really up to them: https://news.ycombinator.com/item?id=46230848
Even for a single standalone LLM that's the case, and the 'agentic' layers thrown on top just make that problem exponentially worse.
One would need to switch away from LLMs entirely to fix this problem.
And now it feels like they are gamifying the compute we use for work, for all the same reasons.
If you have some left over that you can’t spend, it feels like you’ve “wasted” them.
The answer is so that they can charge different prices per credit. If you buy low amounts, they can charge one price. If you buy in bulk, they can offer a discount. The usage is the same, but they can differentiate price per usage to give people more a favorable price if they are better customers.
Is there anything wrong with that?
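The tiering described above can be sketched as a simple step function; every number here is hypothetical, not any provider's actual rate card:

```python
def price_per_credit(quantity: int) -> float:
    # Hypothetical volume tiers -- not any provider's real prices.
    if quantity >= 10_000:
        return 0.008  # bulk discount
    if quantity >= 1_000:
        return 0.009  # mid-tier discount
    return 0.010      # base price for small purchases

# Same usage either way; the effective price differs by purchase size:
print(price_per_credit(500))     # 0.01
print(price_per_credit(20_000))  # 0.008
```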
That's not true.
First of all, there's no dollar amount tied to how many credits you get for a subscription.
Second, if you look at the prices for bundles of _extra_ credits and then do some math on the Codex rate card, you'll see that there's no way they would work out to be the same or similar.
I don't understand what you mean here; their official comms is:
Customers on existing Plus, Pro and Enterprise/Edu plans should continue to use the legacy rate card. We’ll migrate you to the new rates in the upcoming weeks.
To me, anyway, that means that GP was exactly right - they'll give the $20 subscriptions $20 worth of credits, and the $200 subscriptions $200 worth of credits. That is what the "New Rates" are!
I think it would be more rational to discount a subscription (standard is about 10% in most industries) vs PAYG, and I agree in principle with your assertion - they haven't specified what the discount is on credits bought in a subscription plan - but there is no indication that they are going to continue allowing thousands of dollars of credits on a $200/m plan.
My guess would be a 10% (or similar) discount if you buy a subscription.
So, 1.3ish million tokens for Codex? Following the token limit from here https://openai.com/api/pricing/
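Rough back-of-envelope for where a figure like that could come from (the plan price and per-token rate below are assumptions for illustration, not OpenAI's published numbers):

```python
# All numbers are illustrative assumptions, not official rates.
plan_dollars = 20.0            # assumed monthly plan price
dollars_per_m_output = 15.0    # assumed $ per 1M output tokens on the rate card
tokens = plan_dollars / dollars_per_m_output * 1_000_000
print(f"{tokens:,.0f} tokens")  # ~1.3M
```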
And I just subscribed for a year's worth of Claude... Terrible timing I guess. Do you know if the open models are viable?
Now I'm going to have to find the new best deal.
It's very variable. Recently I'm noticing it's more reliable, but there was a patch where it was nearly unusable some days.
I guess I won't complain for the price and YMMV.
I need to try the command line version.
Is there any other?
Unfortunately gemini as a coding agent is a steaming useless pile. They have no right selling it, cheap open weight Chinese models are better at this point.
It's not stupid, it's just incompetent at tool use and makes bad mistakes. It constantly gets itself into weird dysfunctional loops when doing basic things like editing files.
I'm not sure what GOOG employees are using internally, but I hope they're not being saddled with Gemini 3.1. It's miles behind.
There are a few complaints online about the same thing happening to multiple users.
Otherwise anti-gravity has been great.
In the last month they have all clamped down quite heavily. I used to be able to deep-dive into a subject, or fix a small Python project, multiple times per day on the free Web UIs.
Claude, this morning, modified a small Python project for me and that single act exhausted all my free usage for the day. In the past I could do multiple projects per day without issue.
Same with ChatGPT. Gemini at least doesn't go full on "You can use this again at 11:00 AM", but it does fall back to a model that works very poorly.
Grok and Mistral I don't really use that much, but Grok's coding isn't that bad. The problem is that it is not such a good application for deep-diving a topic, because it will perform a web search before answering anything, making it take a long time.
Mistral tends to run out of steam very quickly in a conversation. Never tried code on it though.
I still review every line generated.
Gemini 3.1 pro on the web interface still works if my problems are scoped to a single module or two and my better model quotas are exhausted in the IDE.
For $7 over what I was already paying for storage, primarily using flash is still a good development experience for me.
My next step is going to be evaluating open and local models to see if they are sufficiently close to par with frontier models.
My hope is that the end of seat-based pricing comes with this tech cycle. I was looking for a document signing provider that doesn't charge a monthly fee; I only need a few docs a year.
If you have an M processor then I would recommend that you ditch Ollama because it performs slowly. We get double or triple tok/s using omlx or vmlx, respectively, but vmlx doesn't have extensive support for some models like gpt-oss.
you can tell gemma4 comes from gemini-3
Opencode was able to create the library as well. It just took about 2x longer.
The infrastructure build out just can't keep up with it.
Ultimately, we need to know the true cost of this technology to evaluate how effectively or ineffectively it can displace the workforce that existed before it.
With the hidden reasoning tokens and tool calls, I have no idea how many tokens I typically use per message. I would guess maybe a quarter of that, which would make the new pricing cheaper.
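A toy estimate of why the visible reply understates real usage; every count here is a guess for illustration, not a measured value:

```python
# Guessed token counts for a single agentic message -- illustrative only.
visible_output = 800      # tokens in the reply you actually see
hidden_reasoning = 2_400  # unseen reasoning tokens (guess)
tool_traffic = 1_000      # tool calls and their results (guess)

total = visible_output + hidden_reasoning + tool_traffic
visible_fraction = visible_output / total
print(f"visible share of billed tokens: {visible_fraction:.0%}")  # ~19%
```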
If I recall correctly, Ed Zitron noted in a recent article that one of the horsemen of his AI-pocalypse would be price hikes from providers.
At any rate, this observation is not unique to Ed; lots of people have reached the same conclusion that the math doesn't add up from a business profitability perspective.
It's why I started to pay attention to what he says.
Dude is a bit verbose, but his rationale is solid. If it gets the panties of some people here in a bunch, he may be on to something.
Did you mean instead "The articles are okay if overly wordy"?
[I'm an AI-doomer myself, but I am an AI-doomer because by and large this stuff increasingly works, not because it doesn't.]
That said, Ed Zitron still does a lot of useful research into the economics of the industry and I also believe that continued progress in AI can disrupt the world (for better and for worse) while the economics propping up all the frontier model providers can also implode spectacularly.
Some people talk about how AI doom comes about either way because it could take all of our jobs OR crash the economy when the current bubble bursts. But as an uber-AI-doomer I happen to think there is a very real possibility of a double downside (for the labor class, at least) where both of those things can happen at the same time!
Hot take, but really it's more of an observation than a take: We saw this exact response in Blockchain & crypto circles a few years ago. (Though HN wasn't quite as culturally "central" to those)
Economic Bubbles are subject to the Tinkerbell Effect. They exist so long as people exist in them, and collapse when either 1) They become so financially unsustainable as to collapse, having consumed all the money the economy could possibly give them, or 2) People stop believing in the bubble and stop feeding it money.
In this regard, the statement "NFTs are stupid" was not merely ridiculing those who bought them, but a direct attack on the bubble and those invested in it. And this is something the people involved in the bubble understand instinctively, even if they aren't consciously aware of it. (There's a psychological mechanism to that, but it's not relevant)
So consequently, they react aggressively to dissent. They seek to enforce their narrative, because not doing so is a threat to the bubble and their financial interests.
---
AI's not much different to that. It's clearly a bubble to everyone, including the AI execs, who are saying so out loud.
And people react aggressively to dissent like Ed's, because if the wider public stops believing in AI's future, the bubble bursts. They'll stop tolerating datacenter construction, they'll sell their Nvidia shares, they'll demand regulators restrict AI.
(And to those who can feel their aggression rising reading this comment. Hi, yes. I see you. If I were wrong, nothing I said would matter. You'd be wasting your time engaging with it, history would simply prove me wrong. But by all means, type up that reply or click that button.)
I wouldn't call it psychosis though. He's falling into a natural fallacy: expertise in one area doesn't lend itself to expertise in another.
As I see it, the only thing close to a moat is CC for Anthropic, and since it is a big ol' fucking mess that is a) apparently now beyond the ability of any current SOTA LLM to fix, and b) understood by absolutely no human, I'd say it's not much of a moat. The other agents will catch up sooner rather than later.
The other providers? I don't see a moat. We jump ship at the drop of a hat.
I'm still running local LLMs and finding perfectly acceptable code gen.
But let's not cry for the founders, they managed to get away with tons of money. The problem is for the fools holding the bag.
I pay for it, but I don't think it's worth much more than the 20 bucks a month I have been paying.
Once they start charging something that makes sense, I doubt it will be as good.
Gemini burned me too many times but maybe the situation has improved since.
I find it sad that some people are already at the point where "My only options are to leave it as spaghetti or pay for another LLM to fix it". Already their skills are atrophied.
Jesus, the spin on this message is making me dizzy.
They finally try to stop running at a loss, and you see that as "they've been so successful"?
Here's how I see it: they all ran out of money trying to build a moat, and now realise that they are commodity sellers. What sort of profit do you think they need to make per token at current usage (which is served at below cost)?
How are they going to get there when less-highly-capitalised providers are already getting popular?
I've also heard that we're near the end of the exponential.
That's the problem - these small businesses are writing code, models from last year are good enough for them, and as a small business they can easily shell out for hardware to self-host.
The minute businesses take up AI for their business processes, the will to buy each employee a subscription is going to go the way of the dodo.
I meant that I thought the exponential with the models is slowing down (AGI, etc). The application though for regular people will continue to go forward.
For home projects, I almost exclusively use the web chat interface to code. I haven't done anything large yet, so I iterate with the web chat: have it update the code, print it out, and then copy and paste it.
How does this differ in terms of pricing than Codex?
> This format replaces average per-message estimates for your plan with a direct mapping between token usage and credits. It is most useful when you want a clearer view of how input, cached input, and output affect credit consumption.
Qwen has also been improving recently - in fact most have - so depending on when you last tried them, you can try again and see how they work for you.
My local Qwen is decent for some things, Kimi is decent for most things, and occasionally it has been able to do better than Opus and GPT 5.4 on particular tasks.
it will soon be very costly to stay with just one provider
The idea, as far as I can tell from all the pro-AI developers, was that it will never explode, and the performance will continue increasing so the slop they write today doesn't need maintenance, because when that time comes around there will be smarter models that can clean it up.
If the providers are tightening the screws now (and they are all doing it at the same time), it tells me that either:
1. They are out of runway and need to run inference at a profit.
or
2. They think that this is as good as it is going to get, so the best time to tighten the screws is right now.
Just spitballing.
But Gemini's API-based usage also has a free tier, and if that doesn't work for you (they train on your data), then as long as you've never signed up before you get several hundred dollars in free credits that expire after 90 days. 3 months of free access is a pretty good deal.