Hi,
Starting April 4 at 12pm PT / 8pm BST, you’ll no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw. You can still use them with your Claude account, but they will require extra usage, a pay-as-you-go option billed separately from your subscription.
Your subscription still covers all Claude products, including Claude Code and Claude Cowork. To keep using third-party harnesses with your Claude login, turn on extra usage for your account. This will be enforced April 4 starting with OpenClaw, but this policy applies to all third-party harnesses and will be rolled out to more shortly (read more).
To make the transition easier, we’re offering a one-time credit for extra usage equal to your monthly subscription price. Redeem your credit by April 17. We’re also introducing discounts when you pre-purchase bundles of extra usage (up to 30%).
We’ve been working to manage demand across the board, but these tools put an outsized strain on our systems. Capacity is a resource we manage carefully and we need to prioritize our customers using our core products. You will receive another email from us tomorrow where you’ll have the ability to refund your subscription if you prefer.
In other words, this is about Anthropic subsidizing their own tools to keep people on their platform. OpenClaw is just a good cover story for that. You can maximize plans just as easily w/ /loop. I do it all the time on max 20x. The agent consuming those tokens is irrelevant.
For what it's worth I don't use OpenClaw and don't intend to, but I do use claude -p all the time.
If Anthropic miscalculated the amount of tokens, or simply pushed too hard to capture market share, that is a costly mistake because people in this market are very sensitive to price hikes.
They have to be honest about what they can offer for $200. Sure, most people don't max out their subscriptions, but when a subscription is this large they make the most of it, or they'll likely cancel. The typical subscription works well below capacity because it's cheap enough that the optionality may be worth it. $200 is not the typical subscription.
The whole point is that the users can have it doing shit for them instead of them having to babysit the computer.
The fact that users still have to sit there and argue with it erodes their value proposition - the proposition that you can pay fewer salaries.
As a senior engineer, you get an assistant that never gets tired and can do quite a lot on its own. For me, it's been an eye-opening experience. I used to have a collaborator called M who had good general knowledge but was not too smart. The calculation that goes through my mind every time I ask Claude for something is: how much would it cost, in time and effort, to get M to do that? M was a resource that cost many thousands of dollars per month, plus the time I spent correcting and directing, while Claude is actually smarter and does what it's asked with a degree of autonomy and common sense that M could never dream of.
The flipside of the coin is obvious: Anthropic will find a way to claw back - no pun intended - some of this value by raising the cost of subscription. They would be crazy not to.
Sucks to be pushed back to Claude Code with opaque system behavior and inconsistency. I bet many would rather pay more for stability than less for gambling on the model intelligence.
That sounds like their problem, not ours
But the interesting thing is, my actual token usage running agents is way less than people here seem to assume. Most of the time the agent is waiting for tools, reading files, thinking. The bursts are intense but short. I probably use fewer tokens per hour than someone doing a long manual coding session with lots of back and forth.
The real issue for me isn't cost, it's that they can just change the rules whenever. I had to drop everything today to verify my setup still works. That's the tax of building on someone else's platform, I guess.
Indeed. And this model breaks in several cases that overlap with the current AI business model:
- marginal cost of incremental usage is too high (Movie Pass)
- adverse selection (all you can eat monthly steak subscriptions)
- demand is synchronized (WeWork)
This is (almost) universally true of flat rate subscriptions; but there are usage-billed ones, too (and even those often have an aspect of subsidies).
A great example of the shakeup is when dial-up went from "connect, do the thing, disconnect" to "leave the computer online all the time" - they had to change the billing model because it wasn't built for continuous connections.
Customers have their own value calculations. If they can't use Claude for autonomous agents at a reasonable price, they will move to providers that are cheaper and more flexible. An autonomous agent adds way more utility than a marginally better LLM (assuming that's even true).
You don’t use more tokens than with Claude Code
My meal kit delivery service doesn't.
The Anthropic subs are likely priced at marginal cost (Amp's CEO recently said that in a podcast). It just doesn't serve Anthropic to be operating as the service layer for OpenClaw.
I don't think this is particularly about the financial impact of people using OpenClaw - they can adjust the amount of tokens in a subscription quite easily.
I think the root cause is that Anthropic is capacity constrained so is having to make choices about the customers they want to serve and have chosen people who use Claude Code above other segments.
We know Anthropic weren't as aggressive as OpenAI through 2025 in signing huge capacity deals with the hyperscalers and instead signed smaller deals with more neo-clouds, and we know some of the neo-clouds have had trouble delivering capacity as quickly as they promised.
We also know Claude Code usage is growing very fast - almost certainly faster since December 2025 than Anthropic predicted 12 months ago when they were doing 12-month capacity planning.
We know Anthropic has suffered from brown-outs in Claude availability.
Put this all together and a reasonable hypothesis is that Anthropic is choosing which customers to service rather than raising prices.
It's pretty clear that they do continually adjust the amount of tokens in a subscription (and at best they offer sort-of estimates of quotas). The same activity exhausts my session quota on one day, yet it's a minor contributor on another. They made this very explicit with the "2x" event over the past two weeks, but anyone who uses it knows this is basically an ongoing reality: if you stick to using it off hours, you generally enjoy a more liberal usage grant.
But if they just "adjust the amount of tokens in a subscription", they would be punishing everyone for the outliers. The average normal user has spurts of usage where occasionally they need more and then there are gaps where they use little.
Subscription services rely upon this behaviour, and the economics only work if they "oversell". That's why OpenClaw users want to sneak in under a subscription, because the tokens come at a discounted rate over using the API based upon that assumption, but they are breaking the model because those users aren't conforming to expectations. It's basically the tragedy of the commons and a small number of users want to piss in the well.
I think that's part of it, the other part is that OpenClaw is OpenAI IP now, and Anthropic want to allow users to ensloppify the internet through their own features now instead.
Dealing with Claude going into stupid mode 15 times a day, constant HTTP errors, etc. just isn't really worth it for all it does. I can't see myself justifying $200/mo. on any replacement tool either, the output just doesn't warrant it.
I think we all jumped on the AI mothership with our eyes closed and it's time to dial some nuance back into things. Most of the time I'm just using Opus as a bulk code autocomplete that really doesn't take much smarts comparatively speaking. But when I do lean on it for actual fiddly bug fixing or ideation, I'm regularly left disappointed and working by hand anyway. I'd prefer to set my expectations (and willingness to pay) a little lower just to get a consistent slightly dumb agent rather than an overpriced one that continually lets me down. I don't think that's a problem fixed by trying to swap in another heavily marketed cure-all like Gemini or Codex, it's solved by adjusting expectations.
In terms of pricing, $200 buys an absolute ton of GLM or Minimax, so much that I'd doubt my own usage is going to get anywhere close to $200 going by ccusage output. Minimax generating a single output stream at its max throughput 24/7 only comes to about $90/mo.
I must be missing something or supremely lucky because I feel like I’ve never hit these “stupid” moments.
If I do, it’s probably because I forgot to switch off of haiku for some tiny side thing I was doing before going back to planning.
2 weeks ago, I had only hit my limit a single time and that was when I had multiple agents doing codebase audits.
Oh no, there's plenty of us willing to say we told you so.
What's more interesting to me is what it's going to look like if big companies start removing "AI usage" from their performance metrics and cease compelling us to use it. More than anything else, that's been the dumbest thing to happen with this whole craze.
Dealing with these right now with ChatGPT. Bricked a thread which I didn’t even know was possible.
I’m kind of confused by these takes from HN readers. I could see LinkedIn bros getting reality checked when they finally discover that LLMs aren’t magic, but I’m confused about how a developer could go all-in on AI and not immediately realize the limitations of the output.
OpenClaw was still using Claude Code as the harness (via claude -p)[0]. I understand why Anthropic is doing this (and they’ve made it clear that building products around claude -p is disallowed) but I fear Conductor will be next.
[0]: See “Option B: Claude CLI as the message provider” here https://docs.openclaw.ai/providers/anthropic#option-b-claude...
Imagine not being able to connect services together or compose building-blocks to do what you want. This is absolute insanity that runs counter to decades of computing progress and interoperability (including Unix philosophy); and I'm saying this as someone who doesn't even care for using AI.
But OpenClaw is not a product. It's just a pile of open source code that the user happens to choose to run. It's the user electing to use the functionality provided to them in the manner they want to. There's nothing fundamental to distinguish the user from running claude -p inside OpenClaw from them running it inside their own script.
I've mostly defended Anthropic's position on people using the session ids or hidden OAuth tokens etc. But this is directly externally exposed functionality and they are telling the user certain types of uses are banned arbitrarily because they interfere with Anthropic's business.
This really harms the concept of it as a platform - how can I build anything on Claude if Anthropic can turn around and say they don't like it and ban me arbitrarily.
When they shut down OpenCode, I thought it was a lame move and was critical of them, but I could at least understand where they were coming from. This, though, is ridiculous. Claude's core tools are still being used in this case. Shelling out to it is no different from what a normal user would do themselves.
If this continues, I'll be taking my $200 subscription over to OpenAI.
When this happens I will have to look at other providers and downgrade my subscription. Conductor is just too powerful to give up. It’s the whole reason why I’m on a max plan.
EDIT: confused by downvotes. In this thread people are saying it runs on top of `claude -p` and others saying it's on pi.
The `claude -p` option is allowed per https://x.com/i/status/2040207998807908432 so I really don't understand how they're enforcing this.
Also, what's the point of `claude -p` if not integration with third-party code? (They have a whole Agent SDK which does the same thing... but I think that one requires per-token pricing.) I guess they regret supporting subscription auth on the -p flag.
I would like to point out something else. I have a Z.ai subscription, and they have a dashboard showing my usage.
When I tried out OpenClaw a while ago, I noticed something worrying. It was constantly consuming tokens, every single hour of the day. Over a period of 30 days I could see token usage climb and climb and climb, then shrink back to the bottom, as if OpenClaw had done a context-window compaction.
Note, this usage was happening even though I wasn't using it. It was always running and doing something in the background.
I believe it's their Heartbeat.md mechanism. By default it's set to run every half an hour. I changed it to twice a day, and that was enough for me.
I can imagine that if thousands of users were connecting their OpenClaw instances with the default config to Claude with the latest and greatest Opus model, that must have been felt.
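The heartbeat arithmetic is easy to sketch. This is a rough back-of-the-envelope helper, not OpenClaw's actual behavior; the tokens-per-beat figure is an invented assumption, since the real cost depends on how much context each heartbeat prompt drags in:

```python
def daily_heartbeat_tokens(interval_minutes: int, tokens_per_beat: int) -> int:
    """Tokens consumed per day by a periodic background heartbeat.

    tokens_per_beat is a made-up illustrative figure; the real number
    depends on the heartbeat prompt and attached context.
    """
    beats_per_day = (24 * 60) // interval_minutes
    return beats_per_day * tokens_per_beat

# Default 30-minute interval vs. twice a day (every 720 minutes),
# assuming a hypothetical 5,000 tokens per beat:
default_burn = daily_heartbeat_tokens(30, 5_000)    # 48 beats/day
reduced_burn = daily_heartbeat_tokens(720, 5_000)   # 2 beats/day
```

Under those assumptions, dropping from every half hour to twice a day is a 24x reduction in idle burn, which lines up with why the default config would be felt at scale.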
It's really that straightforward. If tomorrow they decide GPUs are better allocated to enterprise use, they could remove the $20 plan just as quickly, overnight - the same way they did tonight.
For a good existing example developed by a known company, check Cline Kanban: https://cline.bot/kanban
They don't have the MCP-bundling idea that I'm experimenting with, however.
I imagine how they treat these things will be contextual and maybe inconsistent. There aren't really hard lines between what they probably want editors that integrate with them to do and generic tools that try to sit a layer above the vendors' agent TUIs.
https://news.ycombinator.com/item?id=46936105 Billing can be bypassed using a combo of subagents with an agent definition
> "Even without hacks, Copilot is still a cheap way to use Claude models"
2026-01-16: https://github.blog/changelog/2026-01-16-github-copilot-now-...
https://github.com/features/copilot/plans $40/month for 1500 requests; $0.04/request after that
https://docs.github.com/en/copilot/concepts/billing/copilot-... Opus uses 3x requests
Mind you, I think GHCP is a great service at an excellent price, but the hardcore vibe coders complain about the rate limits that I've never personally experienced using the CLI.
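The quoted plan numbers can be turned into a small cost calculator. This is a sketch based only on the figures above ($40 base covering 1,500 requests, $0.04 per request after that, Opus counting as 3x requests); actual GitHub billing may differ in details like proration:

```python
def copilot_monthly_cost(requests: int, model_multiplier: int = 1,
                         included: int = 1500, base: float = 40.0,
                         overage: float = 0.04) -> float:
    """Estimated monthly bill under the quoted plan terms.

    model_multiplier reflects models that count as multiple requests
    (e.g. the 3x figure cited for Opus above).
    """
    billable = requests * model_multiplier
    extra = max(0, billable - included)
    return base + extra * overage
```

Under these assumptions, 2,000 plain requests cost $60, while just 1,000 Opus requests already bill as 3,000 and cost $100, which is where the rate-limit complaints likely come from.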
I think the usage patterns of a lot of harnesses are pushing against their planned capacity. I would say they can certainly explain themselves a lot better.
OpenClaw managed to burn 2.46 trillion tokens just in the last 30 days.
I'm not even gonna judge why someone needs an AI assistant running 24/7; the core issue is that coding plans are being ruined because these users aren't paying for the ridiculous amount of tokens they burn.
Anthropic is actually making the right decision: You want a LOT of tokens for your 24/7 agent? Ok, just use the API and pay for your tokens.
I enjoy paying for a sub that I actually use to code, and what we pay today is not even enough to cover the costs of running AI servers.
AKA when you fully use the capacity you paid for, that's too much!
Similarly, on a home internet connection you might pay for a given size of pipe, but most residential ISPs don't allow running publicly accessible servers on your connection because you'll typically use way more of the bandwidth.
You can pay for the capacity, using the per token price.
If you are not aware, ACP creates a persistent session for steering rather than using the models directly.
And you don't have to get anyone's permission to use tmux.
Is it infrastructure? Are they unable to control costs?
Everyone else is spending like money is water to try to get adoption. Claude has it and is dialing back utility so that its most passionate users will probably leave.
I don’t understand this move.
For SaaS, use the SaaS API. For product, use the product.
They subsidize the product with "don't care how much" pricing so they have users to build out features without users worrying about cost. If it's not actual users using the product, then features will be built in OpenClaw instead of Claude.
The earlier they draw this line, the better.
However, announcing it the day before it is effective is a huge unforced error, even if it were just a consequence of the TOS. They gain nothing by making people scramble.
Also, it would have been better to announce at the same time new ways to support plugging into Claude Code - something to encourage integration/cooperation. No fences unless the field inside is flowering.
Despite their power, frontier models are threatened by open-source equivalents. If AGI is not on the horizon and model performance is likely not going to be enough of a differentiator to keep the momentum going, the only other way is to go horizontal - enterprise solutions, proprietary coding agent harnesses, market capture, etc.
If AGI is in sight, none of these short-term games really matter. You just need to race ahead.
In that context, I don't understand the difference between a "third party harness" and a shell script.
How are they even detecting OpenClaw?
I switched OpenClaw to MiniMax 2.7. This combined with Claude over telegram does enough for me.
OpenClaw used to burn through all my Claude usage anyway.
1. Make a better product/alternative to OpenClaw and start eating their user base. They hold the advantage because the ones "using their servers too much" are already their clients, so they could reach out and keep trying to convert them. OpenClaw literally brought them customers at the door.
2. Screw everyone over royally and drive them off the platform - with a strong feeling of dislike or hatred towards Anthropic.
Let's see how 2 goes for them. This is not the space to be treating your clients this way.
Why hatred btw? They're not even banning accounts left and right like Google?
There's a good chance they do not have the infrastructure to do that.
Graceful handling from Anthropic
Less than 24 hours notice and on a holiday weekend
The API is there. It's straightforward and easy to use. But these users want to piss in the well, tragedy of the commons style.
When I do use AI, I already have a solid plan of what I need. Sometimes I ask it to look something up. I never do both in one prompt.
GLM 5.1 can do both, and it's way, way cheaper. I also don't hit my limit that fast (plus I get to use it in OpenCode).
I'm hoping that they won't bother you unless you specifically max out the subscription limits every time
Claude Code seems designed to terminate quickly: mine always finds excuses to declare victory prematurely on a task that should take hours.
The Anthropic casino wants you to keep gambling your tokens only on their machines (Claude Code), enticing you with promotional offers such as free spins, $20 bets, and more free tokens at the roulette wheels and slot machines.
But you cannot repurpose your subscription on other slot machines that are not owned by Anthropic and if you want it badly, they charge you more for those credits.
The house (Anthropic) always wins.
I'm hitting rate limits within 1:45 in the afternoons.
I can’t justify extra usage since it’s a variable cost, but I can justify a higher subscription tier.
My guess is a plan with double the limits would need to be 5-10x as expensive.
https://support.claude.com/en/articles/12429409-manage-extra...
Usage of such tools should be forbidden in companies - it's cheating, and it's using code you didn't even write. That's literally a crime.
Thanks openclaw for getting me ahead, I’ve taken that and am in Claude code again.
Anthropic's current business model is to sell access to their tools to subscribers at a loss. Users maxing out their $200/month plan can realistically cost Anthropic $500-600 in actual compute costs.
Anthropic is okay with this right now because they want to amass as many users as they can, and eventually hope that GPUs will increase in power and efficiency, and their LLMs will become more efficient as well. They can eventually profit off of their current pricing, or with modest price increases, if that comes to fruition.
But letting OpenClaw wake up every 30 minutes and start sending requests is a surefire way to max out your weekly limits, and that certainly isn't something Anthropic planned for.
I'm doing a side-by-side with GPT-5.4 for $20/mo and Sonnet for $20/mo and I can tell you that all my 5 hour tokens are eaten in 30 minutes with Claude. I still haven't used my tokens for OpenAI.
Code quality seems fine on both. Building an app in Go.
I think writing small documentation or small scripts would be a good use case for it, but for serious development work you hit the usage limits way too fast.
Only thing now is that the cheaper (worse) Chinese model coding plans have huge limits, so I lean on those now. They require a lot more hand-holding though.
If you haven't been paying attention, Anthropic burned a lot of their developer goodwill in the last two weeks, with some combination of bugs and rate limits.
But the writing is on the wall about how bad things are behind the scenes. The circa 2002 sentiment filter regex in their own tool should have been a major clue about where things stand.
The question everyone should be asking at this point is: is there an economic model that makes AI viable? The "bitter lesson" here is in AI's history: expert systems were amazing, but they could not be maintained at cost.
The next race is the scaling problem, and Google, with their memory-savings paper, has given a strong signal of what the next two years of research are going to be focused on: scaling.
If you started plugging tools into GPT-5.4, you may soon discover that you don't need anything beyond a single conversation loop with some light nesting. A lot of the OpenClaw approach seems to be about error handling, retry, resilience, and perspectives on LLM tool use from 4+ months ago. All of these ideas are nice, but it's a hell of a lot easier to just be right the first time if all you need is a source file updated or an email written. You can get done in 100 tokens what others can't seem to get done in millions of tokens. As we become more efficient, the economic urgency around token smuggling begins to dissipate.
Even the $20 subscription is ridiculously limited, and they keep adding more and more limits. The $200-a-month sub is insanely priced, still limited, and only going to get worse.
For example...
We recently moved a very expensive sonnet 4.6 agent to step-3.5-flash and it works surprisingly well. Obviously step-3.5-flash is nowhere near the raw performance of sonnet, but step works perfectly fine for this case.
Another personal observation is that we are most likely going to see a lot of micro coding agent architectures everywhere. We have several such cases. GPT and Claude are not needed if you focus the agent to work on specific parts of the code. I wrote something about this here: https://chatbotkit.com/reflections/the-rise-of-micro-coding-...
inb4 skill issue I could probably beat you coding by hand with you using Claude code
> Obviously step-3.5-flash is nowhere near the raw performance of sonnet
I feel like these two statements conflict with each other.
Like an API key on a subscription that could be used for 3rd party tools would count 2x towards usage when compared to the same model when used through Claude Code.
Or it’d count the same towards weekly or 5 hour limits across all models BUT would have a separate API keys under subscriptions limit that’d be more grounded. A bit like how they already have a separate Sonnet usage counter.
That’d both allow them not to go broke and also not lose so much community goodwill AND give subscription users an alternative to paying for their enterprise-oriented (overpriced) tokens.
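The weighted-counting proposal above can be sketched in a few lines. This is purely illustrative of the commenter's idea, not anything Anthropic offers; the client names and 2x weight are the hypothetical values from the comment:

```python
def weighted_debit(balance: float, tokens: int, client: str) -> float:
    """Hypothetical metering: the same tokens draw down the subscription
    faster when consumed through a third-party API key than through the
    first-party client. Weights are the illustrative values proposed above.
    """
    weights = {
        "claude_code": 1.0,      # first-party client, counts 1x
        "third_party_key": 2.0,  # subscription API key, counts 2x
    }
    return balance - tokens * weights[client]
```

So 100 tokens through Claude Code would leave 900 of a 1,000-token allowance, while the same 100 tokens through a third-party key would leave 800.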
But couldn't I use this in off times only?
It seems the April 4th automated sweep for third-party harnesses (OpenClaw/OpenCode) has a high false-positive rate for users on the official CLI. I don't use my Anthropic token with any unofficial tools. Has anyone else on the Max tier been flagged while using the official tools?
Extra usage is very sneaky: you don't get any notice that you're using it, and you could end up with unnecessary costs when you would have preferred to wait an hour or so.
During a recent subscription upgrade, the system started burning through my extra usage at an enormous visible clip.
So (ironically perhaps) I've turned off extra usage completely, turned off auto-reload, and (should current trends persist) will probably end up with the extra credit still on balance by the time I delete the account.
That said, I currently don't see a viable alternative to Anthropic for me - yet. I'm actively looking, and other options are improving rapidly.
What's the exact definition of third-party harnesses? They have an Agent SDK in Claude Code that can be used. Are they trying to say that only Anthropic products can use pro/max plans?
The problem Anthropic is running into is that OpenClaw made it easy for everyone to become one of those folks that washes their car three times a week or more.
I’m sure they were losing money on subscriptions in general but now they are really losing money. Shutting off OpenClaw specifically probably helps stem some of the bleeding.
https://focusoverfeatures.substack.com/p/claude-max-blocks-o...
UPDATE: per a reply on X from Thariq (@trq212), only flagged accounts are affected, but you can still claim the credit.
Why would they actively subsidize the ticking timebomb? When OpenClaw has an especially large security incident, Anthropic will probably be affected just for the association.
Like, right alongside this post on the front page, we have a post about a relatively serious privilege escalation vulnerability in OpenClaw.
So this change has actually forced a reckoning of sorts. Maybe the best option is to outsource the thinking to another model, and then send it back to Opus to package up.
Ironically this is how the non-agent works too to an extent.
Forgive me if someone asked this already and I can't find it in the comments.
`headers['X-Title']`
You can change that
The other simple method is to only accept certain system prompts
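Both checks can be sketched together. This is a guess at what server-side detection might look like, not Anthropic's actual implementation; the header name comes from the comment above, and the prompt allowlist (with a placeholder prompt) is an assumption:

```python
import hashlib

# Hypothetical allowlist: hashes of the system prompts the first-party
# client is known to send. The prompt text here is a placeholder.
FIRST_PARTY_PROMPT_HASHES = {
    hashlib.sha256(b"You are Claude Code ...").hexdigest(),
}

def looks_like_third_party(headers: dict, system_prompt: str) -> bool:
    """Flag a request as coming from a third-party harness."""
    # Header check: trivially spoofable, as the comment above notes.
    if "openclaw" in headers.get("X-Title", "").lower():
        return True
    # System-prompt allowlist: reject anything the first-party client
    # would not have sent.
    digest = hashlib.sha256(system_prompt.encode()).hexdigest()
    return digest not in FIRST_PARTY_PROMPT_HASHES
```

The asymmetry is the point of the thread: the header is a single line to change client-side, while the system-prompt check forces a harness to impersonate the official client's prompts exactly.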
I've been meaning to do some dumb little proxy system where all your i/o can pass through any specified system such as a web page, harness, whatever...
Essentially, a local model tool-calls out to an "Oracle", which is just something like a wrapper around Claude Code or anything else you've figured out how to scrape, and then you talk to the small model, which mostly uses the Oracle, and... there you go.
There's certainly i/o shuffling and latency but given model speeds and throughput it'll be relatively very small
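A minimal sketch of that Oracle idea, assuming the wrapped CLI is `claude -p` (swap in whatever agent CLI you actually have); the keyword dispatch stands in for a real local model's tool-calling decision, and the runner is injectable so the wiring works without the binary:

```python
import subprocess

def oracle(prompt: str, runner=None) -> str:
    """Forward a hard question to a stronger model behind a CLI.

    By default this shells out to `claude -p` (an assumption); `runner`
    is injectable so the plumbing can be exercised without the real tool.
    """
    if runner is None:
        def runner(p):
            return subprocess.run(
                ["claude", "-p", p],
                capture_output=True, text=True, check=True,
            ).stdout
    return runner(prompt).strip()

def local_agent(message: str, runner=None) -> str:
    """Stand-in for the small local model: a keyword check decides
    whether to answer directly or call out to the Oracle."""
    if message.startswith("ORACLE:"):
        return oracle(message[len("ORACLE:"):], runner=runner)
    return f"[local] {message}"
```

Everything the small model can handle stays local and cheap; only the escalated calls pay the Oracle's latency and token cost.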
Now people probably care
Doesn't mean I know how to market it, I'll certainly fail at that, but at least I can build it
We're all just getting too used to having great models for a fraction of the value they give us.
We are paying for a certain amount of token consumption
Why then, is this an outsized strain on your system Anthropic?
It's like buying gasoline from Shell, and then Shell's terms of services forcing you to use that gas in a Hummer that does 5 MPG, while everyone else wants to drive any other vehicle.
To use your analogy, if Shell sold you a subscription to fill up your Hummer up to 30 times a month, they wouldn't let you use that subscription to fill gas cans with a GMC logo taped to the side. They couldn't, without overcharging the people who just want to average out their cost of driving.
> We are paying for a certain amount of token consumption
I don't think you are. The specific arrangement you have is that you pay for a subscription to be used with Claude Code. It isn't access to tokens to do with as you please.
---
An analogy would be a refillable soda cup at a restaurant. They will let you refill it however many times you want, but only using the store-provided cup - you can't bring your own 2L Hydro Flask or whatever. You're paying not just for the liquid, but for the entire setup.
Real PMF sells itself. The risk is of course the competition catching up, I bet switching costs are very low on this setup.
I understand people from the US will have an anti-Chinese reaction, but for us in the "third world" who can use both techs, the openness is always good.
https://upload.wikimedia.org/wikipedia/commons/1/14/Bill_Gat...
Instead of not driving to work to save fuel, frugal companies are going to have their engineers work on weekends to save tokens.
So, to me it's a "we built it into our world, use ours".
Edit: FWIW I am an avid hater of all claw things; they're a security nightmare.
Btw even at insane markups $200/mo means GPUs break even pretty fast.
I use `claude -p` for a lot, if not most, of my coding workflows.
It's simply identical to how people use Claude Code locally.
Claude Code is subsidized because of data collection.
Our engineering team averages 1.5k per dev per month on credit costs, without busting Max limits today.
The lines drawn by their consumer vs. commercial TOS were clear, and I never subscribed because of it.
Now they can't keep up because they never built the infrastructure to actually power any of it
We have had the ability to automate browser activities for a long time—but, online service providers don’t want to be behind a layer of automation, which is why captchas were invented.
Automating things on the Internet has never been a technology obstacle, it has been a social one.
I don’t see how anything has changed!
In fact I recently received an updated ToS from eBay saying I am not allowed to use an AI agent to buy stuff on their site. Just a matter of time until others follow suit!
Edit: I misunderstood what was happening. Thanks to the comment below for clarifying.
Say goodbye to my $600/month, Anthropic.
Interestingly, it looks like I haven't received a non-receipt email from them since August 2025.
They become how you think, and then the company has you: hook, line, and sinker.
I also bundle a default agent with it, also forked from ZeroClaw, with the goal of being more or less prompt-injection proof and hopefully able to centralize some configuration and permissions for most or all of the agents it manages, though that part is a very rough sketch/plan at the moment. I'd love feedback and help on it from anyone interested. Two projects, clash and nono, caught my eye in this space; I think both leverage Linux Landlock, but I may also use landrun for similar control of other processes (like OpenClaw) that it may manage for the user. I'm still figuring out how and where to fit all the pieces together, and what's pragmatic, what's overkill, and what overlaps or duplicates across the various strategies and tools. Right now there are real bash wrappers that evaluate Starlark policies; I'm hoping to validate better end to end, but if you're interested, a few other users testing, validating, and/or contributing Claude tokens to the project could be invaluable at this stage. I plan to open source it ASAP, maybe tonight or tomorrow if there's interest and I have time to finish cleanup and the rename (I was calling it PolyClaw, but that gets confused with some weird Polymarket Claude skill, so now the router is going to be ZeroClawed and the agent will stay NonZeroClaw, in homage to ZeroClaw, which it's forked from... we may also integrate the new Claw Code port, which is also Rust, just for good measure/as a native coding agent in addition to the native claw agent).
Anyway, the main reason I mention it is that it already has a working ACP integration for any code agent, and I'm now working on using Claude Code's native channel integration to make it appear as a full-fledged channel of its own, as it more or less already does to OpenClaw, for anyone wanting to gradually migrate away from their existing OpenClaw installation towards Claude or some other agent. Email me or respond here if interested, or I'll try to post a link here once it's fully public/open source.
If it isn't obvious by now, this problem is only going to get worse. The only reason we have subscriptions still is because they're waiting to pull off the biggest bait and switch in history. Don't get sunk on this ecosystem, or you're in for a world of pain in the future. As has always been the case; competition and open-source are our only hope.
I've been calling for local LLMs as owning the means of production. I ain't wrong.
Public model inference quality is almost at SOTA levels, why would anyone pay these VC-subsidized companies even a cent? For a shitty chat interface? Give me a break.
Claude innovation will come from being open, not closed.
No, Anthropic, just because you added a clause that says "we can change these terms whenever" doesn't make it right. I'm paying you a set amount of money a month for a set amount of tokens (that's what limits are), and I should be able to use these tokens however I want.
Luckily, there are alternatives.
Anthropic not allowing Claude Code subscriptions to be used with other projects isn't "pulling the rug out"; you paid for a subscription to use Claude Code, and now you're using it for a different purpose and a different product.
If Tesla offered $10/month charging for your Tesla, and then a bunch of people turned around and used their Tesla Charge subscription to charge all sorts of different electric vehicles and battery packs, and also hooked up a crypto mining rig to it, would you be surprised if Tesla said "Nope, we're cutting this off. You can only use your Tesla Charge subscription for your Tesla vehicle"?
ChatGPT thought it was a great idea, said I could use Claude for planning, and gave me instructions on how best to hand off the building part. Claude told me it's a horrible idea.
Claude also burns through tokens much more liberally, e.g. reading entire irrelevant docs.
Openclaw is great for resolving this, since I have much more control over which work goes where, and it also gives a much better user experience without all the back and forth to understand what context it has (my use case is building things from my phone while I'm in senseless meetings at my day job).
Fully agree on the alternatives. In the end Claude's experience is worse, and it still makes bad decisions if you let it. Better to build a good workflow on a less capable model.
This one passes the Golden Rule test for me. I treat them as I would have them treat me which is that we both will work with whatever makes economic sense.
It's like if I buy a hot dog every month and they tell me they're raising the price next month, or discontinuing honey mustard. Inconvenient but they're not doing anything wrong.
Especially since, given my back of the napkin math, they're giving us a pretty decent discount on the subscription plans.
the $200 tier math only works because humans have to type, read, and eventually go to sleep. OpenClaw replaced that human latency with a non-blocking while true loop. tbh they aren't really defending an ecosystem here, they are just desperately patching a hole in their unit economics that collapsed the second the meat bottleneck was removed.
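The "meat bottleneck" point can be made concrete with some arithmetic. All the numbers here are illustrative assumptions (tokens per request, human turnaround time, agent latency), not measured figures; the only claim is the shape of the ratio.

```python
# Illustrative comparison of tokens consumed per hour when requests
# are gated by human turnaround vs. issued back-to-back by a loop.
TOKENS_PER_REQUEST = 4_000    # assumed average per round trip
HUMAN_SECONDS_PER_TURN = 120  # read the output, think, type a reply
AGENT_SECONDS_PER_TURN = 15   # model latency only, no human in loop

human_tokens_per_hour = 3600 / HUMAN_SECONDS_PER_TURN * TOKENS_PER_REQUEST
agent_tokens_per_hour = 3600 / AGENT_SECONDS_PER_TURN * TOKENS_PER_REQUEST

print(int(agent_tokens_per_hour / human_tokens_per_hour))  # 8
```

Under these (made-up) numbers an unattended loop draws 8x the tokens of an interactive session, and it also runs around the clock, so the multiplier on a monthly quota is far larger still.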
It is a pity though. For less than an hour of setup the Nanoclaw bot proved enormously useful at tracking meal times, training progress, etc and the interface was easy enough for the family to get involved. The ease of setup was really remarkable, and Anthropic creating artificial barriers just seems user hostile.
Personally, idk why they don't just make Claude Code more open-source friendly. Let the community do PRs for Claude Code. Let us change the tooling; if I could use their own client but swap out the tools it calls and how, I would use like 90% fewer tokens.
I suspect the same for the forced high AI usage quotas for developers at MS etc. We've had multiple generations of models trained on all of the code that's available and there are diminishing returns on how much that data can do for training now. Newly published publicly available data is also made up of a significant portion of slop.
The best way to get fresh training data from real human brains might be to have real humans use your first party tools where you control all of the telemetry.
How is what you are asking for different from what they are saying?
> you’ll no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw.
My understanding is that Conductor and others aren't using it.
The GPU also takes around $500-$1000 in electricity, and even then you won't be able to run a model of quality comparable to Anthropic's.
It's also hard to justify, since who knows how quickly it will become outdated; maybe soon you'll need a Blackwell chip (think a $100k PC; check out the NVIDIA DGX Station) to run a decent model.
... It'll take a lot more than a year for a machine capable of running openclaw with any sort of reasonable performance to pay for itself.
Or can you report that you've had good luck with a Strix Halo or local GPU for less than $40k up-front costs?
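Running the comment's own figures through the back-of-the-napkin math supports the "a lot more than a year" claim. The $40k hardware cost, the $500-$1000 electricity range, and the $200/month subscription all come from the thread above; treating electricity as an annual cost and ignoring depreciation and the quality gap are my simplifying assumptions.

```python
# Payback period for a local rig vs. a $200/month subscription,
# using the figures quoted in the comments above.
hardware = 40_000                  # up-front cost of the rig, USD
electricity_per_year = 750         # midpoint of the $500-$1000 range
subscription_per_year = 200 * 12   # $200/month plan

# Years until total local cost drops below total subscription cost:
payback_years = hardware / (subscription_per_year - electricity_per_year)
print(round(payback_years, 1))  # 24.2
```

Even with generous assumptions, the break-even point is measured in decades, not years, which is why the subscription pricing looked like such a good deal in the first place.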
You can use your Claude Code subscription with third-party tools, but you have to use the Claude Code harness. Or, you use the API. OpenClaw could use the Claude Code harness, but they don't.
Just look at how Sam Altman has led OpenAI, step by step, to dominate (and choke out) Anthropic, a company founded by a group of engineers who were once part of the turmoil at OpenAI.
Anthropic's product thinking is terrible, even though the company is technically very good.
OpenAI seems to mostly be chasing the consumer market, but not doing great at it.
IMO, the goal here is clear: they want people using their software, building an ecosystem around their software, and giving them visibility around their software.
It's never about capacity or usage; they just want to own the Claude ecosystem. There's a reason they don't support AGENTS.md or other initiatives: they want everything to be theirs and theirs alone. You can argue "well, fair enough", but to me this is a clear abuse of their position in the market.