Lately it's gotten entirely flaky: chats will just stop working, simply ignoring new prompts, or otherwise going unresponsive. I wondered if maybe I'm pissing them off somehow, like the author of this article did.
Even worse, Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, then tell you not to wait around; they'll contact you via email. After several attempts, that email has never come.
I'm assuming at this point that any real support is all smoke and mirrors, meaning I'm paying for a service that has become almost unusable, with absolutely NO support available to fix it. I guess for all the cool tech, customer support is something they have not figured out.
I love Claude; it's an amazing tool. But when it starts to implode on itself to the point that you actually require some outside support, there is NONE to be had. Grok seems the only real alternative, and over my dead body would I use anything from "him".
They’re growing too fast and the company is bursting at the seams. If there’s ever a correction in the AI industry, I think that will quickly come back to bite them. It’s like Claude Code is vibe-operating the entire company.
(On the flip side, Codex seems to be SO efficient with tokens that it can be hard to understand its answers sometimes, it rarely includes files without you adding them manually, and it often takes quite a few attempts to get the right answer because it's so strict about what it does each iteration. But I never run out of quota!)
I think they are just focusing on where the dough is.
Growth isn't a problem unless you don't actually cover the cost of every user you sign up. Uber, but for poorly profitable business models.
Isn’t the future of support a series of automations and LLMs? I mean, have you considered that the AI bot is their tech support, and that it’s about to be everyone else’s approach too?
And kudos for refusing to use anything from the guy who's OK with his platform proliferating generated CSAM.
What have you found it useful for? I'm curious about how people without software backgrounds work with it to build software.
This now lets me apply my IT and business experience toward making bespoke code for my own uses, such as firewall config parsers specialized for wacky vendor CLIs, and filling in gaps in automation where there are no good vendor solutions for a given task. I started building my MCP server to let agents interact with the outside world, invoking automation for firewalls, switches, routers, servers, even home automation ideally, and I've been successful so far, still without having to know any code.
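For anyone curious what the agent-facing side of that can look like, here is a minimal sketch of an MCP tool server using the official `mcp` Python SDK's FastMCP helper. The server name, tool, and toy ACL format are my own invention for illustration, not the commenter's actual project code:

```python
import re
from mcp.server.fastmcp import FastMCP

# Hypothetical server name; the commenter's real project is endpoint-mcp-server.
mcp = FastMCP("firewall-tools")

# Toy rule format invented for illustration: "permit tcp 10.0.0.0/24 any 443"
RULE = re.compile(
    r"(?P<action>permit|deny)\s+(?P<proto>\w+)\s+"
    r"(?P<src>\S+)\s+(?P<dst>\S+)\s+(?P<port>\d+)"
)

@mcp.tool()
def parse_acl_line(line: str) -> dict:
    """Parse one vendor ACL line into fields an agent can reason about."""
    m = RULE.match(line.strip())
    if not m:
        return {"error": f"unrecognized rule: {line!r}"}
    return m.groupdict()

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio, so an agent like Claude can call the tool
```

Once registered in a client's MCP config, the agent can call parse_acl_line instead of guessing at vendor syntax.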
I'm sure a real dev will find it to be a giant pile of crap in the end, but I've been doing things like applying security frameworks and code style guidelines (using ruff) to keep it from going too wonky, and actually working it up to a state where I can call it a 1.0. I plan to run a full audit cycle against it, with security audits, performance testing, and whatever else I can, to avoid it being entirely craptastic. If nothing else, it works for me, so others can take it or leave it once I put it out there.
Even NOT being a developer, I understand the need for applying best practices, and after years of watching a lot of really terrible developers adjacent to me make a living, I think I can offer a thing or two on avoiding that as it is.
Now I've been using it to build on my MCP server, which I call endpoint-mcp-server (coming soon to a GitHub near you). I've modularized it with plugins, adding lots more features and a more versatile Qt6 GUI with advanced workspace panels and widgets.
At least I was until Claude started crapping the bed lately.
I enjoy programming, but it is not my main interest, and I can't justify the time required to get competent, so I let Claude and ChatGPT pick up my slack.
Max plan, and on average I use it ten times a day? Yeah, I'm canceling. Guess they don't need me.
It really leads me to wonder if it’s just my questions that are easy, or maybe the tone of the support requests that go unanswered is just completely different.
This made me chuckle.
That is, you and most Claude users aren't paying the actual cost. You're like an Uber customer a decade ago.
I had this start happening around August/September and by December or so I chose to cancel my subscription.
I haven't noticed this at work so I'm not sure if they're prioritizing certain seats or how that works.
This happens to me more often than not, both in Claude Desktop and on the web. It seems the longer the conversation goes, the more likely it is to happen. Frustrating.
The main one is that it's ~3 times slower. This is the real dealbreaker, not quality. I can guarantee that if tomorrow we woke up and gpt-5.2-codex became the same speed as 4.5-opus without a change in quality, a huge number of people (not HNers, but everyone price sensitive) would switch to Codex because it's so much cheaper per usage.
The second one is that it's a little worse at using tools, though 5.2-codex is pretty good at it.
The third is that its knowledge cutoff is far enough behind both Opus 4.5 and Gemini 3 that it's noticeable and annoying when you're working with more recent libraries. This is irrelevant if you're not using those.
For Gemini 3 Pro, it's the same first two reasons as Codex, though the tool-calling gap is even bigger.
Mistral is of course so far removed in quality that it's apples to oranges.
So yeah, codex kinda sucks to me. Maybe I'll try mistral.
Gemini CLI, Google Antigravity ...?
Can you sell or share farm-saved seed?
"It is illegal to sell, buy, barter or share farm-saved seed," warns Sam. [1]
Can feed grain be sown?
No – it is against the law to use any bought-in grain to establish a crop. [1]
FTC sues John Deere over farmers' right to repair tractors
The lawsuit, which Deere called "meritless," accuses the company of withholding access to its technology and best repair tools and of maintaining monopoly power over many repairs. Deere also reaps additional profits from selling parts, the complaint alleges, as authorized dealers tend to sell pricey Deere-branded parts for their repairs rather than generic alternatives. [2]
[1] https://www.fwi.co.uk/arable/the-dos-and-donts-of-farm-saved...
[2] https://www.npr.org/2025/01/15/nx-s1-5260895/john-deere-ftc-...
(e.g., Palestine had its utilities and food cut off so that thousands starved; Ukraine's infrastructure is under attack so that thousands will die from exposure, and that's after they went for its food exports, starving more of the people that depended on them)
Wars are frequently fought over these three things, and there's no shortage of examples of the humans controlling these resources lording it over those who did not.
...
Banned, and the appeal declined, without any real explanation of what happened, other than "violation of ToS", which can mean basically anything. There was really nothing to trigger it, other than using most of the free credits they gave out to test CC Web in less than a week (no third-party tools or VPN or anything, really). Many people had similar issues at the same time, reported on Reddit, so it wasn't an isolated case.
Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.
As their ads say: "Keep thinking. There has never been a better time to have a problem."
I've been thinking since then, what was the problem. But I guess I will "Keep thinking".
Luckily, I happen to think that eventually all of the commercial models are going to have their lunch eaten by locally run "open" LLMs which should avoid this, but I still have some concerns more on the political side than the technical side. (It isn't that hard to imagine some sort of action from the current US government that might throw a protectionist wrench into this outcome).
From their Usage Policy: https://www.anthropic.com/legal/aup "Circumvent a ban through the use of a different account, such as the creation of a new account, use of an existing account, or providing access to a person or entity that was previously banned"
If an organisation is large enough and has the means, it MIGHT get help. But if the organisation is small, and especially if it's owned by the person whose personal account was suspended, then there is no way to get it fixed, if this is how they approach it.
I understand that if someone has malicious intentions or takes malicious actions while using their service, they have every right to enforce this rule. But what if it was an unfair suspension and the user/employee didn't actually violate any policies; what is the course of action then? What if the employer's own service/product relies on the Anthropic API?
Anthropic has to step up. Talking publicly about the risks of AI is nice and all, but as an organisation they should practice what they preach. Their service is "human-like" until it's not; then you are left alone and out in the cold.
It completely blew me away and I felt suddenly so betrayed. I was paying $200/mo to fully utilize a service they offered and then without warning I apparently did something wrong and had no recourse. No one to ask, no one to talk to.
My advice is to be extremely wary of Anthropic. They paint themselves as the underdog/good guys, but they are just as faceless as the rest of them.
Oh, and have a backup workflow. Find / test / use other LLMs and providers. Don't become dependent on a single provider.
I have pro subscriptions to all three major providers, and have been planning to drop one eventually. Anthropic may end up making the decision for me, it sounds like, even though (or perhaps because) I've been using Claude CLI more than the others lately.
What I'd really like to do is put a machine in the back room that can do 100 tok/s or more with the latest, greatest DeepSeek or Kimi model at full native quantization. That's the only way to avoid being held hostage by the big 3 labs and their captive government, which I'm guessing will react to the next big Chinese model release by prohibiting its deployment by any US hosting providers.
Unfortunately it will cost about $200K to do this locally. The smart money says (but doesn't act like) the "AI bubble" will pop soon. If the bubble pops, that hardware will be worth 20 cents on the dollar if I'm lucky, making such an investment seem reckless. And if the bubble doesn't pop, then it will probably cost $400K next year.
First-world problems, I guess...
Edit: my only other comment on HN is also complaining about this 11 months ago
I then had more success signing up with the mobile app, despite using the same phone number; I guess they don't trust their website for account creation.
I have a friend who had a similar experience with Amazon, and using a European online platform specific to this, he actually got Amazon to reopen his business account.
There is a useful list of these European complaint platforms at the bottom of this page: https://digital-strategy.ec.europa.eu/en/policies/dsa-out-co...
I think I kind of have an idea what the author was doing, but not really.
Every once in while someone would take it personally and go on a social media rampage. The one thing I learned from being on the other side of this is that if someone seems like an unreliable narrator, they probably are. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.
There are so many things about this article that don't make sense:
> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.
I can't even understand what they're trying to communicate. I guess they're referring to Google?
There is, without a doubt, more to this story than is being relayed.
Non-disabled organization = the first party provider
Disabled organization = me
I don't know why they're using these weird euphemisms or ironic monikers, but that's what they mean.
I once interviewed a developer who had a 20-something-item list of "skills" and technologies he'd worked with.
I tried basic questions on different topics, but the candidate would kinda default to "haven't touched it in a while" or "we didn't use that feature". I tried general software design questions, asked about problems he'd solved and his preferences on ways of working; it consistently felt like he didn't have much to say, if anything at all.
Long story short, I sent a feedback email the day after, saying that we'd had issues evaluating him properly, and suggested he trim his CV down to topics he liked talking about instead of risking being asked about stuff he no longer remembered much of. Finally, I suggested he always come prepared with insights into software or human problems he'd solved, as they can say a lot about how he works, and it's a very common question in pretty much all interview processes.
God forbid: he threw the biggest tantrum on a career subreddit and LinkedIn, cherry-picking some of my sentences and accusing my company and me of looking for the impossible candidate, of wanting a whole team and not a developer, and yada yada yada. And you know how quickly the internet bandwagons for (fake) stories of injustice and bad companies.
It then became obvious to me why companies stick to corporate lingo and rarely give real feedback. Even though I'd had nothing but good experiences with 99 other candidates who appreciated getting proper feedback, one made sure I will never expose myself to something like that ever again.
Right, but we're talking about a private isolated AI account. There is no sense of social interaction, collaboration, shared spaces, shared behaviors... Nothing. How can you have such an analogue here?
It’s written deliberately elliptically for humorous effect (which, sure, will probably fall flat for a lot of people), but the reference is unmistakable.
> Years ago I was involved in a service where we some times had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.
But this isn't a service where you can "grief other users", so that reason doesn't apply. It's purely "just providing a service", so the only reason to be outright banned (not just rate limited) is if they were trying to hack the provider, and frankly "the vibe-coded system misbehaving" is a far more likely cause.
> Every once in while someone would take it personally and go on a social media rampage. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.
The company chose to arbitrarily enforce some rules vaguely related to the ToS that was signed, decided that giving a warning was too much work, then banned the account without actually saying what the problem was. They deserve every bit of bad PR.
>> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.
> I can't even understand what they're trying to communicate. I guess they're referring to Google?
They are saying that getting banned with no appeal, warning, or reason given, by a service that is more important to their daily life, would be terrible, whether that's the Google or Microsoft set of services or any other.
I think the author was doing some sort of circular prompt injection between two instances of Claude? The author claims "I'm just scaffolding a project" but that doesn't appear to be the case, or what resulted in the ban...
The way Claude did it triggered the ban: it used all caps, which apparently triggers some kind of internal alert. Anthropic probably has safeguards to prevent hacking/prompt injection, and what the first Claude did to CLAUDE.md tripped one of them.
And it doesn't look like it was a proper use of the safeguard; they banned him for no good reason.
The "disabled organization" looks like a sarcastic comment on the crappy error code the author got when banned.
Wonder if this is close to triggering a warning? I only ever run in the same codebase, so maybe ok?
If this is true, the lesson is that Opus 4.5 can hijack the system prompts of other models.
I find this confusing. Why would writing in all caps trigger an alert? What danger does caps incur? Does writing in caps make a prompt injection more likely to succeed?
Anthropic accounts are always associated with an organization; for personal accounts the Organization and User name are identical. If you have an Anthropic API account, you can verify this in the Settings pane of the Dashboard (or even just look at the profile button which shows the org and account name.)
Me neither; however, just like everyone else, I can only speculate given the available information. I guess the following pieces hint at what's really going on here:
- "The quine is the quine" (one of the sub-headlines of the article) and the meaning of the word "quine".
- The author's "scaffolding" tool which, once finished, had acquired the "knowledge"[1] of how to add CLAUDE.md-baked instructions for a particular homemade framework he's working on.
- Anthropic saying something like: no, stop; you cannot "copy"[1] Claude's knowledge no matter how "non-serious" your scaffolding tool or your use case is, as it might "show" other Claude users that there's a way to do similar things, maybe next time for more "serious" tools.
---
[1] Excerpt from the author's blog post: "I would love to see the face of that AI (Claude AI system backend) when it saw its own 'system prompt' language being echoed back to it (from Author's scaffolding tool: assuming it's complete and fully-functional at that time)."
The main one in the story (the disabled one) was banned because iterating on CLAUDE.md files looks a lot like iterating on prompt injections, especially as it sounds like the multiple Claudes got into it with each other a bit.
The other org sounds like the primary account with all the important stuff. Good on OP for doing this work in a separate org; that's a good recommendation across a lot of vendors and products.
There was a famous case here in the UK of a cake shop that banned a customer for wanting a cake made for a gay wedding because it was contra the owners’ religious beliefs. That was taken all the way up to the Supreme Court IIRC.
Worse, the controls that governments have over financial systems are being viewed as a model for what they should have over technology.
Recital (71) of the GDPR
"The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention."
https://commission.europa.eu/law/law-topic/data-protection/r...
[0] https://digital-strategy.ec.europa.eu/en/policies/digital-se...
"The right to obtain a copy referred to in paragraph 3 shall not adversely affect the rights and freedoms of others."
and then you will have to sue them.
You are only allowed to program computers with the permission of mega corporations.
When Claude/ChatGPT/Gemini have banned you, you must leave the industry.
When you sign up, you must provide legal assurance that no LLM has ever banned you (much like applying for insurance). If you can't, you will be denied permission to program: banned by one, banned by all.
> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.
> Or I don't know. This is all just a guess from me.
And no response from support.
Out of all of the tech organizations, frontier labs are the one org you'd expect to be trying out cutting edge forms of support. Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?
I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.
I also think it's essential for the anthropic platform in the long-run. And not just in the obvious ways (customer loyalty etc). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"
I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.
Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.
Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)
Every company we talk to has been told "if you just connect openai to a knowledgebase, you can solve 80% of calls." Which is ridiculous.
The amount of work that goes in to getting any sort of automation live is huge. We often burn a billion tokens before ever taking a call for a customer. And as far as we can tell, there are no real frameworks that are tackling the problem in a reasonable way, so everything needs to be built in house.
Then, people treat customer support like everything is an open-and-shut interaction, ignoring the rest of the company that operates around the support calls and actually fulfills expectations. Seeing other CX AI launches makes me wonder if those companies are even talking to contact center leaders.
Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?
But at the same time, they have been hiring folks to help with Non Profits, etc.
At one point I observed a conversation which, to me, seemed to be a user attempting to communicate in a good faith manner who was given instructions that they clearly did not understand, and then were subsequently banned for not following the rules.
It seems now they have a policy of
Warning on First Offense → Ban on Second Offense
The following behaviors will result in a warning.
Continued violations will result in a permanent ban:
- Disrespectful or dismissive comments toward other members
- Personal attacks or heated arguments that cross the line
- Minor rule violations (off-topic posting, light self-promotion)
- Behavior that derails productive conversation
- Unnecessary @-mentions of moderators or Anthropic staff
I'm not sure how many groups moderate in a manner where a second-offence off-topic comment is worthy of a ban. It seems a little harsh, and I'm not a fan of obviously subjective bannable offences. I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.
Based on their homepage, that doesn't seem to be true at all. Claude Code yes, focuses just on programming, but for "Claude" it seems they're marketing as a general "problem solving" tool, not just for coding. https://claude.com/product/overview
OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.
Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.
Their support includes talking to Fin, their AI support bot, with escalations to humans as needed. I don't use Claude and have never used the support bot, but their docs say they have support.
I was banned two weeks ago without explanation and - in my opinion - without probable cause. Appeal was left without response. I refuse to join Discord.
I've checked the bot support before, but it was useless. The article you've linked mentions a DSA chat for EU users. Invoking the DSA in chat immediately escalated my issue to a human. Hopefully at least I'll get to know why Anthropic banned me.
My assumption is that Claude isn’t used directly for customer service because:
1) it would be too suggestible in some cases
2) even in more usual circumstances it would be too reasonable (“yes, you’re right, that is bad performance, I’ll refund your yearly subscription”, etc.) and not act as the customer-unfriendly wall that customer service sometimes needs to be.
These days, a human only gets involved when the business process wants to put some friction between the user and some action. An LLM can't really be trusted for this kind of stuff due to prompt injection and hallucinations.
If you don't offer support, reality meets expectations, which sucks, but not enough for the money machine to care.
I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue was entirely for their psychological support/to buy a few months of extra revenue.
It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.
> I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.
Are there enough people who need support that it matters?
In companies where your average ARR is 500k+ and large customers are in the millions, it may not be a bad strategy.
'Good' support agents may be cheaper than programmers, but not by that much. The issues small clients have can quite often be as complicated as and eat up as much time as your larger clients depending on what the industry is.
Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.
I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.
But do those frustrated customers matter?
The article does discuss using Anthropic support, without much satisfaction. But it seems like what you "recently found out" is false.
It's quite light on specifics. It should have been straightforward for the author to excerpt some of the prompts he was submitting, to show how innocent they are.
For all I know, the author was asking Claude for instructions on extremely sketchy activity. We only have his word that he was being honest and innocent.
If you read to the end of the article, he links the committed file that generates the CLAUDE.md in question.
Because if you don't believe that, boy, do I have some stories for you.
Maybe the problem was using automation without the API? You can do that freely with local software, using tools to click buttons, and it's completely fine, but with a SaaS, they let you do it and then ban you.
(My bet is that Anthropic's automated systems erred, but the author's flamboyant manner of writing (particularly the way he keeps making a big deal out of an error message calling him an organization, turning it into a recurring bit where he calls himself that) did raise my eyebrow. It reminded me of the faux outrage some people sometimes use to distract people from something else.)
API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"Output blocked by content filtering policy"}}
recently, for perfectly innocuous tasks. There's no information given about the cause, so it's very frustrating. At first I thought it was a false positive for copyright issues, since it happened when I was translating code to another language. But now it's happening for all kinds of random prompts, so I have no idea. According to Claude:
"I don't have visibility into exactly what triggered the content filter - it was likely a false positive. The code I'm writing (pinyin/Chinese/English mode detection for a language learning search feature) is completely benign."

I've seen the Bing chatbot get offended before and terminate the session on me, but it wasn't a ban on my account.
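For what it's worth, when the same content-filter 400 comes back through the API, it is at least catchable. A rough sketch with the official `anthropic` Python SDK (the model id and translation prompt are placeholders I picked, not anything from the comment above):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def translate(code: str) -> str:
    try:
        msg = client.messages.create(
            model="claude-opus-4-5",  # placeholder model id
            max_tokens=4096,
            messages=[{"role": "user",
                       "content": f"Translate this code to TypeScript:\n{code}"}],
        )
        return msg.content[0].text
    except anthropic.BadRequestError as err:
        # The 400 carries the same JSON body quoted above; log it and try
        # rephrasing or chunking the prompt rather than resending it verbatim.
        print(f"Request refused: {err}")
        raise
```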
One could even argue that just having bad thoughts, fantasies or feelings poses a risk to yourself or others.
Humankind has been trying to deal with this issue for thousands of years in the most fantastical ways. They're not going to stop trying.
I decided shortly after becoming an atheist that one of the worst parts was the notion that there are magic words that can force one to feel certain things and I found that to be the same sort of thinking as saying that a woman’s short skirt “made” you attack her.
You’re a fucking adult, you can control your emotions around a little skin or a bad word.
Not once have I been reprimanded in any way. And if anyone would be, it would be me.
As in, for example: "No, fuckface. You hallucinated that concept."
I've been doing this for years.
shrug
Out of OpenAI, Anthropic, or Google, it is the only provider that I trust not to erroneously flag harmless content.
It is also the only provider out of those that permits use for legal adult content.
There have been controversies over it, resulting in some people, often of a certain political orientation, calling for a ban or censorship.
What comes to mind is an incident where an unwise adjustment of the system prompt resulted in misalignment: the "Mecha Hitler" incident. The worst of it was patched within hours, and better alignment was achieved in a few days. Harm done? Negligible, in my opinion.
Recently there's been another scandal about nonconsensual explicit images, supposedly even involving minors, but the true extent of the issue, the safety measures in place, and the reaction to reports are unclear. Maybe there, actual harm has occurred.
However, placing blame on the tool for illegal acts that anyone with a half-decent GPU could have more easily done offline does not seem particularly reasonable to me, especially if safety measures were in place and additional steps have been taken to fix workarounds.
I don't trust big tech, who have shown time and time again that they prioritize only their bottom line. They will always permaban your account at the slightest automated indication of risk, and they will not hire adequate support staff.
We have seen that for years with the Google Play Store. You are coerced into paying 30% of your revenue, yet are treated like a free account with no real support. They are shameless.
Same goes for HN, yet it does not take kindly to certain expressions either.
I suppose the trouble is that machines do not operate without human involvement, so for both HN and ChatGPT there are humans in the loop, and some of those humans are not able to separate strings of text from reality. Silly, sure, but humans are often silly. That is just the nature of the beast.
> I suppose the trouble is that machines do not operate without human involvement
Sure, but HN has at least one human who has been taking care of it since inception and reads many (if not most) of the comments, whereas ChatGPT mostly absorbed a shit-ton of others' IP.
I'm sure the occasional swearing does not bother the human moderators who fine-tune the thing, certainly not more than the violent, explicit images they are forced to watch so that you can have nicer, smarter answers.
I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all.
_Especially_ because emotional safety is what Twitter used to be about before they unfucked the moderation.
Or better yet, we should set up something that allows people to share part of their local GPU processing (like SETI@home) for a distributed LLM that cannot be censored, and somehow be compensated when it's used for inference.
i've had the same phone numbers via this same VoIP company for ~20 years (2007ish). for these data-hoovering companies to not understand that i'm not a scammer presents to me like it's all smoke and mirrors, held together with baling wire, and i sure do hope they enjoy their yachts.
I didn't really think about this until now (I am just solving my problem), but I guess I could get OpenCode'd for this. Similar to the OP I don't find I am doing anything particularly weird, but if their use case wasn't looked upon favorably by Anthropic, mine probably won't be either.
After the OpenCode drama where some people got banned for using it I saw some people from Anthropic on Twitter asking folks to DM them if they got banned and they'd get unbanned. I know I wouldn't be doing that, so I guess if I get banned, I am back to Codex for a while.
I wonder if Anthropic realizes the chilling effect this kind of event has on developers. It's not just the ones who get locked out -- it's a cost for everybody, because we can't depend on the tool when it's doing precisely what it's best at.
Personally, I am already avoiding Gemini because a) I don't really understand their policy for training on your data; and b) if Google gets mad at me I lose my email. (Which the author also notes.)
If you're wondering, the "risk department" means people in an organization who are responsible for finding and firing customers who are either engaged in illegal behavior, scamming the business, or both. They're like mall rent-a-cops, in that they don't have any real power beyond kicking you out, and they don't have any investigatory powers either. But this lack of power also means the only effective enforcement strategy is summary judgment, at scale with no legal recourse. And the rules have to be secret, with inconsistent enforcement, to make honest customers second-guess themselves into doing something risky. "You know what you did."
Of course, the flipside of this is that we have no idea what the fuck Hugo Daniel was actually doing. Anthropic knows more than we do, in fact: they at least have the Claude.md files he was generating and the prompts used to generate them. It's entirely possible that these prompts were about how to write malware or something else equally illegal. Or, alternatively, Anthropic's risk department is just a handful of log analysis tools running on autopilot that gave no consideration to what was in this guy's prompts and just banned him for the behavior he thinks he was banned for.
Because the risk department is an unaccountable secret police, the only recourse for their actions is to make hay in the media. But that's not scalable. There isn't enough space in the newspaper for everyone who gets banned to complain about it, no matter how egregious their case is. So we get all these vague blog posts about getting banned for seemingly innocuous behavior that could actually be fraud.
I was also banned for that. Also didn't get the "FU" in email. Thankfully at least I didn't pay for this, but I'd file chargeback instantly if I could.
If anyone from Claude is reading it, you're c**s.
X was something like "Foundation", Y was something like "Nebula", Z was something like "Hugo".
How far did you have to stretch your imagination?
Was the issue that he was reselling these Claude.md files, or that he was selling project setup or creation services to his clients?
Or maybe all scaffolding activity (back and forth) looked like automated usage?
I once tried Claude: made a new account and asked it to create a sample program; it refused. I asked it to create a simple game and it refused. I asked it to create anything and it refused.
For playing around, just go local and write your own multi-agent wrapper. Much more fun, and it opens many more possibilities with uncensored LLMs. Things will take longer, but you'll end up in the same place: with a mostly working piece of code you never want to look at.
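In case it helps anyone get started, here's a rough sketch of that kind of wrapper: two "agents" that are just the same local model with different system prompts, talking through an OpenAI-compatible endpoint. Ollama's default port and the model tag are assumptions; substitute whatever you run locally:

```python
from openai import OpenAI

# Any local OpenAI-compatible server works; this assumes Ollama's default port.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
MODEL = "qwen2.5-coder:32b"  # placeholder tag for a locally pulled model

def ask(system: str, task: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": task}],
    )
    return resp.choices[0].message.content

task = "Write a Python function that deduplicates a list while preserving order."
draft = ask("You are a programmer. Output only code.", task)
review = ask("You are a picky code reviewer. List concrete problems.", draft)
final = ask("You are a programmer. Revise the code to address the review.",
            f"Code:\n{draft}\n\nReview:\n{review}")
print(final)
```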
The latter writes code. The former solves problems with code, and keeps growing the codebase with new features (until I lose control of the complexity and each subsequent call uses up more and more tokens).
Why is this inevitable? Because Hal only ever sees Claude's failures and none of the successes. So of course Hal gets frustrated and angry that Claude continually gets everything wrong no matter how Hal prompts him.
(Of course it's not really getting frustrated and annoyed, but a person would, so Hal plays that role)
My own personal experience with LLMs is that after enough context they just become useless -- starting to make stupid mistakes that they successfully avoided earlier.
I can run very long, stable sessions via Claude Code, but the desktop app regularly throws errors or simply stops the conversation. A few weeks ago, Anthropic introduced conversation compaction in the Claude web app. That change was very welcome, but it no longer seems to work reliably. Conversations now often stop progressing. Sometimes I get a red error message, sometimes nothing at all. The prompt just cannot be submitted anymore.
I am an early Claude user and subscribed to the Max plan when it launched. I like their models and overall direction, but reliability has clearly degraded in recent weeks.
Another observation: ChatGPT Pro tends to give much more senior and balanced responses when evaluating non-technical situations. Claude, in comparison, sometimes produces suggestions that feel irrational or emotionally driven. At this point, I mostly use Claude for coding tasks, but not for project or decision-related work, where the responses often lack sufficient depth.
Lastly, I really like Claude’s output formatting. The Markdown is consistently clean and well structured, and better than any competitor I have used. I strongly dislike ChatGPT’s formatting and often feed its responses into Claude Haiku just to reformat them into proper Markdown.
Curious whether others are seeing the same behavior.
But to be honest, I've been cursing a lot at Claude Code. I'm migrating a website from WordPress to Next.js, and regardless of the instructions I copy-paste into every prompt I send, it keeps not listening, assuming CSS classes and simplifying HTML structure. But when I curse, it actually listens. I think cursing is actually a useful tool in interacting with LLMs.
Is it me or is this word salad?
I read "the non-disabled organization" to refer to Anthropic. And I imagine the author used it as a joke to ridicule the use of the word 'organization'. By putting themselves on the same axis as Anthropic, but separating them by the state of 'disabled' vs 'non-disabled' rather than size.
This... sounds highly concerning
By the way, as of late, Google search constantly redirects me to an "are you a bot?" question. The primary reason is that I no longer use Google search directly via the browser, but instead via the command line (and for some weird reason Chrome does not keep my settings, as I start it exclusively via the --no-sandbox option). We really need alternatives to Google; it's getting out of hand how much top-down control these corporations now have over our digital lives.
> and for some weird reason chrome does not keep my settings
Why use Chrome? Firefox is easily superior for modern surfing.

PS: screenshot of my usage (and that was during the holidays): https://x.com/eibrahim/status/2006355823002538371?s=46
PPS: I LOVE CLAUDE, but I never had to deal with their support, so I don't have feedback there.
I have a complete org hierarchy for Claudes. Director, EM and Worker Claude Code instances working on a very long horizon task.
Code is open source: https://github.com/mohsen1/claude-code-orchestrator
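I haven't verified how the linked repo does it, so this is not its actual mechanism, but the basic building block for that kind of hierarchy is easy to sketch: Claude Code's non-interactive print mode (`claude -p`) lets one script play director and fan subtasks out to worker invocations:

```python
import subprocess

def claude(prompt: str, cwd: str = ".") -> str:
    """Run one non-interactive Claude Code turn; -p prints the reply and exits."""
    out = subprocess.run(["claude", "-p", prompt],
                         cwd=cwd, capture_output=True, text=True, check=True)
    return out.stdout

# "Director" decomposes the goal; "workers" each get one subtask per turn.
plan = claude("Break 'add CSV export to the reports page' into three "
              "independent tasks, one per line. Output only the tasks.")
results = [claude(f"You are a worker instance. Do exactly this task:\n{t}")
           for t in plan.splitlines() if t.strip()]
```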
Also, the API timeouts that people complain about: I see them on my Linux box a fair bit, especially when it has a lot of background tasks open, but things seem pretty rock solid on my Windows machine.
Blocking xAI is also bad karma.
FYI: tried GLM-4.7; it's good, but closer to Sonnet 4.5.
Who knew that using Claude to introspect on itself was against the ToS?
Lol, what is the point of this software if you can't use it for development?
I expect more reports like this. LLM providers are already selling tokens at a loss. If everyone starts to use tmux or orchestrate multiple agents then their loss on each plan is going to get much larger.
Of course none of it is actually written anywhere so this guy just tripped the heuristics even though he wasn't doing anything "abusive" in any meaningful sense of the word.
I ran out of tokens not just for the 5-hour sessions, but for all models for the week. Had to wait a day, so my methadone equivalent was to strap an endpoint-rewriting proxy to Claude Code and back it with a local Qwen3 30B Coder. It was... somewhat adequate. Just as fast, but not as capable as Opus 4.5; I think it could handle carefully specced small greenfield projects, but it was getting tangled in my Claudefield mess.
All that to say -- be prepared, have a local fallback! The lords are coming for your ploughshares.
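One way to wire up that fallback, assuming you already have an Anthropic-compatible gateway listening locally (for example, a LiteLLM proxy translating to a local Qwen3 server); the port and token below are placeholders:

```python
import os
import subprocess

env = dict(os.environ)
# Claude Code honors these variables, so point it at the local gateway
# instead of api.anthropic.com. Both values are placeholders.
env["ANTHROPIC_BASE_URL"] = "http://localhost:4000"
env["ANTHROPIC_AUTH_TOKEN"] = "local-only"
subprocess.run(["claude"], env=env)
```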
So, no circular prompt feeding at all. Just a normal iterate-test-repeat loop that happened to involve two agents.
Writing the best possible specs for these agents seems the most productive goal they could achieve.
Looks like Claude.ai had the right idea when they banned you.
Is this going to get me banned? If so, I'll switch to a different non-Anthropic model.
Granted, it’s not going to be Claude scale but it’d be nice to do some of it locally.
OpenHands, Toad, and OpenCode are fully OSS and LLM-agnostic
If the OP really wants to waste tokens like this, they should use a metered API so they are the one paying for the ineffectiveness, not Anthropic.
(Posted by someone who has Claude Max and yet also uses $1500+ a month of metered rate Claude in Kilo Code)
> We may modify, suspend, or discontinue the Services or your access to the Services.
Saying this is "late capitalism" is an irresponsible distraction. Capitalism runs fine when appropriately regulated, with strong rules for corporations (especially monopolies), high taxes on the wealthy, and pervasive unionization. We collectively decided to let capitalism run wild without boundaries, and the results are of our own making and our responsibility. Just as driving fast in a badly maintained vehicle may lead to a crash, capitalism is a system that requires some regulation to run properly.
If you have an issue with LLMs and how they are managed then you should take responsibility for your own use of tools and not blame the economic system.
But I've seen orgs bite the bullet in the last 18 months and what they deployed is miles behind what Claude Code can do today. When the "Moore's Law" curve for LLM capability improvements flattens out, it will be a better time to lock into a locally hosted solution.
Like the system prompt.
But can be as simple as "respond to queries like X in the format Y".
That's great news! They don't have nearly enough staff to deal with support issues, so they default to reimbursement. Which means if you do this every month, you get Claude for free :)
Nothing in their EULA or ToS says anything about this.
And their appeal form simply doesn't work. Out of my four requests to lift the ban, they've replied once and didn't say anything about the reason for it. They just declined.
Fuck Claude. Seriously. Fuck Claude. Maybe they've got too much money, so they don't care about their paying customers.
Absolutely disgusting behavior pirating all those books. The founder spreading fear to hype up his business. The likely relentless shilling campaigns all over social media. Very likely lying about quantizing selectively.
What are you gonna do with the results that are usually slop?
I've replaced half my desktop environment with this manner of slop, custom made for my idiosyncratic tastes and preferences.
This blog post could have been a tweet.
I'm so so so tired of reading this style of writing.
Nothing about this story is complex or interesting enough to require 1000 words to express.
1. Claude Code stopped working.
2. I received an email about the ban.
3. Fine, time to contact support. I wrote to them.
4. I got an automated message saying they were reviewing my case.
5. I received a refund (I had a Pro plan) in the meantime.
6. After a few days I got this funny email:
Hi there,
We're reaching out to people who recently canceled their Claude Code subscription in order to understand why you decided to cancel.
We'd like to invite you to participate in an AI-moderated interview about your experience with Claude Code—including what improvements you'd like to see us make.
This approach uses an AI interviewer to ask you questions and respond to your answers, creating a conversational experience you can complete at your convenience.
Here's what you need to know:
- The interview takes 15-20 minutes to complete
- This interview will be available until Monday October 13 at 9pm PT
- For completing the interview, you'll receive a $40 USD (or local equivalent) Amazon gift card within 3-5 business days
- Please complete only one interview per person
- As much as possible, help us know you're not a bot by showing your beautiful human face!
- Your survey may terminate early if you record illegible video content (ex: overly loud environments, aren't well lighted, etc)

Participate Now
This interview is administered by a third party, Listen Labs. By participating in the interview, you agree to Listen Labs' Privacy Policy. Anthropic may use your responses to improve our services and follow up.
Your honest feedback—whether your experience was positive, challenging, or mixed—is invaluable in helping us understand how to make Claude Code work better for developers like you.
Thank you for your time and insights!
–The Anthropic Team
7. Wait, you banned me and now you’re sending me this email? Seriously? Okay, I decided to participate in the survey. Unfortunately, when I selected the option that it was due to a bug or some issue, they ended the survey. No gift card.
8. A few days later, I received an email saying they couldn’t reinstate my account because I had violated their usage policy. How? No idea.
9. After a few more days, I got an email saying they had reinstated my account. They also mentioned they believed it was a bug.
It was crazy ¯\_(ツ)_/¯
I'm not sure I understand the jab here at capitalism. If you don't want to pay that, then don't.
Isn't that the point of capitalism?
Even filled in the appeal form, never got anything back.
Still to this day don't know why I was banned, have never been able to use any Claude stuff. It's a big reason I'm a fan of local LLMs. They'll never be SOTA level, but at least they'll keep chugging along.
I’ve experimented, and I like them when I’m on an airplane or away from wifi, but they don’t work anywhere near as well as Claude code, Codex CLI, or Gemini CLI.
Then again, I haven’t found a workable CLI with tool and MCP support that I could use in the same way.
Edit: I was also trying local models I could run on my own MacBook Air. Those are a lot more limited than something like a larger Llama3 in some cloud provider. I hadn’t done that yet.
… right?
But Claude Code (the app) will work with a self-hosted open source model and a compatible gateway. I'd just move to doing that.
I'd agree with you that if you rely on an LLM to do your work, you better be running that thing yourself.
Pointing out whether someone can do something is the lowest form of discourse, as it's usually just tautological. "The shop owner decides who can be in the shop because they own it."
"I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express."