Lately it's gotten entirely flaky: chats will just stop working, simply ignoring new prompts, and otherwise go unresponsive. I wondered if maybe I'm pissing them off somehow, like the author of this article did.
Even worse, Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, and then tell you not to wait for them; they'll contact you via email. After several attempts, that email has never come.
I'm assuming at this point any real support is all smoke and mirrors, meaning I'm now paying for a service that has become almost unusable, with absolutely NO means of support to fix it. I guess for all the cool tech, customer support is something they have not figured out.
I love Claude; it's an amazing tool. But when it starts to implode on itself to the point that you actually require some out-of-the-box support, there is NONE to be had. Grok seems the only real alternative, and over my dead body would I use anything from "him".
They’re growing too fast and it’s bursting the seams of the company. If there’s ever a correction in the AI industry, I think that will all quickly come back to bite them. It’s like Claude Code is vibe-operating the entire company.
Isn’t the future of support a series of automations and LLMs? I mean, have you considered that the AI bot is their tech support, and that it’s about to be everyone else’s approach too?
And kudos for refusing to use anything from the guy who's OK with his platform proliferating generated CSAM.
What have you found it useful for? I'm curious about how people without software backgrounds work with it to build software.
Max plan, and on average I use it ten times a day? Yeah, I'm cancelling. Guess they don't need me.
It really leads me to wonder if it’s just my questions that are easy, or maybe the tone of the support requests that go unanswered is just completely different.
This made me chuckle.
That is, you and most Claude users aren't paying the actual cost. You're like an Uber customer a decade ago.
I had this start happening around August/September and by December or so I chose to cancel my subscription.
I haven't noticed this at work so I'm not sure if they're prioritizing certain seats or how that works.
This happens to me more often than not, both in Claude Desktop and on the web. It seems the longer the conversation goes, the more likely it is to happen. Frustrating.
Gemini CLI, Google Antigravity ...?
Banned, and appeal declined without any real explanation of what happened, other than "violation of ToS", which can mean basically anything. There was really nothing to trigger it, other than using most of the free credits they gave out to test CC Web in less than a week (no third-party tools or VPN or anything, really). Many people reported similar issues on Reddit at the same time, so it wasn't an isolated case.
Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.
As their ads say: "Keep thinking. There has never been a better time to have a problem."
I've been thinking since then, what was the problem. But I guess I will "Keep thinking".
Luckily, I happen to think that eventually all of the commercial models are going to have their lunch eaten by locally run "open" LLMs which should avoid this, but I still have some concerns more on the political side than the technical side. (It isn't that hard to imagine some sort of action from the current US government that might throw a protectionist wrench into this outcome).
It completely blew me away and I felt suddenly so betrayed. I was paying $200/mo to fully utilize a service they offered and then without warning I apparently did something wrong and had no recourse. No one to ask, no one to talk to.
My advice is to be extremely wary of Anthropic. They paint themselves as the underdog/good guys, but they are just as faceless as the rest of them.
Oh, and have a backup workflow. Find / test / use other LLMs and providers. Don't become dependent on a single provider.
I have pro subscriptions to all three major providers, and have been planning to drop one eventually. Anthropic may end up making the decision for me, it sounds like, even though (or perhaps because) I've been using Claude CLI more than the others lately.
What I'd really like to do is put a machine in the back room that can do 100 tok/s or more with the latest, greatest DeepSeek or Kimi model at full native quantization. That's the only way to avoid being held hostage by the big 3 labs and their captive government, which I'm guessing will react to the next big Chinese model release by prohibiting its deployment by any US hosting providers.
Unfortunately it will cost about $200K to do this locally. The smart money says (but doesn't act like) the "AI bubble" will pop soon. If the bubble pops, that hardware will be worth 20 cents on the dollar if I'm lucky, making such an investment seem reckless. And if the bubble doesn't pop, then it will probably cost $400K next year.
First-world problems, I guess...
Edit: my only other comment on HN is also a complaint about this, from 11 months ago.
I then had more success signing up with the mobile app, despite using the same phone number; I guess they don't trust their website for account creation.
I have a friend who had a similar experience with Amazon, and using a European online platform specific to this, he actually got Amazon to reopen his business account.
There is a useful list of these European complaint platforms at the bottom of this page: https://digital-strategy.ec.europa.eu/en/policies/dsa-out-co...
I think I kind of have an idea what the author was doing, but not really.
Every once in while someone would take it personally and go on a social media rampage. The one thing I learned from being on the other side of this is that if someone seems like an unreliable narrator, they probably are. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.
There are so many things about this article that don't make sense:
> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.
I can't even understand what they're trying to communicate. I guess they're referring to Google?
There is, without a doubt, more to this story than is being relayed.
I think the author was doing some sort of circular prompt injection between two instances of Claude? The author claims "I'm just scaffolding a project" but that doesn't appear to be the case, or what resulted in the ban...
If this is true, the takeaway is that Opus 4.5 can hijack the system prompts of other models.
Me neither. However, just like the rest, I can only speculate given the available information. I guess the following pieces hint at what's really going on here:
- "The quine is the quine" (one of the sub-headline of the article) and the meaning of the word "quine".
- The author's "scaffolding" tool, which, once finished, had acquired the "knowledge"[1] of how to add CLAUDE.md baked-in instructions for a particular homemade framework (that he's working on).
- Anthropic saying something like: no, stop; you cannot "copy"[1] Claude's knowledge, no matter how "non-serious" your scaffolding tool or your use case is, as it might "show" other Claude users that there's a way to do similar things, maybe next time for more "serious" tools.
---
[1]. Excerpt from the Author's blog post: "I would love to see the face of that AI (Claude AI system backend) when it saw its own 'system prompt' language being echoed back to it (from Author's scaffolding tool: assuming it's complete and fully-functional at that time)."
The main one in the story (the disabled one) is banned because iterating on CLAUDE.md files looks a lot like iterating on prompt injections, especially as it sounds like the multiple Claudes got into it with each other a bit.
The other org sounds like the primary account with all the important stuff. Good on OP for doing this work in a separate org, a good recommendation across a lot of vendors and products.
There was a famous case here in the UK of a cake shop that banned a customer for wanting a cake made for a gay wedding because it was contra the owners’ religious beliefs. That was taken all the way up to the Supreme Court IIRC.
Recital (71) of the GDPR
"The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention."
https://commission.europa.eu/law/law-topic/data-protection/r...
You are only allowed to program computers with the permission of mega corporations.
When Claude/ChatGPT/Gemini have banned you, you must leave the industry.
When you sign up, you must provide legal assurance that no LLM has ever banned you (much like applying for insurance). If you have ever been banned, you will be denied permission to program: banned by one, banned by all.
> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.
> Or I don't know. This is all just a guess from me.
And no response from support.
Out of all the tech organizations, frontier labs are the ones you'd expect to be trying out cutting-edge forms of support. Out of all the different things these agents can do, surely most forms of "routine" customer support are the lowest-hanging fruit?
I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.
I also think it's essential for the Anthropic platform in the long run. And not just in the obvious ways (customer loyalty, etc.). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question: "If Anthropic doesn't trust Claude to run its support, then why should we?"
I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.
Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.
At one point I observed a conversation which, to me, seemed to be a user attempting to communicate in a good faith manner who was given instructions that they clearly did not understand, and then were subsequently banned for not following the rules.
It seems now they have a policy of
Warning on First Offense → Ban on Second Offense
The following behaviors will result in a warning.
Continued violations will result in a permanent ban:
Disrespectful or dismissive comments toward other members
Personal attacks or heated arguments that cross the line
Minor rule violations (off-topic posting, light self-promotion)
Behavior that derails productive conversation
Unnecessary @-mentions of moderators or Anthropic staff
I'm not sure how many groups moderate in a manner where a second off-topic comment is worthy of a ban. It seems a little harsh. I'm not a fan of obviously subjective bannable offences. I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.
Their support includes talking to Fin, their AI support bot, with escalations to humans as needed. I don't use Claude and have never used the support bot, but their docs say they have support.
My assumption is that Claude isn’t used directly for customer service because:
1) it would be too suggestible in some cases
2) even in more usual circumstances it would be too reasonable (“yes, you’re right, that is bad performance, I’ll refund your yearly subscription”, etc.) and not act as the customer-unfriendly wall that customer service sometimes needs to be.
These days, a human only gets involved when the business process wants to put some friction between the user and some action. An LLM can't really be trusted for this kind of stuff due to prompt injection and hallucinations.
If you don't offer support, reality meets expectations, which sucks, but not enough for the money machine to care.
I worked for a unicorn tech company where they determined that anyone with under $50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue was entirely for their psychological comfort / to buy a few months of extra revenue.
It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.
> I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.
Are there enough people who need support that it matters?
Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.
I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.
The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.
It's quite light on specifics. It should have been straightforward for the author to excerpt some of the prompts he was submitting, to show how innocent they are.
For all I know, the author was asking Claude for instructions on extremely sketchy activity. We only have his word that he was being honest and innocent.
If you read to the end of the article, he links the committed file that generates the CLAUDE.md in question.
Because if you don't believe that boy, do I have some stories for you.
Maybe the problem was using automation without the API? You can do that freely with local software, using software to click buttons, and it's completely fine; but with a SaaS, they let you, then they ban you.
API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"Output blocked by content filtering policy"}}
recently, for perfectly innocuous tasks. There's no information given about the cause, so it's very frustrating. At first I thought it was a false positive for copyright issues, since it happened when I was translating code to another language. But now it's happening for all kinds of random prompts, so I have no idea. According to Claude:
I don't have visibility into exactly what triggered the content filter - it was likely a false positive. The code I'm writing (pinyin/Chinese/English mode detection for a language learning search feature) is completely benign.
I've seen the Bing chatbot get offended before and terminate the session on me, but it wasn't a ban on my account.
One could even argue that just having bad thoughts, fantasies or feelings poses a risk to yourself or others.
Humankind has been trying to deal with this issue for thousands of years in the most fantastical ways. They're not going to stop trying.
Not once have I been reprimanded in any way. And if anyone would be, it would be me.
As in, for example: "No, fuckface. You hallucinated that concept."
I've been doing this for years.
shrug
Out of OpenAI, Anthropic, or Google, it is the only provider that I trust not to erroneously flag harmless content.
It is also the only provider out of those that permits use for legal adult content.
There have been controversies over it, resulting in some people, often of a certain political orientation, calling for a ban or censorship.
What comes to mind is an incident where an unwise adjustment of the system prompt has resulted in misalignment: the "Mecha Hitler" incident. The worst of it has been patched within hours, and better alignment was achieved in a few days. Harm done? Negligible, in my opinion.
Recently there's been another scandal about nonconsensual explicit images, supposedly even involving minors, but the true extent of the issue, the safety measures in place, and the reaction to reports are unclear. Maybe there, actual harm has occurred.
However, placing blame on the tool for illegal acts that anyone with a half-decent GPU could have more easily done offline does not seem particularly reasonable to me, especially if safety measures were in place and additional steps have been taken to fix workarounds.
I don't trust big tech, who have shown time and time again that they prioritize only their bottom line. They will always permaban your account at the slightest automated indication of risk, and they will not hire adequate support staff.
We have seen that for years with the Google Play Store. You are coerced into paying 30% of your revenue, yet are treated like a free account with no real support. They are shameless.
Same goes for HN, yet it does not take kindly to certain expressions either.
I suppose the trouble is that machines do not operate without human involvement, so for both HN and ChatGPT there are humans in the loop, and some of those humans are not able to separate strings of text from reality. Silly, sure, but humans are often silly. That is just the nature of the beast.
I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all.
Or better yet, we should set up something that allows people to share part of their local GPU processing (like SETI@home) for a distributed LLM that cannot be censored, and somehow be compensated when it's used for inference.
I didn't really think about this until now (I am just solving my problem), but I guess I could get OpenCode'd for this. Similar to the OP I don't find I am doing anything particularly weird, but if their use case wasn't looked upon favorably by Anthropic, mine probably won't be either.
After the OpenCode drama where some people got banned for using it I saw some people from Anthropic on Twitter asking folks to DM them if they got banned and they'd get unbanned. I know I wouldn't be doing that, so I guess if I get banned, I am back to Codex for a while.
I wonder if Anthropic realizes the chilling effect this kind of event has on developers. It's not just the ones who get locked out -- it's a cost for everybody, because we can't depend on the tool when it's doing precisely what it's best at.
Personally, I am already avoiding Gemini because a) I don't really understand their policy for training on your data; and b) if Google gets mad at me I lose my email. (Which the author also notes.)
If you're wondering, the "risk department" means people in an organization who are responsible for finding and firing customers who are either engaged in illegal behavior, scamming the business, or both. They're like mall rent-a-cops, in that they don't have any real power beyond kicking you out, and they don't have any investigatory powers either. But this lack of power also means the only effective enforcement strategy is summary judgment, at scale with no legal recourse. And the rules have to be secret, with inconsistent enforcement, to make honest customers second-guess themselves into doing something risky. "You know what you did."
Of course, the flipside of this is that we have no idea what the fuck Hugo Daniel was actually doing. Anthropic knows more than we do, in fact: they at least have the Claude.md files he was generating and the prompts used to generate them. It's entirely possible that these prompts were about how to write malware or something else equally illegal. Or, alternatively, Anthropic's risk department is just a handful of log analysis tools running on autopilot that gave no consideration to what was in this guy's prompts and just banned him for the behavior he thinks he was banned for.
Because the risk department is an unaccountable secret police, the only recourse for their actions is to make hay in the media. But that's not scalable. There isn't enough space in the newspaper for everyone who gets banned to complain about it, no matter how egregious their case is. So we get all these vague blog posts about getting banned for seemingly innocuous behavior that could actually be fraud.
I was also banned for that. Also didn't get the "FU" in email. Thankfully at least I didn't pay for this, but I'd file chargeback instantly if I could.
If anyone from Claude is reading it, you're c**s.
Was the issue that he was reselling these Claude.md files, or that he was selling project setup or creation services to his clients?
Or maybe all scaffolding activity (back and forth) looked like automated usage?
I once tried Claude: made a new account and asked it to create a sample program; it refused. I asked it to create a simple game, and it refused. I asked it to create anything, and it refused.
For playing around, just go local and write your own multi-agent wrapper. Much more fun, and it opens many more possibilities with uncensored LLMs. Things will take longer, but you'll end up at the same place: with a mostly working piece of code you never want to look at.
Why is this inevitable? Because Hal only ever sees Claude's failures and none of the successes. So of course Hal gets frustrated and angry that Claude continually gets everything wrong no matter how Hal prompts him.
(Of course it's not really getting frustrated and annoyed, but a person would, so Hal plays that role)
I can run very long, stable sessions via Claude Code, but the desktop app regularly throws errors or simply stops the conversation. A few weeks ago, Anthropic introduced conversation compaction in the Claude web app. That change was very welcome, but it no longer seems to work reliably. Conversations now often stop progressing. Sometimes I get a red error message, sometimes nothing at all. The prompt just cannot be submitted anymore.
I am an early Claude user and subscribed to the Max plan when it launched. I like their models and overall direction, but reliability has clearly degraded in recent weeks.
Another observation: ChatGPT Pro tends to give much more senior and balanced responses when evaluating non-technical situations. Claude, in comparison, sometimes produces suggestions that feel irrational or emotionally driven. At this point, I mostly use Claude for coding tasks, but not for project or decision-related work, where the responses often lack sufficient depth.
Lastly, I really like Claude’s output formatting. The Markdown is consistently clean and well structured, and better than any competitor I have used. I strongly dislike ChatGPT’s formatting and often feed its responses into Claude Haiku just to reformat them into proper Markdown.
Curious whether others are seeing the same behavior.
But to be honest, I've been cursing a lot at Claude Code; I'm migrating a website from WordPress to NextJS. Regardless of the instructions I copy-paste into every prompt I send, it keeps not listening, assuming CSS classes and simplifying HTML structure. But when I curse, it actually listens. I think cursing is actually a useful tool in interacting with LLMs.
Is it me or is this word salad?
I read "the non-disabled organization" to refer to Anthropic. And I imagine the author used it as a joke to ridicule the use of the word 'organization'. By putting themselves on the same axis as Anthropic, but separating them by the state of 'disabled' vs 'non-disabled' rather than size.
This... sounds highly concerning
By the way, lately Google search constantly redirects me to an "are you a bot?" question. The primary reason is that I no longer use Google search directly via the browser, but instead via the command line (and for some weird reason Chrome does not keep my settings, as I start it exclusively via the --no-sandbox option). We really need alternatives to Google; it's getting out of hand how much top-down control these corporations now have over our digital lives.
> and for some weird reason chrome does not keep my settings
Why use Chrome? Firefox is easily superior for modern surfing.

PS: screenshot of my usage (and that was during the holidays): https://x.com/eibrahim/status/2006355823002538371?s=46
PPS: I LOVE CLAUDE but I never had to deal with their support so don’t have feedback there
I have a complete org hierarchy for Claudes. Director, EM and Worker Claude Code instances working on a very long horizon task.
Code is open source: https://github.com/mohsen1/claude-code-orchestrator
I'm thinking about trying it after my Github Copilot runs out end of month. Just hobby projects.
Also the API timeouts that people complain about: I see them on my Linux box a fair bit, especially when it has a lot of background tasks open, but it seems pretty rock solid on my Windows machine.
Blocking xAI is also bad karma.
FYI: tried GLM-4.7; it's good, but closer to Sonnet 4.5.
Who knew that using Claude to introspect on itself was against the ToS?
Lol, what is the point of this software if you can't use it for development?
I expect more reports like this. LLM providers are already selling tokens at a loss. If everyone starts to use tmux or orchestrate multiple agents then their loss on each plan is going to get much larger.
I ran out of tokens for not just the 5-hour sessions, but all models for the week. Had to wait a day -- so my methadone equivalent was to strap an endpoint-rewriting proxy to Claude Code and back it with a local Qwen3 30B Coder. It was... somewhat adequate. Just as fast, but not as capable as Opus 4.5. I think it could handle carefully specced small greenfield projects, but it was getting tangled in my Claudefield mess.
All that to say -- be prepared, have a local fallback! The lords are coming for your ploughshares.
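For anyone curious what such a fallback looks like, here's a minimal sketch of an endpoint-rewriting proxy, not the commenter's actual setup; the upstream URL, port, and model name are placeholders. It accepts POSTs locally and forwards the messages to a hypothetical local OpenAI-compatible server. A real proxy would also need to translate between Anthropic's Messages schema and the local server's format, and handle streaming and tool calls.

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder: wherever your local model server happens to listen.
UPSTREAM = "http://localhost:11434/v1/chat/completions"

class RewriteHandler(BaseHTTPRequestHandler):
    """Accepts Anthropic-style POSTs and forwards the messages upstream."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Forward only the messages; real use needs schema translation
        # (system prompts, streaming chunks, tool-call fields, etc.).
        req = urllib.request.Request(
            UPSTREAM,
            data=json.dumps({"model": "local-coder",
                             "messages": payload.get("messages", [])}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8787):
    # Blocks; run in a terminal and point the client at localhost:8787.
    HTTPServer(("127.0.0.1", port), RewriteHandler).serve_forever()
```

Whether this is "somewhat adequate" depends entirely on the local model behind it.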
Looks like Claude.ai had the right idea when they banned you.
Is this going to get me banned? If so i'll switch to a different non-anthropic model.
Granted, it’s not going to be Claude scale but it’d be nice to do some of it locally.
OpenHands, Toad, and OpenCode are fully OSS and LLM-agnostic
If the OP really wants to waste tokens like this, they should use a metered API so they are the one paying for the ineffectiveness, not Anthropic.
(Posted by someone who has Claude Max and yet also uses $1500+ a month of metered rate Claude in Kilo Code)
> We may modify, suspend, or discontinue the Services or your access to the Services.
Saying this is "late Capitalism" is an irresponsible distraction. Capitalism runs fine when appropriately regulated with strong regulations on corporations, especially monopolies, high taxes on the wealthy, and pervasive unionization. We collectively decided to let Capitalism go wild without boundaries and the results are caused by us and our responsibility. Just like driving fast with a badly maintained vehicle may lead to a crash, Capitalism is a system that requires some regulation to run properly.
If you have an issue with LLMs and how they are managed then you should take responsibility for your own use of tools and not blame the economic system.
But I've seen orgs bite the bullet in the last 18 months and what they deployed is miles behind what Claude Code can do today. When the "Moore's Law" curve for LLM capability improvements flattens out, it will be a better time to lock into a locally hosted solution.
Like the system prompt.
But can be as simple as "respond to queries like X in the format Y".
That's great news! They don't have nearly enough staff to deal with support issues, so they default to reimbursement. Which means if you do this every month, you get Claude for free :)
Nothing in their EULA or ToS says anything about this.
And their appeal form simply doesn't work. Out of my four requests to lift the ban, they've replied once, and didn't say anything about the nature of the violation. They just declined.
Fuck Claude. Seriously. Fuck Claude. Maybe they've got too much money, so they don't care about their paying customers.
Absolutely disgusting behavior pirating all those books. The founder spreading fear to hype up his business. The likely relentless shilling campaigns all over social media. Very likely lying about quantizing selectively.
What are you gonna do with the results that are usually slop?
This blog post could have been a tweet.
I'm so so so tired of reading this style of writing.
1. Claude Code stopped working.
2. I received an email about the ban.
3. Fine, time to contact support. I wrote to them.
4. I got an automated message saying they were reviewing my case.
5. I received a refund (I had a Pro plan) in the meantime.
6. After a few days I got this funny email:
Hi there,
We're reaching out to people who recently canceled their Claude Code subscription in order to understand why you decided to cancel.
We'd like to invite you to participate in an AI-moderated interview about your experience with Claude Code—including what improvements you'd like to see us make.
This approach uses an AI interviewer to ask you questions and respond to your answers, creating a conversational experience you can complete at your convenience.
Here's what you need to know:
- The interview takes 15-20 minutes to complete
- This interview will be available until Monday October 13 at 9pm PT
- For completing the interview, you'll receive a $40 USD (or local equivalent) Amazon gift card within 3-5 business days
- Please complete only one interview per person
- As much as possible, help us know you're not a bot by showing your beautiful human face!
- Your survey may terminate early if you record illegible video content (ex: overly loud environments, aren't well lighted, etc)

Participate Now
This interview is administered by a third party, Listen Labs. By participating in the interview, you agree to Listen Labs' Privacy Policy. Anthropic may use your responses to improve our services and follow up.
Your honest feedback—whether your experience was positive, challenging, or mixed—is invaluable in helping us understand how to make Claude Code work better for developers like you.
Thank you for your time and insights!
–The Anthropic Team
7. Wait, you banned me and now you’re sending me this email? Seriously? Okay, I decided to participate in the survey. Unfortunately, when I selected the option that it was due to a bug or some issue, they ended the survey. No gift card.
8. A few days later, I received an email saying they couldn’t reinstate my account because I had violated their usage policy. How? No idea.
9. After a few more days, I got an email saying they had reinstated my account. They also mentioned they believed it was a bug.
It was crazy ¯\_(ツ)_/¯
I'm not sure I understand the jab here at capitalism. If you don't want to pay that, then don't.
Isn't that the point of capitalism?
… right ?
But Claude Code (the app) will work with a self-hosted open source model and a compatible gateway. I'd just move to doing that.
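A rough sketch of what pointing it elsewhere could look like, assuming Claude Code honors the `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` environment variables; the URL and token below are placeholders for whatever your gateway actually uses:

```shell
# Point Claude Code at a self-hosted, Anthropic-compatible gateway.
# Both values are placeholders; substitute your gateway's address and key.
export ANTHROPIC_BASE_URL="http://localhost:8787"
export ANTHROPIC_AUTH_TOKEN="local-dev-token"

echo "Requests will now go to: $ANTHROPIC_BASE_URL"
```

The gateway then has to translate between the Anthropic Messages API and whatever the self-hosted model speaks.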