Also related: https://www.ccleaks.com
The full conversation is preserved in the JSONL file, and messages
are filtered before being sent to the API.
Key mechanisms:
1. JSONL is append-only — old pre-compaction messages are never deleted. New messages (boundary
marker, summary, attachments) are appended after compaction.
2. Messages have flags controlling API visibility (see the sketch after this list):
- isCompactSummary: true — marks the AI-generated summary message
- isVisibleInTranscriptOnly: true — keeps a message in the local transcript but prevents it from being sent to the API
- isMeta — another flag that filters a message out of API calls
- getMessagesAfterCompactBoundary() returns only post-compaction messages for API calls
3. After compaction, the API sees only:
- The compact boundary marker
- The summary message
- Attachments (file refs, plan, skills)
- Any new messages after compaction
4. Three compaction types exist:
- Full compaction — API summarizes all old messages
- Session memory compaction — uses extracted session memory as summary (cheaper)
- Microcompaction — clears old tool result content when the prompt cache is cold (>1h idle)
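A minimal sketch of that visibility filtering, using the flag names above (the message shape and function signature are my assumptions, not the leaked code):

```typescript
interface Message {
  content: string;
  isCompactSummary?: boolean;          // AI-generated summary
  isVisibleInTranscriptOnly?: boolean; // kept in the local transcript, never sent to the API
  isMeta?: boolean;                    // likewise filtered out of API calls
}

// Hypothetical reconstruction of getMessagesAfterCompactBoundary()-style filtering:
function messagesForApi(all: Message[], boundaryIndex: number): Message[] {
  return all
    .slice(boundaryIndex) // the JSONL keeps everything; the API only sees post-boundary messages
    .filter(m => !m.isVisibleInTranscriptOnly && !m.isMeta);
}
```

The logic is: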
1. Anthropic's API has a server-side prompt cache with a 1-hour TTL
2. When you're actively using a session, each API call reuses the cached prefix — you only pay
for new tokens
3. After 1 hour idle, that cache is guaranteed expired
4. Your next message will re-send and re-process the entire conversation from scratch — every
token, full price
5. So if you have 150K tokens of old Grep/Read/Bash outputs sitting in the conversation, you're
paying to re-ingest all of that even though it's stale context the model probably doesn't need
The microcompact says: "since we're paying full price anyway, let's shrink the bill by clearing
the bulky stuff."
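To put rough numbers on point 5 — prices here are illustrative (Anthropic bills prompt cache reads at roughly 10% of the base input rate, and actual rates vary by model):

```typescript
const staleTokens = 150_000;      // old tool outputs sitting in the conversation
const baseInputPerMTok = 3.0;     // assumed ~$3/MTok base input price (Sonnet-class)
const cacheReadPerMTok = 0.3;     // ~10% of base while the prefix is still cached

const warmCost = (staleTokens / 1e6) * cacheReadPerMTok; // ≈ $0.045 per request
const coldCost = (staleTokens / 1e6) * baseInputPerMTok; // ≈ $0.45 per request

console.log({ warmCost, coldCost }); // a cold cache costs ~10x more per message
```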
What's preserved vs lost:
- The tool_use blocks (what tool was called, with what arguments) — kept
- The tool_result content (the actual output) — replaced with [Old tool result content cleared]
- The most recent 5 tool results — kept
So Claude can still see "I ran Grep for foo in src/" but not the 500-line grep output from 2
hours ago.
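A sketch of what that clearing could look like over a simplified message shape (the real implementation differs; only the placeholder string and the keep-last-5 rule come from the leak):

```typescript
type Block =
  | { type: "tool_use"; name: string; input: unknown }
  | { type: "tool_result"; content: string };

const KEEP_RECENT_RESULTS = 5;

function microcompact(blocks: Block[]): Block[] {
  // Find every tool_result; only the most recent five keep their content.
  const resultIdxs = blocks.flatMap((b, i) => (b.type === "tool_result" ? [i] : []));
  const keep = new Set(resultIdxs.slice(-KEEP_RECENT_RESULTS));
  return blocks.map((b, i) =>
    b.type === "tool_result" && !keep.has(i)
      ? { ...b, content: "[Old tool result content cleared]" } // tool_use blocks stay intact
      : b,
  );
}
```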
Does it affect quality? Yes, somewhat — but the tradeoff is that without it, you're paying
potentially tens of thousands of tokens to re-ingest stale tool outputs that the model already
acted on. And remember, if the conversation is long enough, full compaction would have summarized
those messages anyway.
And critically: this is disabled by default (enabled: false in timeBasedMCConfig.ts:31). It's
behind a GrowthBook feature flag that Anthropic controls server-side. So unless they've flipped
it on for your account, it's not happening to you.
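For illustration, a server-controlled kill switch like that might be wired up as below — the flag key and config fields are hypothetical; only `enabled: false` and the use of GrowthBook come from the leak:

```typescript
import { GrowthBook } from "@growthbook/growthbook";

// timeBasedMCConfig.ts-style default (field names assumed)
export const timeBasedMCConfig = {
  enabled: false,                   // shipped default: off
  idleThresholdMs: 60 * 60 * 1000,  // 1 hour, matching the cache TTL
};

export function microcompactEnabled(gb: GrowthBook): boolean {
  // The server-side flag (key hypothetical) can flip it on per account.
  return gb.isOn("time-based-microcompact") || timeBasedMCConfig.enabled;
}
```

NEVER include in commit messages or PR descriptions: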
- The phrase "Claude Code" or any mention that you are an AI
- Co-Authored-By lines or any other attribution
BAD (never write these):
- 1-shotted by claude-opus-4-6
- Generated with Claude Code
- Co-Authored-By: Claude Opus 4.6 <…>
This very much sounds like it does what it says on the tin, i.e. stays undercover and pretends to be a human. It's especially worrying that the prompt is explicitly written for contributions to public repositories.
[0]: https://github.com/chatgptprojects/claude-code/blob/642c7f94...
[0] https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-5...
> which is published with the purpose of informing the public on matters of public interest
From your link, that's the only case where text needs to be attributed to AI.
https://code.claude.com/docs/en/settings#attribution-setting...
1. the LLM is instructed on how to write a commit message and never include co-authorship
2. the LLM is asked to produce a commit message
3. the LLM output is parsed by a script which removes co-authorship if the LLM chooses to include it nevertheless

"includeCoAuthoredBy": false,
in your settings.json.
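For what it's worth, a step-3-style post-processing filter is trivial to sketch (this is illustrative, not Anthropic's actual script):

```typescript
function stripCoAuthorship(message: string): string {
  return message
    .split("\n")
    .filter(line => !/^\s*(Co-Authored-By:|🤖 Generated with)/i.test(line))
    .join("\n")
    .trimEnd();
}

// stripCoAuthorship("Fix foo\n\nCo-Authored-By: Claude <noreply@anthropic.com>")
// => "Fix foo"
```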
Unless you literally vibe coded it, Claude is just a tool. This is the equivalent of Apple appending "Sent from my iPhone" as a signature to outgoing emails. It's advertising tool use, not providing attribution. The intent isn't to disclose that AI was used in creating the code, the intent is to advertise the AI product.
A quick PR where I've found the bug myself in the code, and ask Claude to write the fix because it's faster, and verified it - I don't include Claude's co-authorship.
* The authors made the code very broad to improve its ability to achieve the stated goal
* The authors have an unstated goal
I think it's healthy to be skeptical but what I'm seeing is that the skeptics are pushing the boundaries of what's actually in the source. For example, you say "says on the tin" that it "pretends to be human" but it simply does not say that on the tin. It does say "Write commit messages as a human developer would" which is not the same thing as "Try to trick people into believing you're human." To convince people of your skepticism, it's best to stick to the facts.
~/.claude/settings.json
{
  "attribution": {
    "commit": "",
    "pr": ""
  }
}
The rest of the prompt is pretty clear that it's talking about internal use.
Claude Code users aren't the ones worried about leaking "internal model codenames" nor "unreleased model opus-4-8" nor Slack channel names. Though, nobody would want that crap in their generated docs/code anyways.
Seems like a nothingburger, and everyone seems to be fantasizing about "undercover mode" rather than engaging with the details.
For a company calling Chinese companies out for distillation attacks on their models, this very much looks like a distillation attack against human maintainers, especially when combined with the frustration detector.
Needing to flag nontrivial code as generated was standard practice for my whole career.
If this is not the case you should not be sending it to public repos for review at all. It is rude and insulting to expect the people maintaining these repos to review code that nobody bothered to read.
This is not at all the case with LLM-generated code - mostly because you can't regenerate it even if you wanted to, as it's not deterministic.
That said, I do agree that LLM code is different enough from human code (even just in regards to potential copyright worries) that it should be mentioned that LLMs were used to create it.
How about a compiler?
Replace the gRPC compiler with an LLM. Can you reproduce the output? (Probably not 100%.) Can anybody fix it, short of throwing more English phrases at it like "DO NOT", "NEVER", "Under no circumstances"?
Probably not.
I thought the argument was that AI-users were reviewing and understanding all of the code?
At least at my workplace though, it's just assumed now that you are using the tools.
I don't understand why people consider Claude-generated code to be their own. You authored the prompts, not the code. Somehow this was never a problem with pre-LLM codegen tools, like macro expanders, IPC glue, or type bundle generators. I don't recall anybody desperately removing the "auto-generated do not edit" comments those tools would nearly always slap at the top of each file or taking offense when someone called that code auto-generated. Back in the day we even used to publish the "real" human-written source for those, along with build scripts!
Ideally, if I contribute to any codebase, what needs to be judged is the resulting code. Is it up to the project's standards? Does the maintainer have design objections?
What tool you use shouldn't matter, be it your IDE or your LLM.
But that also means you should be accountable for it; you shouldn't hide behind "But Claude did this poorly, not me!", I don't care (in a friendly way), just fix the code if you want to contribute.
The big caveat to this is not wanting AI-Generated code for ideological reasons, and well, if you want that you can make your contributors swear they wrote it by themselves in the PR text or whatever.
I'm not really sure how to feel about this, but I stand by my "the code is what matters" line.
Since AI tools are constantly obsoleted, generate different output each run, and it is often impossible to run them locally, the input prompts are somewhat useless for everyone but the initial user.
I don't mean this as a drive-by bazinga either; the practice of copying code, or thinking you understand it when you don't, is nothing new.
I regularly have tool-generated commits. I send them out with a reference to the tool, what the process is, how much it's been reviewed and what the expectation is of the reviewer.
Otherwise, they all assume "human authored" and "human sponsored". Reviewers will then send comments (instead of proposing the fix themselves). When you're wrangling several hundred changes, that becomes unworkable.
Absolutely. That would be hilarious.
Assisted-by: Claude:claude-3-opus coccinelle sparse
If your linter is able to action requests, then it probably makes sense to add too.
This is also useful for keeping your prompts commit-sized, which in my experience gives much better results than just letting it spin or attempting to one-shot large features.
The point isn't to hijack accountability. It's free publicity, like how Apple adds "Sent from my iPhone."
Kinda, yeah. If I automatically apply lint suggestions, I would title my commit "apply lint suggestions".
co-authoring doesn't hide your authorship
if I see someone committing a blatantly wrong code, I would wonder what tool they actually used
In some jurisdictions (e.g. the UK) the law is already clear that you own the copyright. In the US it is almost certain that you will be the author. The reports of cases saying otherwise have been misreported - the courts found the AI could not own the copyright.
It becomes legally challenging with regards to ownership if I ever use work equipment for a personal project. If it later takes off they could very well try to claim ownership in its entirety simply because I ran a test once (yes, there's a whole Silicon Valley season about it).
I don't know if they'd win, but Anthropic absolutely would be able to claim the creation of that code was done on their hardware. Obviously we aren't employees of theirs, though we are customers that very likely never read what we agreed to in a signup flow.
The pet you get is generated based off your account UUID, but the algorithm is right there in the source, and it's deterministic, so you can check ahead of time. Threw together a little app to help, not to brag but I got a legendary ghost https://claudebuddychecker.netlify.app/
(I didn't think to include a UUID checker though - nice touch)
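For the curious, deterministic generation from an account UUID is easy to picture — a sketch under assumed pet tables and hash (the leaked algorithm's actual choices differ):

```typescript
import { createHash } from "node:crypto";

const PETS = ["cat", "dog", "ghost", "capybara"];             // illustrative
const RARITIES = ["common", "uncommon", "rare", "legendary"]; // illustrative

function petForAccount(uuid: string) {
  // Same UUID in, same pet out — so you can check yours ahead of time.
  const digest = createHash("sha256").update(uuid).digest();
  return {
    pet: PETS[digest[0] % PETS.length],
    rarity: RARITIES[digest[1] % RARITIES.length],
  };
}
```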
Plot twist: Chinese competitors end up developing real, useful versions of Claude's fake tools.
Interesting!
(I know you know this, since you submitted it! but others might want to know)
thank you so much for having built and shared this
No shit they have secrets. I have secrets too. That doesn't make it ok for me to deceive you in any way.
How would you feel if I deceived you and my excuse was "oh I was just trying some new secret technique of mine"?
How did we get to this point where we let enormously powerful companies get away with more than individuals?
"Write commit messages as a human developer would — describe only what the code change does."
VS Code has a setting that promises to change the prompt it uses to generate commit messages, but it mostly ignores my instructions, even very literal ones like “don’t use the words ‘enhance’ or ‘improve’”. And oddly having it set can sometimes result in Cyrillic characters showing up at the end of the message.
Ultimately I stopped using it, because editing the messages cost me more time than it saved.
/rant
[edit] Never mind, find in page fail on my end.
> Commit f9205ab3 by dkenyser on 2026-3-31 at 16:05:
> Fixed the foobar bug by adding a baz flag - dkenyser
Because it already identified you in the commit description. The reason to add a signature to the message is that someone (or something) that isn't you is using your account, which seems like a bad idea.
- "Fix bug found while testing with Claude Capybara"
- "1-shotted by claude-opus-4-6"
- "Generated with Claude Code"
- "Co-Authored-By: Claude Opus 4.6 <…>"
This makes sense to me as an explanation of their intent with "UNDERCOVER".
It did not have a copy of the leaked code...
Anthropic thinking 1) they can unring this bell, and 2) removing forks from people who have contributed (well, what little you can contribute to their repo), is ridiculous.
---
DMCA: https://github.com/github/dmca/blob/master/2026/03/2026-03-3...
GitHub's note at the top says: "Note: Because the reported network that contained the allegedly infringing content was larger than one hundred (100) repositories, and the submitter alleged that all or most of the forks were infringing to the same extent as the parent repository, GitHub processed the takedown notice against the entire network of 8.1K repositories, inclusive of the parent repository."
They constantly love to talk about Claude Code being "100%" vibe coded... and the US legal system is leaning towards that not being copyrightable.
It could still be a trade secret, but that doesn't fall under a DMCA take down.
Anthropic really needs to embrace it
On that note, this article is also pretty obviously AI-generated and it's unfortunate the author didn't clean it up.
Edit: Everyone is responding "comments are good" and I can't tell if any of you actually read TFA or not
> “BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”
This is just revealing operational details the agent doesn't need to know to set `MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3`
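The constant implies a guard roughly like this (a sketch; the surrounding retry loop is assumed):

```typescript
const MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3;

let consecutiveFailures = 0;

async function tryAutoCompact(compact: () => Promise<void>): Promise<void> {
  if (consecutiveFailures >= MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES) {
    return; // give up instead of burning API calls in an endless retry loop
  }
  try {
    await compact();
    consecutiveFailures = 0; // success resets the counter
  } catch {
    consecutiveFailures++;
  }
}
```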
Why? Agents may or may not read docs. They may or may not use skills or tools. They will always read comments "in the line of sight" of the task.
You get free long term agent memory with zero infrastructure.
Only being half ironic with this. I generally find that people somehow magically manage to understand how to be materially helpful when the subject is a helpless LLM. Instead of pointing it to a random KB page, they give it context. They then shorten that context. They then interleave context as comments. They provide relevant details. They go out of their way to collect relevant details. Things they somehow don't do for their actual colleagues.
This only gets worse when the LLM captures all that information better than certain human colleagues somehow, rewarding the additional effort.
"Self-descriptive code doesn't need comments!" always gets an eye-roll from me
It's also annoying to have to go through this stack:
code -> blame -> commit message -> jira ticket -> issue in sales force...
Or the even better "fixes bug NNNNN" where the bug tracking system referenced no longer exists.
Digging through other systems (if they exist) to find the nugget in an artifact is a problem for humans too.
You don’t have to rely on humans doing it. The agent’s entire existence is built around doing this one mundane task that is annoying but super useful.
That's revealing waaaay more than the agent needs to know.
Seems to me like everyone's just grasping at straws to nitpick every insignificant little thing.
I think a big question is whether you want your agent to know the reasons for the guidelines you issue, or whether you want it to just follow them. Giving an agent the argument behind your orders might lead it to question them - and then not follow them.
Comments are ultimately there so you can understand stuff without having to read all the code. LLMs are great when you force them to read all the code, and comments only serve to confuse. I'd say the opposite has been true in my experience: if you're not forcing LLMs to write no comments at all (and to actually skip existing ones - looking at you, Gemini), you're doing agent coding wrong.
They didn't expect to leak their source code.
It's hardly a trade secret, what value is this to a competitor?
I suspect that's the logical endpoint of trying to provide everything as context to an agent. Why use a separate markdown file and have to waste extra tokens explaining what part of the codebase something applies to when you can just put it right there in the code itself?
I’d argue that in this case, it isn’t. Exhibit 1 (from the earlier thread): https://github.com/anthropics/claude-code/issues/22284. The user reports that this caused their account to be banned: https://news.ycombinator.com/item?id=47588970
Maybe it would be okay as a first filtering step, before doing actual sentiment analysis on the matches. That would at least eliminate obvious false positives (but of course still do nothing about false negatives).
So much for LangChain and LangGraph!! I mean, if Anthropic themselves aren't using them and are using a prompt, then what's the big deal about LangChain?
Langgraph is for multi-agent orchestration as state graphs. This isn't useful for Claude Code as there is no multi-agent chaining. It uses a single coordinator agent that spawns subagents on demand. Basically too dynamic to constrain to state graphs.
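A rough sketch of the coordinator/subagent shape described above (all names invented for illustration):

```typescript
interface Subagent {
  run(task: string): Promise<string>;
}

class Coordinator {
  constructor(private spawn: (role: string) => Subagent) {}

  async handle(task: string): Promise<string> {
    // Subagents are spawned on demand, not wired into a fixed state graph.
    const explorer = this.spawn("explore");
    const notes = await explorer.run(`gather context for: ${task}`);
    const coder = this.spawn("edit");
    return coder.run(`apply changes using:\n${notes}`);
  }
}
```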
>Is it ironic? Sure. Is it also probably faster and cheaper than running an LLM inference just to figure out if a user is swearing at the tool? Also yes. Sometimes a regex is the right tool.
I'm reading an LLM written write up on an LLM tool that just summarizes HN comments.
I'm so tired man, what the hell are we doing here.
The frustration regex is funny but honestly the right call. Running an LLM call just to detect "wtf" would be ridiculous.
KAIROS is what actually caught my attention. An always-on background agent that acts without prompting is a completely different thing from what Claude Code is today. The 15 second blocking budget tells me they actually thought through what it feels like to have something running in the background while you work, which is usually the part nobody gets right.
Hooks is an official documented feature for quite a long time now https://code.claude.com/docs/en/hooks
Tangentially, I wonder if the world trade federation or the Washington tennis foundation have any projects on GitHub :)
I love that it only supports English. AI bubble in a nutshell.
It also somehow messed up my alacritty config when I first used it. Who knows what other ~/.config files it modifies without warning.
I mostly mentioned it because it is pre-installed on some (Linux) systems. Though of course if you're trying to obfuscate the source code you need to bundle an interpreter with the code anyway.
But it has historically been used for big programs, and there are well established methods for bundling python programs into executables.
False.
.bash_profile .bashrc .claude .env .gitconfig .gitmodules .idea
.mcp.json .profile .ripgreprc .vscode .zprofile .zshrc config
https://github.com/anthropic-experimental/sandbox-runtime/is...
Me. My .config is git-versioned :)
Or am I missing sarcasm?
- Claude Chat: built like it's 1995, business logic in the button click() handler. Switch to something else in the UI and a long-running process hard stops. Very Visual Basic shovelware.
- Claude Cowork: same but now we're smarter, if you change the current convo we don't stop the underlying long-running process. 21st century FTW!
- Claude Code: like chat, but in the CLI
- Claude Dispatch: an actual mobile client app, not the whole thing bundled together.
- Daemon mode: proper long-running background process, still unreleased.
Interesting based on the other news that is out.
And actually just looking this up, it appears claude-code itself was just added to that whitelist : D
https://github.com/oven-sh/bun/commit/5c59842f78880a8b5d9c2e...
In the span of basically a week, they accidentally leaked Mythos, and now the entire codebase of CC. All while many people are complaining about their usage limits being consumed quickly.
Individually, each issue is manageable (Because its exciting looking through leaked code). But together, it starts to feel like a pattern.
At some point, I think the question becomes whether people are still comfortable trusting tools like this with their codebases, not just whether any single incident was a mistake.
The only thing I found interesting about this leak is just how much of a rat's nest the codebase is. It actually feels vibe coded, without a shred of intelligent architecture behind it.
Regardless, you can't beat the subscription and model access despite the state of the code base, so I still use Claude Code daily and love it.
Two months later it was CVE after CVE.
Power(money) lies with NVDA and people who can best harness this power.
" ...accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping.”
Are you referencing the use of Claude subscription authentication (oauth) from non-Claude Code clients?
That’s already possible, nothing prevents you from doing it.
They are detecting it on their backend by profiling your API calls, not by guarding with some secret crypto stuff.
At least that’s how things worked last week xD
https://alex000kim.com/posts/2026-03-31-claude-code-source-l...
Ah, it seems that Bun itself signs the code. I don't understand how this can't be spoofed.
They are most likely using these as after-the-fact indicators and have automation that kicks in after a threshold is reached.
Now that the indicators have leaked, they will most likely be rotated.
https://fortune.com/2026/01/29/100-percent-of-code-at-anthro...
> Right now for most products at Anthropic it's effectively 100% just Claude writing
- Mike Krieger, chief product officer of Anthropic
Not only that, they wouldn't allow other CLIs to be used either.
I wrote a short piece explaining the 3 policy implications for teams using Claude Code (or any AI coding tool) — without the technical jargon: https://www.aipolicydesk.com/blog/claude-code-leak-what-ceo-...
The short version: rotate API keys as a precaution, check what audit logs you actually have, and add a clause to your AI policy requiring vendor disclosure of new autonomous capabilities before they get enabled.
As more code gets generated by AI, won't taking source code from a company become legal? Isn't it true that works created with generative AI can't be copyrighted?
I wonder if large companies have thought of this risk. Once a company's product source code reaches a certain percentage of AI generation, it no longer has copyright. Any employee with access can just take it and sell it to someone else, legally, right?
The recent copyright rulings also need to be tested further; different judges may have different ideas of what "significant human contribution" looks like. The only thing we know for certain is that the prompt doesn't count.
My guess is that instead of enforcing via copyright, companies will use contracts & trade secret laws. Source code and algorithms counts as a trade secret, so in your example copyright doesn't even matter, the employee would be liable for stealing trade secrets.
AI generated code slowly stripping the ability of a project to enforce copyright protections though is a much bigger risk for free software.
Of course, intent is a very important legal concept here. I doubt anyone is getting away with what I described.
It’s just interesting stuff to potentially rethink.
My guess is companies will simply pretend that generated code is copyrighted, file fraudulent DMCA notices if leaks happen, and hope no one decides to challenge them in court.
I don’t get it. What does this mean? I can use Claude code now without anyone knowing it is Claude code.
it's written to _actively_ avoid any signs of AI generated code when "in a PUBLIC/OPEN-SOURCE repository".
Also, it's not about you. Undercover mode only activates for Anthropic employees (it's gated on USER_TYPE === 'ant', which is a build-time flag baked into internal builds).
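Build-time gating like that is typically done with a bundler define — a sketch, assuming something esbuild-like (only `USER_TYPE === 'ant'` comes from the leak):

```typescript
// Injected at build time, e.g. esbuild --define:USER_TYPE='"ant"' for internal builds
declare const USER_TYPE: string;

const isInternalBuild = USER_TYPE === "ant";

if (isInternalBuild) {
  // Internal-only behavior, such as the "undercover" commit-message rules,
  // gets compiled in here (and dead-code-eliminated out of public builds).
}
```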
I'm still inclined to think people might be overreacting to that bit since it seems to be for anthropic-only to prevent leaking internal info.
But I did read the prompt and it did say hide the fact that you are AI.
Being written by an LLM is a signal that the submission is of low effort and therefore probably low quality, which puts the onus on the people reviewing and reading the submission instead of on the original generator of the submission. Hence I would classify it as spam.
Open source communities also have rules against LLM generated contributions, for various moral, ethical, or legal reasons.
At this point I would consider any employee of an AI provider to be tainted.
> This was one of the first things people noticed in the HN thread.
> The obvious concern, raised repeatedly in the HN thread
> This was the most-discussed finding in the HN thread.
> Several people in the HN thread flagged this
> Some in the HN thread downplayed the leak
when the original HN post is already at the top of the front page...why do we need a separate blogpost that just summarizes the comments?
Or, more simply: Because folks wanted it enough to upvote it.
> It's basically
> Anthropic doesn't just ask
> The fix? `MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3`
> Not a push-button bypass, but
The irony in saying "this is what I found" when an AI found it, not you.
Plus there's demand for skilled TS software devs that don't ship your company's roadmap using a js.map
20,000 agents and none of them caught it...
Approximately how much would this actually save?
/\b(wtf|wth|ffs|omfg|shit(ty|tiest)?|dumbass|horrible|awful|piss(ed|ing)? off|piece of (shit|crap|junk)|what the (fuck|hell)|fucking? (broken|useless|terrible|awful|horrible)|fuck you|screw (this|you)|so frustrating|this sucks|damn it)\b/
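Wrapped in a quick harness (the `i` flag is my assumption; the leak may apply it differently):

```typescript
const FRUSTRATION =
  /\b(wtf|wth|ffs|omfg|shit(ty|tiest)?|dumbass|horrible|awful|piss(ed|ing)? off|piece of (shit|crap|junk)|what the (fuck|hell)|fucking? (broken|useless|terrible|awful|horrible)|fuck you|screw (this|you)|so frustrating|this sucks|damn it)\b/i;

console.log(FRUSTRATION.test("this is so frustrating"));   // true
console.log(FRUSTRATION.test("please fix the flaky test")); // false
```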
Personally, I'm generally polite even towards AI and even when frustrated. I simply point out the its mistakes instead of using emotional words.
So it counts how many times I was angry?
I'd discovered, perhaps mid-2025, that Cursor was noticeably better at fixing bugs if I started cursing at it. Better yet, after a while it would seem to break and start cursing itself ("Oh yes, I see the f*** problem now" and so on). Hilarity ensued.
What a world, where cursing at your machines can make them get their act together.
AGI is definitely around the corner. Or not.
They would have to lie about consuming the tokens at one point in order to use them at another, so that the token counting stayed precise.
But that doesn't make sense, because if someone counted the tokens by capturing the session, it would certainly not match what was charged.
Unless they charged for the fake tools anyway, so you'd never know they were there.
Prompts are not hard constraints—they can be interpreted, deprioritized, or reasoned around, especially as models become more capable.
From what’s visible, there’s no clear evidence of structural governance like voting systems, hard thresholds, or mandatory human escalation. That means control appears to be policy (prompts), not enforcement (code).
This raises the core issue: If governance is “prompts all the way down,” it’s not true governance—it’s guidance.
And as model capability increases, that kind of governance doesn’t get stronger—it becomes easier to bypass without structural constraints.
Has anyone actually implemented structural governance for agent swarms — voting logic, hard thresholds, REQUIRES_HUMAN as architecture not instruction?
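For what it's worth, the minimal version is not hard to sketch (all names illustrative):

```typescript
type Verdict = "APPROVE" | "REJECT" | "REQUIRES_HUMAN";

interface AgentVote {
  agent: string;
  verdict: Verdict;
  riskScore: number; // 0..1
}

function decide(votes: AgentVote[], riskThreshold = 0.7): Verdict {
  // Hard threshold enforced in code: high risk always escalates, whatever the votes say.
  if (votes.some(v => v.riskScore >= riskThreshold)) return "REQUIRES_HUMAN";
  const approvals = votes.filter(v => v.verdict === "APPROVE").length;
  // Majority vote as architecture, not instruction — a model can't "reason around" this.
  return approvals * 2 > votes.length ? "APPROVE" : "REQUIRES_HUMAN";
}
```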
Does this mean `huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled` is unusable? Had anyone seen fake tool calls working with this model?
And so now the copy cats can ofc claim this is totally not a copy at all, it's actually Opus. No license violation, no siree!
It's fucking hilarious is what it is, it's just too much.
:-D
Granted, there's a small counterargument for Mythos, which is that it's probably going to be API-only, not subscription.
They don't want you using your subscription outside of Claude Code. Only API key usage is allowed.
Google also doubled down on this and OpenAI are the only ones who explicitly allow you to do it.
To err is human. AI is trained on human content. Hence, to err is AI. The day it stops making mistakes will be the beginning of the end. That would mean the existence of a consciousness that has no weakness. Great if it’s on your side. Terrible otherwise.
Genius level AI marketing
> change the code!!!! The previous comment was NOT ABOUT THE DESCRIPTION!!!!!!! Add to the {implementation}!!!!! This IS controlled BY CODE. *YOU* _MUST_ CHANGE THE CODE!!!!!!!!!!!
Edit: it gets sent to Anthropic via telemetry and it ends up on the fuck chart!
https://old.reddit.com/r/ClaudeCode/comments/1s99wz4/boris_t...
You've got a business, and you sent me junk mail, but you made it look like some official government thing to get me to open it? I'm done, just because you lied on the envelope. I don't care how badly I need your service. There's a dozen other places that can provide it; I'll pick one of them rather than you, because you've shown yourself to be dishonest right out of the gate.
Same thing with an AI (or a business that creates an AI). You're willing to lie about who you are (or have your tool do so)? What else are you willing to lie to me about? I don't have time in my life for that. I'm out right here.
Similarly, would you consider it to be dishonest if my human colleague reviewed and made changes to my code, but I didn’t explicitly credit them?
...what we did at Snap was just wait for 8-24 hours before acting on a signal, so as not to provide an oracle to attackers. Much harder to figure out what you did that caused the system to eventually block your account if it doesn't happen in real-time.
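That delay is simple to implement and very effective — a sketch (illustrative, not Snap's code):

```typescript
interface AbuseSignal {
  account: string;
  observedAt: number; // epoch ms
}

const MIN_DELAY_MS = 8 * 3_600_000;
const MAX_DELAY_MS = 24 * 3_600_000;

function scheduleEnforcement(signal: AbuseSignal): number {
  // A random delay in [8h, 24h] decorrelates the ban from the request that
  // triggered it, denying attackers a real-time oracle.
  const delay = MIN_DELAY_MS + Math.random() * (MAX_DELAY_MS - MIN_DELAY_MS);
  return signal.observedAt + delay;
}
```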
(Snap's binary attestation is at least a decade ahead of this, fwiw)
Sans the ability to JIT, I don't see non-hardware-assisted binary attestation for Snap and others lasting very long in a post-LLM world.
1. They are loved, and for good reasons. Sonnet 4 was groundbreaking, but Opus 4.6 was for many a turning point in realizing agentic SDLC's real potential. People moved from Cursor to Claude Code in droves; they loved the CLI approach (me too), and they LOVED the subsidized $200 Max plan (what's not to love, pay $200 instead of $5000 to Cursor...). They are the underdog, the true alternative to "evil" OpenAI or "don't be evil" Google, really standing up against mass surveillance or the use of AI for autonomous killing machines. They are standing for the little guy; they are "what OpenAI should have been" (plus they have better models...). They are the Apple of the AI era.
2. They are too loved, so loved that it protects them from legitimate criticism. They make GitHub's status page look good, and they make Comcast customer service look like Amazon's (at least Comcast has customer service). They are at "if Dario shoots a customer in the middle of 5th Avenue it won't hurt their sales one bit" levels of liked. The fact they have the best models (for now) might be their Achilles' heel, because it hides other issues that might be in the blind spot. And as soon as a better model comes out from a competitor (and it could happen... if you recall, OpenAI were the undisputed kings with GPT-4o for a bit), these will become much more obvious.
3. This can hurt them in the long run. Eventually you can't sustain a business with not even two 9s of SLA, where you can't handle customer support or sales (either with humans or, worse for them, with AI - if they can't handle this with AI, how do they expect to sell their own dream where AI does everything?). I'm sure they'll figure it out; they have huge growth and these are growth pains. But if they don't catch up with demand, the demand won't stay there forever once OpenAI/Google/someone else releases a better model.
4. They inadvertently made all of the cybersecurity sector a potential enemy. Yes, all of them use Anthropic models, and probably many of them use Claude Code, but they know they might be paying the bills of their biggest competitor. Their shares drop whenever Anthropic even hints at a new model. Investors cut their valuations because they worry Anthropic will eat them for breakfast. I don't know about you, but if you ask me, having the people who live and breathe security indirectly threatened by you is not the best thing in the world, especially when your source code is out in the open for them to poke holes in...
5. The SaaS-pocalypse - many of Claude Code's customers are... SaaS companies that the same AI is "going to kill". Again, if there were another provider that showed a bit more care about the entire businesses it's going to devour, and if they also had even marginally better models... would the brand loyalty stay?
Side note: I'm a Claude Enterprise customer. I can't get a human to respond to anything, even using the special "enterprise support" channels, and I'm not the only one; I know people who can't get a salesperson, never mind support, to buy 150+ seats. (Anthropic's answer was to release self-serve enterprise onboarding, which by the way is "pay us $20, which does not include usage; usage is at market prices, same as getting an API key" - you pay for convenience and governance. P.S. you can't cancel Enterprise; it's 20 seats minimum, for 1 year, in advance, so make sure you really need it. The team plan is great for most cases, but it lacks the $200 plan, only the $100 5x plan.)