I would assume the original terms the DoW is now railing against were in those original contracts that they signed. In that case it looks like the DoW is acting in bad faith here: they signed the original contract and agreed to those terms, then went back and said no, you need to remove those safeguards, to which Anthropic is (rightly so) saying no.
Am I missing something here?
EDIT: Re-reading Dario's post[1] from this morning, I'm not missing anything. Those use cases were never part of the original contracts:
> Two such use cases have never been included in our contracts with the Department of War
So yeah, this seems pretty cut and dried. The DoW signed a contract with Anthropic and agreed to those terms. Then they decided to go back and renege on those original terms, to which Anthropic said no. Then they promptly threw a temper tantrum on social media and designated Anthropic a supply chain risk as retaliation.
My final opinion on this is that Dario and Anthropic are in the right and the DoW is acting in bad faith by trying to alter the terms of their original contracts. And this doesn't even take into consideration the moral and ethical implications.
[1]: https://www.anthropic.com/news/statement-department-of-war
The basic problem in our polity is that we've collectively transferred the guilty pleasure of aligning with a charismatic villain in fiction to doing the same in real life. The top echelons of our government are occupied by celebrities and influencers whose expertise is in performance rather than policy. For years now they've leaned into the aesthetics of being bad guys: performative cruelty, committing fictional atrocities, and so forth. Some MAGA influencers have even adopted the Imperial iconography from Star Wars as a means of differentiating themselves from liberal/democratic adoption of the 'rebel' iconography. So you have influencers like conservative entrepreneur Alex Muse who styles his online presence as an Imperial stormtrooper. As Poe's law observes, at some point the ironic/sarcastic frame becomes obsolete, and you get political proxies and members of the administration arguing for actual infringements of civil liberties, war crimes, violations of the Constitution, and so on.
> *Isn’t it unreasonable for Anthropic to suddenly set terms in their contract?* The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.
> *Doesn’t the Pentagon have a right to sign or not sign any contract they choose?* Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.
[1]: https://www.astralcodexten.com/p/the-pentagon-threatens-anth...
Imagine a _leaded_ pipe supplier not being allowed to tell the department of war they shouldn't use leaded pipes for drinking water! It's the job of the vendor to tell the customer appropriate usage.
Look, Anthropic is not going to be designated a supply chain risk. 80% of the Fortune 500 have contracts with them. Probably a similar percentage of defense contractors. Amazon is a defense contractor for example. They'd have to remove Claude from their AWS offerings. Everyone running Claude on AWS, boom gone. The level of disruption to the US economy would be off the charts, and for what? Why? Because Hegseth had a bad day? Because he's a sore loser?
If he's decided he doesn't like the DoW's contract then he can cancel it, fine. To try and exact revenge on the best American frontier model along with 80% of the Fortune 500 in the process, to go out of his way to harm hundreds or perhaps thousands of American firms, defies all reason. This is behavior you would expect any adult would understand as petty and foolish, let alone one who's made it to the highest ranks of government.
So I think it's just not going to happen; Trump's statement on the matter notably didn't mention a supply chain risk designation. This suggests to me that Hegseth went off half-cocked. The guy is a liability for Trump at this point, and I'm guessing he won't last much longer.
So one thing to call out here is that the assumption that the DoW is working on specifically these use cases is not bulletproof. They simply may not want to share with Anthropic exactly what they are working on for natsec reasons. /We can't tell you/ could itself violate the terms.
It is also dumb that DoW accepted these terms in the first place.
[1] "only an act of Congress can formally change the name of a federal department." https://en.wikipedia.org/wiki/Executive_Order_14347
(edited to add the url I omitted)
They will just have to recompete!
Trump has historically stiffed his contractors. Why do you think his administration would be any different with adhering to a contract?
No doubt the US Gov't will be using AI to perform automated military strikes without human supervision, and to spy on US citizens (which they have already been doing for decades now).
Look no further than the case of patriot Mark Klein, a former AT&T technician who exposed a massive NSA surveillance program in 2006, revealing that AT&T allowed the government to intercept, copy, and monitor massive amounts of American internet traffic. Klein discovered a secret, NSA-controlled room—Room 641A—inside an AT&T facility in San Francisco, which acted as a splitter for internet traffic.
Anthropic wouldn't have walked away from a multi-million-dollar contract if their two red lines could be respected. OpenAI, on the other hand, is a fast, willing, and ready company. I would love to see Anthropic's proposed contract.
In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.
We believe strongly in democracy. Given the importance of this technology, we believe that the only good path forward requires deep collaboration between AI efforts and the democratic process. We also believe our technology is going to introduce new risks in the world, and we want the people defending the United States to have the best tools.
Our agreement includes:
1. Deployment architecture. This is a cloud-only deployment, with a safety stack that we run that includes these principles and others. We are not providing the DoW with “guardrails off” or non-safety trained models, nor are we deploying our models on edge devices (where there could be a possibility of usage for autonomous lethal weapons).
Our deployment architecture will enable us to independently verify that these red lines are not crossed, including running and updating classifiers.
2. Our contract. Here is the relevant language:
The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
My speculation is the "business records" domestic surveillance loophole Bush expanded (and that Palantir is built to service). That's usually how the government double-speaks its very real domestic surveillance programs. "It's technically not the government spying on you, it's private companies!" It's also why Hegseth can claim Anthropic is lying. It's not about direct government contracts. It's about contractors and the business records funnel.
[1] https://www.anthropic.com/news/statement-department-of-war
This whole thing seems like people talking past each other, and that there’s something being left unsaid.
Anthropic doesn’t make a product that would assist with kill drones, and they don’t have the right to deny subpoenas.
[1] https://www.nytimes.com/2026/02/27/technology/defense-depart...
Obviously Palantir and others need time to migrate off Anthropic’s products. The way I read it is that Anthropic made a serious miscalculation by joining the DoD contracts last year: you can’t have these kinds of moral standards and at the same time have Palantir as a customer. The lack of foresight is interesting.
1 https://www.axios.com/2026/02/15/claude-pentagon-anthropic-c...
Congress is negligent in not reining this kind of thing in. We’re rapidly falling down so many slippery semantic slopes.
For this administration the law isn't something that binds them, but something they can use against others.
Obviously the DoD would not want limited use. It's strange they don't make their own models given their specific needs.
That does seem to be what Hegseth is arguing, yes; and that is presumably his justification for doing something drastic here. Although I assume he is lying or wrong.
And as a cynic, let me just add that the image of someone going to the political overseers of the US military with arguments about being "effective" or "altruistic" is just hilarious given their history over the last ~40 years.
Anthropic is holding firm on incredibly weak red lines: no mass surveillance of Americans (but OK for everyone else), and OK with automated war machines, just not fully unmanned ones until they can guarantee a certain quality.
This should be a laughably spineless position. But under this administration it is taken as an affront to the president and results in the government lashing out.
It's probably in Anthropic's interest to throw grok to these clowns and watch them fail to build anything with it for 3 years.
Blood is on their hands already
In fact, as a patriotic American veteran, I'd be ok with Anthropic moving to Europe. It might be better for Claude and AGI, which are overriding issues for me.
Rutger Bregman @rcbregman
This is a huge opportunity for Europe. Welcome Anthropic with open arms. Roll out the red carpet. Visa for all employees.
Europe already controls the AI hardware bottleneck through ASML. Add the world's leading AI safety lab and you have the foundations of an AI superpower.
Anthropic made it quite clear they are cool with spying in general, just not domestic spying on Americans, and their "no killbots" pledge was asterisked with "because we don't believe the technology is reliable enough for those stakes yet". The implication being that they absolutely would do killbots once they think they can nail the execution (pun intended).
I suppose you could say they're taking the high road relative to their peers, but that's an extremely low bar.
For Americans and international researchers it's easy to get visas there quickly. It's not far at all for Americans to relocate to or visit. Electricity is cheap and clean. Canada has the most college educated adults per capita. The country's commitment to liberalism, and free markets, is also seeming more steadfast than the US at this point in time.
Canada faces obstacles with its much smaller VC ecosystem, its smaller domestic market, and the threat of US economic aggression. Canada's recent trade deals are likely to help there.
I say this all as an American who is loyal to American values first and foremost. If the US wants to move away from its core values I hope other countries, like Canada or the EU, can carry on as successful examples for the US to eventually return to.
But how do you even begin to discuss that Tweet or this topic without talking about ideology and to contextualize this with other seemingly unrelated things currently going on in the US?
I genuinely don't think I'm conversationally agile enough to both discuss this topic while still able to avoid the political/ideological rabbit-hole.
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
If a commenter who supports the government makes the same argument that the government is making, the guidelines tell us to assume good faith.
My conclusion is that any topic where a commenter might be making a bad faith argument is outside the scope of Hacker News.
Politics and ideology are not off topic, provided the subject matter is of interest, or "gratifying", to colleagues in the tech/start-up space.
What's important is that we don't use rhetoric, bad faith or argumentation to force our views on others. But expressing our opinions about how policy affects technology and vice versa has always been welcome, in my observation.
So, what do you think about the US government's decision, and why?
Everything is politics and "ideology"
Our whole society runs on technology. All tech is inherently political.
A "no politics" stance is merely an endorsement of the status quo.
HN likes to pretend otherwise, especially when it's inconvenient.
This is the new McCarthyism. Do what the administration says, or you will be blacklisted, or worse.
The designation says any contractor, supplier, or partner doing business with the US military can’t conduct any commercial activity with Anthropic. Well, AWS has JWCC. Microsoft has Azure Government. Google has DoD contracts. If that language is enforced broadly, then Claude gets kicked off Bedrock, Vertex, and potentially Azure… which is where all the enterprise revenue lives. Claude cannot survive on $200/mo individual powerusers. The math just doesn’t math.
Anthropic is going to be fine. The DoD is going to walk this back and pretend it never happened to save face.
The designation only applies to projects that touch the federal government, or software developed specifically for the federal government.
Contractors can still use Claude internally in their business, so long as it is not used in government work directly.
A complete ban would be adding Anthropic to the NDAA, which requires congress.
The DoD designation allows the DoD to make contractors certify that Anthropic is not used in the fulfillment of the government work.
That label forbids contractors on DoD contracts for billing DoD for Anthropic, or including Anthropic as part of their DoD solution.
So - AWS can keep Claude on Bedrock, but can't provide Claude to the DoD under its DoD contracts.
I am both dumb and without access to Claude, thus I must ask: My fellow smart HN'ers, what kind of impacts would this likely have on the economy?
Has a lot of money and resources not been pumped into Anthropic (albeit likely less than OpenAI)? I imagine such a decision would not be the ROI that many investors expected.
This is authoritarian behavior. You're having trouble negotiating a contract, so instead of just canceling it - you basically ban all of F500 from doing business with that firm.
Soon enough the midterms will be effectively cancelled.
Americans remain blissfully unaware.
I’m sure the lawyers just got paged, but does this mean the hyperscalers (AWS, GCP) can’t resell Claude anymore to US companies that aren’t doing business with the DoD? That’s rough.
Additionally, every major university will undoubtedly have to terminate the use of Claude. First on the list will be universities that run labs under DOD contracts (e.g. MIT, Princeton, JHU), DOE contracts (Stanford, University of California, UChicago, Texas A&M, etc...), NSF facilities (UIUC, Arizona, CMU/Pitt, Purdue), NASA (Caltech).
Following that it will be just those who accept DOD/DOE/NSF grants.
Generally, any machine that touches supply-chain-risk software cannot ship software to the DoD. AWS has separate clouds, but the software comes from the same place.
You got it backwards, can't use claude if you ARE doing business with DoD.
Presumably AWS/GCP don't care; it's up to the end customer to comply. It's not like GCP KYC asks if you work with the DoD.
Look no further than the famous exposé by Mark Klein, the former AT&T technician and whistleblower who exposed the NSA's mass surveillance program in 2006, revealing the existence of "Room 641A" in San Francisco. He discovered that AT&T was using a "splitter" to copy and divert internet traffic to the NSA, proving the government was monitoring massive amounts of domestic communication.
Worse, they act like it's virtuous.
Meanwhile, irrelevant "AI Czar" David Sacks, member of the PayPal mafia alongside known Epstein affiliates Elon Musk and Peter Thiel, is furiously retweeting all the posts from Trump, Hegseth, and other accounts. He is such a coward, and anti-American:
I dunno, safeguard seems like a weasel word here. It’s just reserving control to one party over another. It’s understandable why the DoD(W) wouldn’t like that.
They still pay taxes, which fund the US government, which kills innocent human beings around the world...
How is it that high!?
That means that more than 1-in-3 of your countrymen are ride-or-die, and it's just heartbreaking to see that we're going to have to launch that many people into the sun.
Zero percent chance of that happening as long as xAI exists.
China's AI Safety Governance Framework: https://www.cac.gov.cn/2025-09/15/c_1759653448369123.htm
Most Americans hate AI and it's effectively the ostrich effect where they hope to outright ban it and ignore everything else. Meanwhile, all the evil people are running the show. While Anthropic continues to propagate Sinophobic messaging, DeepSeek and other companies have a much more muted tone.
When we have the first politician blown to bits by an autonomous AI FPV drone, there will be sheer panic among every politician in the world to put the genie back in the bottle. It will be too late at that point.
Anthropic is correct with its no killbot rule.
Even during the Nagorno-Karabakh war, Azeri loitering munitions were able to suppress Armenian air defenses by hitting them when they rolled out of concealment. I believe that kill chain requires a level of autonomous functionality.
So OpenAI will also be marked as a supply chain risk too, right?
[1]: https://www.axios.com/2026/02/27/altman-openai-anthropic-pen...
Supply-chain risk means "the potential for adversaries to sabotage, subvert, or disrupt the integrity and delivery of defense systems, including software, hardware, and services, to degrade national security".
So now Anthropic is an adversary, because it does not want "fully autonomous weapons" or automated mass surveillance? Sure thing, DoD. Go use Grok or whatever, I'm sure that will go great.
Open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run a 100% transparent organization so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. Diffuse it as much as possible. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, aligned with millions of different individuals. It is a necessary condition for humanity's survival.
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1508 comments)
Dario is Lando, complaining “We had a deal!” Only to be told, “I’m altering the deal. Pray I don’t alter it any further.”
I wish I thought enough Americans had the spine required to stand up to this, and I know for a fact that a lot do... the solution is literally written into your constitution.
Hacking is using a system in a way it was not intended to be used.
Here it is that, but applied to the law.
Hegseth and friends are a bunch of black hat legal hackers.
This administration consistently exploits what were designed to be emergency powers because no such requirement exists. Leave no room for interpretation.
Q: "Is there anything we could do to change your mind?"
A: "Yes! Stand up to the current administration."
Does this mean Azure & AWS will have to stop offering Claude as a model?
AWS Bedrock has deployed Anthropic models under an interesting structure. It is fully hands off - the models are copied into the AWS infrastructure and don't use anything from Anthropic. I think if push came to shove, Anthropic could cut ties with Amazon and AWS could probably still keep serving the models it has with Anthropic forgoing revenue until this is resolved, while asserting they are not "conducting commercial activity" between each other.
All speculation of course.
Nevermind Claude, does that mean Anthropic's offices can't use a power company if that same company happens to supply electricity to a US military base? What about the water, garbage disposal, janitorial services? Fedex? Credit card payments? Insurance companies? Law firms? All the normal boring stuff Anthropic needs that any other business needs.
This is a corporate death penalty. Or corporate internal exile or something, I don't know of a good analogy.
Come to EU guys, we'll prepare a warm welcome!
TIL Fully automated killbots and mass domestic surveillance are American principles.
I mean, I should have known, but there's no clearer sign saying "leave the country now if you don't agree with this admin" than this one, I guess.
Edit: I should perhaps clarify I'm more interested in paid users, rather than free. It's harder to tell if free users switching would help them or hurt them... curious if anyone has thoughts on that too.
I told myself that if Anthropic did not back down on their current stipulations to the DoD, I'd cancel and switch over to Claude.
they said there is a line they do not want to cross, and stuck to that stance, at great personal and financial risk to themselves
I don't think it will hold, in the end this is mafia behavior, but if it does, we are yet again in uncharted waters.
Kesha tried to hug Jerry Seinfeld vibes.
> Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Strange way of saying "this vendor doesn't meet our software requirements".
> they have attempted to strong-arm the United States military into submission
Err... You approached them?
> a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
It's an orthogonal point, but "Silicon Valley ideology" has made up a significant portion of the USA's GDP for the last however many years.
> Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
Again... You approached them?
> I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security.
Like most companies in the world I imagine. They just haven't been approached yet.
> to allow for a seamless transition to a better and more patriotic service.
Internally re-framing all the recent "EU moving away from American tech!" articles as "EU builds more patriotic services!"
> This decision is final.
Nothing says "final" like a Tweet. The most uncontroversial and binding mechanism of all communication.
It is an interesting point. What's the difference between this use license and others?
Anthropic folks: I've been a bit salty on HN about bugs in Claude Code, but I'm feeling pretty warm and fuzzy about sending you my cash this month.
And here’s the irony: Musk, who claimed only he is virtuous enough to defend us from AI, who insisted he always wanted model labs to be non profit and research focused, will now bring his for profit commercial entity into service to aid in mass domestic censorship and fully autonomous weapons of war.
In fact it won’t surprise me further if NVIDIA is strong armed into providing preference to xAI, in the interest of security, or if the government directly funds capital investments.
Anthropic saves some dignity and they’re the losers today, but we are the losers tomorrow.
But anyway, I guess the question is, will any other big AI companies stand with them? It's what needs to happen, but I am not hopeful.
Oddly, though, it seems like that should solve this problem as well. I'm not sure why the Department of Defense insists on Anthropic's models in particular; one would think one of the other players, at the very least xAI, would be willing to step in and provide the capability Anthropic doesn't want to provide.
Don't get me wrong, I'm glad they are unwilling to do certain things...
but to me it also seems a little ironic that Anthropic is literally partnered with Palantir, which already mass-surveils the US. Claude was used in the operation in Venezuela.
Their line not to cross seems absurdly thin?
Or there is something mega scary, already much worse, that they were asked to do and which we don't know about, I guess.
Makes for very confusing reading when comments from "1 hour ago" are actually on preceding events from earlier, before TFA news (announcement of designation).
mods: Especially in sensitive and rapidly developing situations like this, please don't mess with timestamps of comments. It's effectively revisionism.
Model collapse making models identify everyone as a potential threat who needs to be eliminated.
Companies should have a right to refuse such requests on moral grounds though.
This stance is vindictive. Just don't use Claude in the military. Extending it to all government agencies is not right. They do great work. Can't deny that.
So it would make sense now for Anthropic to move outside the US, e.g. to Europe or Canada to at least be able to make deals with European governments.
Every conservative needs to do some very deep, very serious soul-searching. As for me, as a hyper-progressive, I'm drawing up proposals for nationalizing real estate developers in order to force them to build new houses to sell below cost.
Government: We will destroy any company that refuses to create the Torment Nexus
> You don’t anthropomorphize your lawnmower, the lawnmower just mows the lawn - you stick your hand in there and it’ll chop it off, the end.
Except this is like two lawnmowers going at it, which would be a sight to behold indeed.
He called me and he seemed like a nice enough guy, but I realized that he's one of the DOGE/Elon acolytes and he started talking about how he's "fixing" the Treasury and that every engineer is apparently supposed to use Claude for everything.
It would have been a considerable pay downgrade, which wouldn't necessarily be a dealbreaker, but being managed by DOGE would be. Most relevant, though, is that I found it kind of horrifying that we're basically trusting the entire world's bank to be "fixed" with Claude Code. It's one thing when your ad platform or something is broken, but if Claude fucks something up in the Treasury that could literally start a war. We're going to "fix" all the code with a bunch of mediocre code that literally no one on earth actually understands and that realistically no one is auditing [1].
If they're going to "fix" all the Treasury code with stuff generated by Claude, I'm not sure they will have a choice but to stick with it, because it seems very likely to me that it will be incomprehensible to anything but Claude.
[1] Be honest, a lot of AI generated code is not actually being reviewed by humans; I suspect that a lot of the AI code that's being merged is still basically being rubber-stamped.
And people here are debating legalese...
Because that could be absolutely staggering.
TACO
It almost is parody that a former Fox News host is the SECRETARY OF WAR.
The place to set policies on the use of hammers and police enforcement is not at the counter of the hardware store. “You want a hammer but don’t have a contractors license? Are you in a training program? Oh you just want to hang framed art - can I see your lease, does it allow hammering metal into the walls?”
We govern these things through laws and a democratic process. Police enforce the laws.
I don’t want some overconfident Silicon Valley engineering firm telling me how to use my digital tools, and you shouldn’t either.
Whatever you think of this administration, our military should not have to ask contractors permission for their operations.
To stop mass surveillance and autonomous lethality, pass laws. Asking unelected tech executives to do this is asking for trouble. They have no business doing it.
How is going against the most powerful army on Earth cowardly?
I have just purchased a chunk of extra usage credit. I encourage my peers to do the same. Let's send a message to those that work forces.
I don't think that Secretary Hegseth is qualified to speak on American principles.
Cheating on multiple spouses[1], being an active alcoholic[2], and being accused of multiple sexual assaults and paying off the accusers[3] is fundamentally incompatible with being Secretary of Defense and a good leader.
Also, this violates freedom of speech and will probably get shot down in the courts.
1. https://en.wikipedia.org/wiki/Pete_Hegseth#Marriages
2. https://en.wikipedia.org/wiki/Pete_Hegseth plus multiple recent media pieces
3. https://en.wikipedia.org/wiki/Pete_Hegseth#Abuse_and_sexual_...
It's true that Chinese companies are extensions of the state. But they serve the state. And the state has thus far served the citizenry eg raising 800M people out of extreme poverty. China's HSR network of 32,000 miles of track was built in 20 years for ~$900B. That's less than the annual US military budget.
You can look at the relationship between the US government and US companies in one of two ways:
1. US companies serve the government but the government doesn't serve the people. After all, where's our infrastructure, healthcare, housing and education? or
2. The US government serves US corporate interests to enrich the ultra-wealthy.
Either way a handful of people are getting incredibly wealthy and all it takes is for a little corruption. Political donations, jobs after government, positions on boards and so on.
> 1. No mass domestic surveillance of Americans
> 2. No fully autonomous weapons (kill decisions without a human in the loop)
Surveillance takes place with or without Anthropic, so depriving DoW of Anthropic models doesn't accomplish much (although it does annoy Hegseth).
The models currently used in kill decisions are probably primitive image recognition (using neural nets). Consider a drone circling an area distinguishing civilians from soldiers (by looking for presence of rifles/rpgs).
New AI models can improve identification, thus reducing false positives and increasing the number of actual adversaries targeted. Even though it sounds bad, it could have good outcomes.
> Anthropic said no, and now the admin is trying to destroy the company in retaliation.
From https://bsky.app/profile/bbkogan.bsky.social/post/3mfuuprph5...
A few arrests and a few in detention centres, will be enough to make them fold and grovel.
They are now categorised as "radical left" and woke.
The elections will be controlled to "prevent the radical left take over of the greatest country on the planet".
edit: The stage is also being set for total media control. My prediction is that the next target will be Google, specifically YouTube. You should start seeing talk about how the radical left has infiltrated YouTube.
But there's some irony in this happening to Anthropic after all the constant hawkish fearmongering about the evil Chinese (and open source AI sentiment too).
Trump orders federal agencies to stop using Anthropic AI tech 'immediately'
https://news.ycombinator.com/item?id=47185528
Statement from Dario Amodei on our discussions with the Department of War
And nobody in the administration is concerned at all that the model itself might be somewhat against their own views?
If it was so radically woke, wouldn’t the model, as used in fully autonomous weapons, be potentially harmful to ICE officers that the left considers as a threat to the American people?
Wouldn’t the mass surveillance of Americans be biased against the right?
These people are so dumb.
https://www.trumpstruth.org/statuses/36981
Don't worry, this is an archive/mirroring site for his account, not the actual TS site.
I'd comment on how wackadoo this all is, but, 1) that applies to almost everything these days, and 2) the post's right there, see for yourself.
Sometimes it pays to think even two steps ahead of your most immediate thought…
One, it’s going to fuck with the AI fundraising market. That includes for IPO. If Trump can do this to Anthropic, a Dem President will do it to xAI. We have no idea where the contagion stops.
Two, Anthropic will win in the long run. In corporate America. Overseas. And with consumers. And, I suspect, with investors.
If an employee of the government makes a decision that subsequently turns out to be very very unpopular, that unpopularity is sooner or later going to coalesce and land on them, and the more unpopular it turns out to be the less of a shield legal arguments about immunity or pardons will be because so many people are increasingly out of patience with a system they deem to be corrupt. Being able to offload the political, legal, and personal risks of extremely consequential decisions onto The Bad Computer System is the political equivalent of crack cocaine - you might know that the feeling of freedom and power it provides is wholly illusory, you might know that it's likely to ruin your own and many other lives, you might know that it's a disaster for the health of the body politic...but it also offers the possibility that you can have an absolute blast and get away with it.
My anecdotal experience of being around wealthy and powerful people over the years inclines me to think that not only do our social systems select in favor of people who take big risks for big rewards, but that virtually everyone in that class has a) done a lot of getting away with things legally speaking and b) enjoys using illegal drugs. Even if they've given up recreational drug taking or limit it to strictly defined times and places so as not to interfere with their business/personal success, they like thrills and have confidence about their ability to enjoy them without negative consequences. You need some of that risk-taking, high personal autonomy attitude if you aspire to be a mover and shaker as opposed to a leading figure in risk management or regulatory compliance.
Everyone enjoys the feeling of power without responsibility; it's a fundamental underpinning of games and many other kinds of recreation. Add in significant amounts of money and people think differently about risk, as in the topical case of the experienced Supreme Court litigator who turned out to have had a secret life as a high-stakes poker gambler and eventually started betting against the IRS while filing his taxes (https://www.politico.com/news/2026/02/25/supreme-court-litig...).
Now, if you're in the political-military sphere and you get your thrills by literally redrawing lines and relationships on the map of the world and deciding what the news on TV is going to be for the next day/week/month/year, and you get offered a tool that promises to give a significant edge over other players in this game but which also gives you a versatile and widely accepted excuse for avoiding consequences for the inevitable losing hands, there are massively compelling psychological incentives for using it. And correspondingly, there's going to be massive emotional disruption (and bad decision-making and behavior) if your supply is threatened. You might start labeling the people who are interfering with your good time as cognito-terrorists and telling all your friends and supporters that your formerly trustworthy supplier did you dirty...
If we don't impeach for this, we might as well surrender to MAGA.
> Populist nationalism + “infallible” redemptive leader cult
> Scapegoated “enemies”; imprison/murder opposition/minority leaders
> Supremacy of military / paramilitarism; glorify violence as redemptive
> Obsession with national security / nation under attack
TBH could be worse.
As someone who wants America to win, ripping out Claude and putting in xAI is a terrible idea. Definitely setting us back a few months on capabilities
“You won’t let us use your product unrestricted for military applications? Fuck you, we’re going to stop using it for anything at all across the entire federal government, even if not remotely related to military.”
....................../´¯/)
....................,/¯../
.................../..../
............./´¯/'...'/´¯¯`·¸
........../'/.../..../......./¨¯\
........('(...´...´.... ¯~/'...')
.........\.................'...../
..........''...\.......... _.·´
............\..............(
..............\.............\...
>@grok what type of political system is most often associated with the government forcing private companies to change their policies and do whatever the government wants?
>Fascism, via its corporatist model: private ownership remains, but the state directs industry to serve national goals...
Trump's behaviour seems fairly normal fascism but thankfully the rest of the US system seems unenthusiastic.
More taxpayer funded lawsuits to come.
Fascist
"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.
The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE..." - President Donald J. Trump
"Our AI is so advanced and dangerous Trump has to beg us to remove our safeguards, and we valiantly said no! Oh but we were already spying on people and letting them use our AIs in weapons as long as a human was there to tick a checkbox"
I just don't buy anything spewing out of the mouths of these sociopathic billionaires, and I trust the current ponzi schemers in the US gov't even less.
Especially given how much astroturfing Anthropic loves doing, and the countless comments in this thread saying things like "Way to go Amodei, I'm subbing to your 200 dollar a month plan now forever!!11".
One thing I know for sure is that these AI degenerates have made me a lot more paranoid of anything I read online.
What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murderous robots are something the U.S. government has an interest in preventing.
I wouldn't want a bullet manufacturer to hold back on my government based on their own internal sense of ethics (whether I agreed with it or not, it's not their place)