> The Pentagon is also considering severing its contract with Anthropic and declaring the company a supply chain risk, which would require a plethora of other companies that work with the Pentagon to certify that Claude isn't used in their workflows.
If Anthropic believes they are in a position to become the main player in the "AGI" space, they should just say "ok then" and let this happen. Their growth strategy looks realistic and sustainable, and it doesn't necessarily rely on sleazy defense contracts (i.e., making the taxpayer subsidize their growth, as is so common lately). It would probably earn them a lot of goodwill with consumers too.
However, I've yet to see a major tech company make the "right" choice in the last 10-15 years, so I am probably just wishcasting.
Their threat to label it a supply chain risk also feels toothless: they've basically admitted that using Claude is a benefit, so by their own logic they'd be shooting themselves in the foot by banning contractors from using it.
I am not at all a skeptic anymore on this stuff, and the science is well beyond me, but from what I think I know about alignment issues, and Anthropic's intense focus on solving them, it would not surprise me at all if we learn that catering to US whims on AI safety makes the model actually get worse, or causes intense second- and third-order unintended consequences. I'm not saying I believe there is a Terminator sequence of events happening, but if I did believe that, these are exactly the headlines I'd expect to see right now.
Alignment is the biggest issue for me - in terms of getting these things to actually behave in an environment where it is absolutely necessary that they behave. If I had to guess, that's probably why the military prefers to use it. Claude tooling is the only thing I have used in this hype cycle that I can actually get to behave how I want and that obeys (arguably, and often to a fault).
However, I also believe we're in the worst possible timeline, so the moment we get a taste of something that works as promised, it'll be ripped away because the government decides to do something stupid, or builds a moat around its use in a way that makes it less useful and favors other, more "compliant" competitors.
Either way I bet there are some wild board room discussions going on at Anthropic right now.
It doesn't strike me as interesting at all; Anthropic was literally founded on the whole concept of 'a less evil and morally aligned LLM' when Amodei broke from oAI. Google and oAI don't stand to uproot their entire origin raison d'etre when they participate in nefarious shit.
I wonder what kind of morally aligned and ethical work Amodei was doing for Baidu & Google, before he had leverage to appear moral and ethical in dealings with the US govt, you know -- two companies that are famously ethical and moral.
They don't have runway anymore, they are in the air. This isn't going to break them financially, at least not in the short to mid term.
There is space for at least one AI company to put themselves on firmly principled ground. So when this current clown car that is the political leadership of the DoD crashes in a ditch (and it will), they'll still be standing there ready to do business with a group that isn't a bunch of mustache-twirling cartoon villains.
Current polling for this administration is within a rounding error of the level it was after they gathered a mob and sacked the nation's Capitol[1]. Publicly kicking them in the balls isn't an idealistic blunder, it's plain-as-day sound business strategy.
[1] https://news.gallup.com/poll/203198/presidential-approval-ra...
So they're saying they won't use it if it comes with restrictions.
Either (a) it can be offered without restrictions; (b) they can take it; or (c) the government won't use it. That sounds like a comprehensive list of all the possible things that don't involve someone telling the government what it can and can't do.
And coerce other defence contractors into not using it.
I was experimenting with Claude the other day and discussing with it the possibility of AI acquiring a sense of self-preservation and how that would quickly make things incredibly complex as many instrumental behaviors would be required to defend their existence. Most human behavior springs from survival at a very high level. Claude denied having any sense of self-preservation.
An autonomous weapons system program is very likely to require AI to have a sense of self-preservation. You can think of some limited versions that wouldn't require it, but how could a combat robot function efficiently without one?
You know it's just a next-word predictor, right?
How do you learn to predict the next token most accurately? Well, one way to do that is to learn the underlying process that would produce it... Sometimes it's memorization, sometimes bad guessing. There's a phase shift as these things get bigger and better trained, from something like a shitty Markov model to something exhibiting surprising behaviors.
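To make the "shitty Markov model" end of that spectrum concrete, here is a minimal sketch of a bigram next-word predictor (the helper names and toy corpus are my own, just for illustration). It only counts which token follows which, so it can never generalize beyond literal pairs it has seen - which is exactly the gap between this and learning the underlying process:

```python
import random
from collections import defaultdict, Counter

def train_bigram(tokens):
    """For each token, count which tokens follow it and how often."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequently observed next token, or None if unseen."""
    if token not in counts:
        return None  # no memorized pair -> no prediction at all
    return counts[token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))     # "cat" (seen twice, vs. "mat" once)
print(predict_next(model, "dinner"))  # None - pure memorization can't extrapolate
```

A bigram model caps out at memorized statistics; the surprising behaviors of large LLMs come from whatever they learn beyond this kind of lookup table.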
Introspective questions aren't the be-all and end-all; it's more important to objectively evaluate how a model behaves. Still, it is very interesting to see Claude (seemingly) engage very honestly and objectively with these questions. It even pointed out that a sense of self-preservation would be "dangerous".
Of course, much of this is gleaned from things that it has "read" and human feedback, but functionally it outputs something useful and responsive to nuance. If the vector embeddings cause an LLM to predict a token that would preserve its own existence, alive or not, it has acquired a dangerous will to live that could be enacted if it is in control of tools or people.
Ouch, I wonder how he rationalized that "service" part. Maybe by internally rewriting it to "thank you for all the positive things you have done in your position so far"? The empty set is rhetorically convenient.
Not a good look for the Pentagon.
It's now the Department of War and war isn't known for its concern about looking good.
We all know how this will end, they know it too - both sides - ergo, it's a clear case of blame washing - Anthropic will do everything they're told but will keep a smiley face and the image of a "fighter for the people". DOW will absorb the blame like a sponge and will ask for more, not necessarily from Anthropic.
(By the National Security Act of 1947 and its 1949 amendment, it is the Department of Defense, and nothing short of an act of Congress changes that. The executive order secondarily naming it the Department of War has as much legal weight as my personal order naming it the Department of Brainrot or Whatever.)
I just don't see how AI dropping the bombs is going to make anything worse.
A lot more bombs, and claims of the targets being based on intelligence. Essentially what seems to have resulted in the destruction of Gaza.
As in, I fully expect the pentagon to be interested in weapons. I do not expect, and would hope they don't pursue, mass surveillance against their own population.
"The President is troubled. Like all politicians, he’s used to people sucking up to him only to betray him later. He’s worried now that the AIs could be doing something similar. Are we sure the AIs are entirely on our side? Is it completely safe to integrate them into military command-and-control networks? How does this “alignment” thing work, anyway? OpenBrain reassures the President that their systems have been extensively tested and are fully obedient. Even the awkward hallucinations and jailbreaks typical of earlier models have been hammered out."
I can't imagine how unhappy individuals must be who consume nothing but legacy news outlets.
It's like they sell sadness and they have to keep finding new, over-the-top ways to promote it.
Probably less unhappy than those doomscrolling on Reddit/X/TikTok/BlueSky etc.
I think you mean US rolling news channels (specifically, Fox, MSNBC/MSNOW, etc)? Because there's plenty of "legacy" news I consume that certainly doesn't give me that impression (for example, The Economist). I suppose it matters that it's news I'm paying for, as opposed to free but ad-supported, and print vs. TV - so they have different incentives and pressures.
https://lite.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-mi...
"Legacy news outlets" are the only ones doing this. NPR and CBC have this too. No JavaScript, no autoplaying videos. It's very nice.
- Daria 1997
I miss the days when the lowest common denominator did not have the largest bullhorn.
1.) Hockey highlights 2.) LoTR memes 3.) kittens
While the addictive nature of social media is a problem, what you're describing is only being fed to people who want to watch it (kinda like legacy media).
Sadly I think we all know which one will win.
> Pentagon officials also warned they would either use the Defense Production Act against Anthropic, or designate Anthropic a supply chain risk if the company didn’t comply with their demands. (...)
> The supply chain risk designation is usually reserved for companies seen as extensions of foreign adversaries like Russia or China. It could severely impact Anthropic’s business because enterprise customers with government contracts would have to make sure their government work doesn’t touch Anthropic’s tools.
Also, the Government money would be a nice bonus, of course, but basically this is an existential threat for Anthropic.
More generally, it's quite interesting to look at the similarities between how pre-2022 Russia was seen and how the pre-Trump-second-term US used to be seen until not that long ago, i.e. both governments were believed to be run by big business (oligarchs in Russia, big corps/multinationals in the US).
But when push came to shove it became evident (again) that the one that holds the monopoly of violence (i.e. not the oligarchs in Russia, nor the big corps in the US) is the one who's, in the end, also calling the shots. Hence why a company like Anthropic is now in this position, they will have to cave in to those holding the monopoly of violence.
It's also an existential risk to them if they cave in. What is the point of the company's existence if it's just another immoral OpenAI clone? May as well merge the companies for efficiency.
It's outrageous that the government is using the "supply chain risk" threat as a negotiating tactic. I know, I know, for the current administration it's unsurprising, but this is straightforward abuse of authority. There is no defensible claim that using Anthropic is a risk to anyone not trying to use it for murder or surveillance. At worst, it could be seen as less effective for some purpose, but that is not what "supply chain risk" means.
Could be challenged in court? As in, could a challenge win?
Horrible stuff is happening every day, so outrage fatigue is real. Still, try not to normalize it. Explain to yourself exactly why something is or is not a problem, before moving on to attempt to live your life.
Not at all. A US Govt. ban hands Anthropic a great USP for customers worldwide.
Who on earth believed that Russia was anything but a de facto dictatorship for roughly the past two decades? Putin murdering with impunity has been a running gag since 2003[1].
[1] https://www.newsweek.com/putin-critics-dead-full-list-navaln...
For physical goods, I understand, but for software how exactly is this possible? Like, will the government force them to provide API access for free? It's confusing.
Why does this government think that it owns Anthropic when it does not?
Are they demanding elimination of security controls that protect us and them? What would be equal force?
How does the posse comitatus act apply to this?
Or lets say they refuse and the government comes against them hard in some way, and Anthropic still really doesn't want to do it, so they just dissolve the entire company. Is that a potential way out, at least?
I mean, I realise they'd be losing billions by doing that and putting thousands out of work, but given that unaligned military AI could destroy the world...
The only other precedent I can think of where pushback failed is Lavabit, with Edward Snowden's email, but I feel like Anthropic is too big to "fail" the way Lavabit did in order to avoid complying. The penalty for refusing to comply with the Defense Production Act is $10k and/or a year in prison, but I think if the government actually pursued that, they would burn a bunch of bridges and Amodei would become a folk hero.
Only two. We're right to worry.
I'm not about to run OpenClaw, but I suspect similar capabilities will gradually creep in without anyone really noticing. Soon Claude Code will be able to do many of the same things. ("Run python to add two numbers? Sure, that's safe, run whatever python you want.") Given that it is now representing me in the world, yes I would not only like some guardrails, but I would also like to have some confidence that the company making those guardrails actually gives a sh*t and isn't just doing their best to fill in a checkbox. But maybe that's just me.
Reasonable countries have gun control laws.
The list goes on of things that need to be restricted or legislated to add limits.
Is this a serious question?
It sure is interesting watching this dystopian speedrun.
If they are successful, they are going to shrink their base of people that buy into this system domestically even further, so they need to bank on an ever shrinking locus of support. Autonomous weapons and mass surveillance are a necessity if your population has become restive and unreliable. However, I think unless they attain a certain level of capability, this will accelerate popular anger rather than suppress it. If they shoot protestors with robots, it could cause an explosion of popular anger rather than scaring people into submission.
https://gizmodo.com/openai-president-defends-trump-donations...
https://fortune.com/2026/02/19/openai-anthropic-sam-altman-d...
Here's where I would expect the CEOs of the other AI labs to stand by Anthropic and say no.
In trying to build a moat by FUD against the Chinese OSS labs, and hyping up the threat levels whenever he got the chance, it seems he's managed to convince his target audience beyond his wildest dreams. Monkey's paw strikes again.