Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.
But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and genuinely motivated by trying to make the transition to powerful AI go well.
I am sure you think they are better than the average startup executive, but such hyperbole calls the objectivity of your whole judgment into question.
They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.
I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.
And in any case, this is difficult territory to navigate. I would not want to be in your spot.
It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if revealed in the first few months and within the first handful of employees… but after 2 plus years and many dozens forced to sign that… it’s just not credible to believe it was all entirely positive motivations.
I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing that they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products, but unless it was already part of their agreement (which I very much doubt) they are not entitled to countermand the military's chain of command by designing a product to not function in certain arbitrarily-designated circumstances.
[1]: https://www.axios.com/2026/01/20/anthropic-ceo-admodei-nvidi...
(1) this is a wildly unpopular and optically bad deal
(2) it's a high data rate deal--lots of tokens means bad things for Anthropic. Users who use their product heavily cost more than they pay.
(3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...
then it makes a whole lot of sense for Anthropic to wiggle out of it. Doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.
What I don't get though is, why did the so-called "Department of War" target Anthropic specifically? What about the others, esp. OpenAI? Have they already agreed to cooperate? or already refused? Why aren't they part of this?
Sometimes, it's even a very odd prerequisite.
No sane person wants to become a legitimate military target. They want to sleep in their own beds, at home, without risking their families' lives. Just like the rest of us.
After 20 years of everyone in this industry saying "we want to make the world a better place" and doing the opposite, the problem here is not really related to people's "understanding".
And before the default answer kicks in: this is not cynicism. Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.
Why would I care? All people with at least some positive or negative notoriety have friends and associates who will, hand on heart, promise that they mean well. They have the best intentions. And any deviations from their stated ideals are just careful pragmatic concerns.
Road to Hell and all that.
Yeah, I totally see Anthropic execs defending them to their last dollar in the wallet. Par for the course for megacorps. It's just I personally don't value those values at all.
So what core values led "Dario, Jared, and Sam" to work with a government that just tried to rename the DoD to the "department of war" and is acting aggressively imperialist in a way the US hasn't in a long time?
And who exactly are these "autocratic adversaries" they are mentioning? Does this list include the autocrats the US government is working together with?
Jonah Goldberg (speaking of foreign policy): "you've got to be idealistic about the ends and ruthlessly realistic about means."
But the final decisions made usually depend on the incentive structures and mental models of their leaders. Those can be quite different...
When the mass surveillance scandal breaks, or the first time a building with 100 innocent people in it gets destroyed by autonomous AI, the company that built it is going to get blamed.
I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?
Glad to hear you say some moral convictions are held at one of the big labs (even if, as you say, this doesn't guarantee good outcomes).
I really don't buy any moral or value arguments from this new generation of tycoons. Their businesses have been built on theft, both to train their models and by robbing the public at large. All this wave of AI is a scourge on society.
Just by calling them "department of war" you know what side they're on. The side of money.
Their "Values":
>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
Read: They are cool with whatever.
>We support the use of AI for lawful foreign intelligence and counterintelligence missions.
Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.
>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
Read: We are cool with fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. It's not the targeting of foreign people that we are against, it's the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.
It's a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. It's fine for AI to kill people as long as those people are the designated enemies of the dementia-ridden US empire.
Sure, but what happens when the suits eventually take over? (see Google)
This is a nice strawman, but it means nothing in the long run. People's values change, and they often change fast when their riches are at stake. I have zero trust in anyone mentioned here because their "values" are currently at odds with our planet (in numerous facets). If their mission was to build sustainable and ethical AI, I'd likely have a different perspective. However, Anthropic, just like all their other Frontier friends, is accelerating the burn of our planet exponentially faster, and there's no value proposition AI currently solves for outside of some time savings, in general. Again, it's useful, but it's also not revolutionary. And it's being propped up incongruently with its value to society and its shareholders. Not that I really care about the latter...
"You either die the good guy or live long enough to become the bad guy"
The "bad guy" actually learns that their former good guy mentality was too simplistic.
in which case, these people will necessarily have to be the first to go, I suppose, once the board decides enough is enough.
Refusing to do things that go against "company values" even if they risk damaging the company, isn't exceptional circumstances; it's the very definition of "company values".
But if those values aren't "company" values but "personal" values, then you can be sure there's always going to be someone higher up who isn't going to be very appreciative once "personal" values start risking "company" damage.
Everyone tries to make the change go well, for some party. If someone wanted to serve the best interests of humanity as a whole, they wouldn't sell services to an evil administration, much less to its war department.
Too bad there is not yet an official ministry of torture and fear, protecting democracy from the dangerous threats of criminal thoughts. We would certainly be given a great lesson in public relations on how virtuous it can be in the long term to provide them efficient services.
which is left under the article: "Statement from Dario Amodei on our discussions with the Department of War"
:)
I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.
What are those values that you're defending?
Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?
- 10 AIs running on 10 machines, each with 10 million GPUs
OR
- 10 million AIs running on 10 million machines, each with 10 GPUs
All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.
There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?
Well let's see... it says in the post:
* worked proactively to deploy our models to the Department of War and the intelligence community.
* the first frontier AI company to deploy our models in the US government’s classified networks,
* the first to deploy them at the National Laboratories, and
* the first to provide custom models for national security customers.
* extensively deployed across the Department of War and other national security agencies
* offered to work directly with the Department of War on R&D to improve the reliability of these systems
* accelerating the adoption and use of our models within our armed forces to date.
* never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

I don't think you understand how capitalism and corporations work, friend. Even if Anthropic is a public benefit corporation, it still exists in the USA and will be placed under extensive pressure to generate a profit and grow. Corporations are designed to be amoral, and history has shown that regardless of their specific legal formulation they all eventually revert to amoral, growth-driven behavior.
This is structural and has nothing to do with individuals.
Literally just giving business away. This is not a cynical take, this is a realistic one.
This would be like agreeing to have your phone regularly checked by your spouse and citing the need for fidelity on principle. No one would like that, no smart person would agree to that, and anyone with any sense or self-respect would find another spouse to "work with".
They will simply go to another vendor... Anthropic is not THAT far ahead.
Also, the US’s enemies are not similarly restricted. /eyeroll
Palmer Luckey ("peace through superior firepower") is the smart one, here. Dario Amodei ("peace through unilateral agreement with no one, to restrict oneself by assuming guilt of business partners until innocence is proven") is not.
Anthropic could have just done what real spouses do. Random spot checks in secret, or just noticing things. >..<
And if a betrayal signal is discovered, simply charge more and give less, citing suspicious activity…
… since it all goes through their servers.
Honestly, I'm glad that they're principled. The problem is that 1) most people in general are, so to assume the opposite is off-putting; 2) some people will always not be. And the latter will always cause you trouble if you don't assert dominance as the "good guy", frankly.
They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.
What a weird definition of "enheartening" you have.
Hot take: Dario isn’t risking that much. Hegseth being Hegseth, he overplayed his hand. Dario is calling his bluff.
Contract terminations are temporary. Possibly only until November. Probably only until 2028 unless the political tide shifts.
Meanwhile, invoking the Defense Production Act to seize Anthropic’s IP basically triggers MAD across American AI companies—and by extension, the American capital markets and economy—which is why Altman is trying to defuse this clusterfuck. If it happens it will be undone quickly, and given this dispute is public it’s unlikely to happen at all.
So what? Every business is driven by values.
It is a horrible and ruthless company and hearing a presumably rich ex-employee painting a rosy picture does not change anything.
> driven by values
> well-intentioned
What values? What intentions? These people grin and laugh while talking about AI causing massive disruptions to livelihoods on a global scale. At least one of them has even gone so far as to make jokes about AI killing all humans at some point in the future.
These people are at the very least sociopaths, and I think psychopaths would be a better descriptor. They're doing everything in their power to usher in the Noahide new world order / beast system, and it couldn't be more obvious to anyone that has been paying attention.
It's also amusing they talk about democratic values and America in the same sentence. Every single one of our presidents, sans Van Buren, is a descendant of King John Lackland of England. We have no chain of custody for our votes in 2026 - we drop them into an electronic machine and are told they are factored into the equation of who will be the next president. Pretending America is a democracy is a ruse - we are not. Our presidents are hand-picked and selected, not elected. Anyone saying otherwise is ill informed or lying.
Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats and to ensure consistent bias in all their models.
I have a feeling they see themselves more as evangelists than scientists.
That makes their models unusable for me as general AI tools and only useful for coding.
If their biases match yours, good for you, but I'm glad we have many open Chinese models taking ground, which in the long run makes humanity more resistant to propaganda.
-----
The Department of War is threatening to
- Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"
- Label the company a "supply chain risk"
All in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.
The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.
They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.
We are the employees of Google and OpenAI, two of the top AI companies in the world.
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
Signed,
> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
So not today, but the door is open for this after AI systems have gathered enough "training data"?
Then I re-read the previous paragraph and realized it's specifically only criticizing
> AI-driven domestic mass surveillance
And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War
> I thought "Anthropic" was about being concerned about humans
See also: OpenAI being open, Democratic People's Republic of Korea being democratic and peoples-first[0].

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...
It essentially becomes computer against human. And if and when such software is developed, who's going to stop it from reaching the masses? Imagine software viruses/malware that can take a life.
I'm shocked very few are even bothered about this, and it is really concerning that technology developed for human welfare could become something totally against humans.
That's as Anthropic as it gets if your nerve expands a little bit further than your HOA.
He is trying to win sympathies even (or especially?) among nationalist hawks.
Sounds more like the door is open for this once reliability targets are met.
I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.
Odd.
You have to be deliberately naive in a world where five eyes exists to somehow believe that "foreign" mass surveillance won't be used domestically.
“Even fully autonomous weapons (…) may prove critical for our national defense”
FWIW there’s simply no way around this in the end. If your adversary even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.
On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other than doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.
The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.
Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.
To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.
During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.
The military should never be allowed to dictate how private corporations act.
I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.
> Or the models could be developed internally, after having requisitioned the data centers.
I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?
Is there an example of such a system existing successfully in any other country of the world that has a standing army?
Good thing the US is led by such figures as Donald Trump or Joseph Biden, stalwart trustworthy men with their hands firmly on the wheel.</sarcasm>
You know who doesn't have as much power? The Swiss head of state, so weak you can't even reliably name them! THAT'S what it looks like to defeat personalization, not some hand-wringing hoping a system does something that it wasn't designed to do.
This is a common but far too passive description.
Republicans in Congress support everything Trump and friends are doing.
> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
This contradictory messaging puts to rest any doubt that this is a strong-arm by the government to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.
It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.
> Mass domestic surveillance.
Since when has DoD started getting involved with the internal affairs of the country?
https://en.wikipedia.org/wiki/United_States_Department_of_De...
They are only contradictory if you think about it.
Why the hell should companies get to dictate on their own to the government how their product is used?
It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.
I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.
- https://the-decoder.com/anthropics-head-of-safeguards-resear...
- https://the-decoder.com/anthropics-ceo-admits-compromising-w... (see also https://news.ycombinator.com/item?id=44651971, https://futurism.com/leaked-messages-ceo-anthropic-dictators)
- https://the-decoder.com/anthropic-ceo-dario-amodei-backs-pre...
the country jumped the shark post 9/11 and has been on a slow rot since then.
There was a coup by a foreign adversary and Americans lost.
Would be nice, but I have a bad feeling that the impact of widescale mostly unregulated AI adoption on our social fabric is going to make the social media era that gave rise to Trump, et al seem like the good ol' days in comparison.
I hope I am wrong.
But mass surveillance of Australians or Danes is aligned with democratic values as long as it's the Americans doing it?
I don't think the moral high ground Anthropic is taking here is high enough.
Because as far as I know, Anthropic is taking the most moral stance of any AI company.
Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.
Looks like an optics dance to me. I've noticed a lot of simultaneous positions lately, everyone from politicians and protesters, to celebrities and corporations. They make statements both in support of a thing, and against that same thing. Switching up emphasis based on who the audience is in what context. A way to please everyone.
To me the statement reads like Anthropic wants to be at the table, ready to talk and negotiate, to work things out. Don't expect updated bullet-point lists about how things are worked out. Expect the occasional "we are the goodies" statements, however.
Credit where it's due, going on record like this isn't easy, particularly when facing pressure from a major government client. Still, the two limits Anthropic is defending deserve a closer look.
On surveillance: the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed. On autonomous weapons: realistically, current AI systems aren't anywhere near capable enough to run one independently. So that particular line in the sand isn't really costing them much.
What I find more candid is actually the revised RSP. It draws a clearer picture of where Anthropic's oversight genuinely holds and where it starts to break down as they race to stay at the cutting edge. The core tension, trying to be simultaneously the most powerful and the most principled player in the room, doesn't have a neat resolution.
This statement doesn't offer one either. But engaging with the question openly, even without all the answers, beats silence and gives the rest of us something real to push back on.
Other than that, good on ya.
If they had called it DoD, then that would have been another finger in his eye.
All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.
"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values"
Translating to human language: mass surveillance in USA "is incompatible with democratic values" but if we do that against, say, Germany or France this is OK. Ah, and if we use AI for "counterintelligence missions", for instance against <put here an organization/group that current administration does not like> this is also OK, even if this happens in USA.
The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.
The bottom of all of this is that companies need to profit to sustain themselves. If "y'all" (the users) don't buy enough of their products, they will seek new sources of revenue.
This applies to any company who has external investors and shareholders, regardless of their day 0 messaging. When push comes to shove and their survival is threatened, any customer is better than no customer.
It's very possible that $20 Claude subscriptions aren't delivering on multiple billions in investment.
The only companies that can truly hold to their missions are those that (a) don't need to profit to survive, e.g. lifestyle businesses of rich people (b) wholly owned by owners and employees and have no fiduciary duty.
One of the most challenging problems in AI safety re/ x-risk is that even if you can get one country to do the right thing, getting multiple countries on board is an entirely different ballgame. Some amount of intentional coercion is inevitable.
On the low end, you could pay bounties to international bounty hunters who extract foreign AI researchers in a manner similar to the FBI's most wanted list, and let AI researchers quickly do the math and realize there are a million other well-paid jobs that don't come with this flight risk. On the high end you can go to war and kill everyone. Whatever gets the job done.
Either way, if you want to win at enforcing a new kind of international coercion, you need to be at the top of the pack militarily and economically speaking. That is the true goal here, and I don't think one can make coherent sense out of what Anthropic is doing without keeping that in the back of their mind at all times.
Nicely put. In other words: Department of Morons.
- Anthropic says "no"
- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)
- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."
Bonus points if its some of the hyperscalers like AWS.
Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.
>I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
>Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
which I find frankly disgusting.
Anthropic's statement is little more than pageantry from the knowing and willing creators of a monster.
"Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community."
The moral incoherence and disconnect evident in these two statements is at the heart of why there is generalized mistrust of large tech companies.
The "values" on display are everything but what they pretend to be.
(That logic breaks down somewhat in the case of explicitly negotiated surveillance sharing agreements.)
The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.
There is a strong argument that can be made that using AI to mass surveil Americans within US territory is not only morally objectionable, but also illegal and unconstitutional.
There are laws on the books that allow for it right now, through workarounds grandfathered in from an earlier era when mass surveillance was just not possible, and these are what Dario is referencing in this blog post. These laws may be unconstitutional, and pushing this to be a legal fight, may result in the Department of War losing its ability to surveil entirely. They may not want to risk that.
I wish that our constitution provided such protections for all peoples. It does not. The pragmatic thing to do then is to focus on protecting the rights that are explicitly enumerated in the constitution, since that has the strongest legal basis.
If we're asking "What's the deal" questions, what's the deal with this question? Do only people in democracies deserve protections? If we believe foreign nationals deserve privacy, why should that only apply to people living in democracies?
In the US, one of the rights citizens have is the right against "unreasonable searches and seizures", established in the Fourth Amendment. That has been interpreted by the Supreme Court to include mass surveillance and to apply to citizens and people geographically located within US borders.
That doesn't apply to non-citizens outside the US, simply because the US Constitution doesn't require it to.
I'm not defending this, just explaining why it's different.
But, you can imagine, for example, why in wartime, you'd certainly want to engage in as much mass surveillance against an enemy country as possible. And even when you're not in wartime, countries spy on other countries to try to avoid unexpected attacks.
https://en.wikipedia.org/wiki/Five_Eyes#Domestic_espionage_s...
I believe every country (or bloc) should carve an independent path when it comes to AI training, data retention, and inference. That makes the most sense, will minimize conflicts, and put people in control of their destiny.
The Supreme Court has ruled that the US Constitution protects any persons physically present in the United States and its territories as well as any US citizens abroad.
So if you are a German national on US soil, you have, say, Fourth Amendment protections against unreasonable search and seizure. If you are a US citizen in Germany, you also have those rights. But a German citizen in Germany does not.
What this means in practice is that US three-letter agencies have essentially been free to mass surveil people outside the United States. Historically these agencies have gotten around the domestic restriction by outsourcing their spying needs to three-letter agencies in other countries (e.g., the NSA at one point might outsource spying on US citizens to GCHQ).
It reminds me of some recent horror stories at border crossings - harassing people and requiring giving up all your data on your phone - sets a terrible precedent.
I think it's just saying that spying on another country's citizens isn't fundamentally undemocratic (even if that other country happens to be a democracy) because they're not your citizens and therefore you don't govern them. Spying on your own citizens opens all sorts of nefarious avenues that spying on another country's citizens does not.
"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass _domestic_ surveillance is incompatible with democratic values."
Second class citizens. Americans have rights, you don't. "Democratic values" applies only to the United States. We'll take your money and then spy on you and it's ok because we headquartered ourselves and our bank accounts in the United States.
Very questionable. American exceptionalism that tries to define "democracy" as the thing that happens within its own borders, seemingly only. Twice as tone-deaf after what we've seen from certain prominent US citizens over the last year. Subscription cancelled after I got a whiff of this a month ago.
(Not to mention the definition of "lawful foreign intelligence" has often, and especially now, been quite ethically questionable from the United States.)
EDIT: don't just downvote me. Explain why you think using their product for surveillance of non-Americans is ethical. Justify your position.
A large portion of Americans believe in "citizen rights", not "human rights". By that logic, non-Americans do not have a right to privacy.
Countries routinely use other countries intelligence gathering apparatus to get around domestic surveillance laws.
The reasons this hasn't happened yet are many and often vary by personal opinion. My top two are:
1) Lack of term limits across all Federal branches
and
2) A general lack of digital literacy across all Federal branches
I mean, if the people who are supposed to be regulating this stuff ask Mark Zuckerberg how to send an email, for example, then how the heck are they supposed to say no to the well dressed government contractor offering a magical black box computer solution to the fear of domestic terrorism (regardless of if its actually occurring or not)?
I mean, I guess from '65 to around '96? We had a good run.
If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the Department of War to exploit backdoors, and Anthropic (or any AI company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.
The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be forced to comply with any government demand, because they don't have the keys.
Like maybe it always was just this, but I feel every article I read, regardless of the spin angle, implied "do no harm" was pretty much one of the rules.
1. Military wants a whole new model training system because the current models are designed to have these safeguards, and Anthropic can't afford that (it would slow them down too much, and the engineering effort to set up and maintain another pipeline would be a lot of work/time)
2. Military doesn't want to supply Anthropic usage data or personnel access to ensure its (lack of) use in those areas.
3. It's something almost completely unrelated to what's going on in the news.
His conclusion was that the limits of use ought to be contractual, not baked into the LLM, which is where the fallout seems to be. He noted that the Pentagon has agreed to terms like that in the past.
To me, that seems like reasonable compromise for both parties, but both sides are so far entrenched now we're unlikely to see a compromise.
For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.
If something like that existed, it wouldn't be impossible to uncover:
1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.
2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.
3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.
Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (may be even assisted from within).
I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.
> We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.
Why not do what the US is purported to do, where they spy on other countries' citizens and then hand over the data? I.e., adopt the legalistic view that "it's not domestic surveillance if the surveillance is done in another country", so just surveil from another data center.
> Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.
Yes, well that doesn't sound like that strong an objection: fully automated defence could be good but the tech isn't good enough yet, in their opinion.
Previous case of tangling with the Government.
https://youtube.com/watch?v=OfZFJThiVLI
Jolly Boys - I Fought the Law
Overall, this seems like it might be a campaign contribution issue. The DoD/DoW is happy to accept supplier contracts that prevent them from repairing their own equipment during battle (ref. military testimony favoring right-to-repair laws [1]), so corporate matters like this shouldn't really be coming to a head publicly.
[1] https://www.warren.senate.gov/newsroom/press-releases/icymi-...
Implying other civilians can be put at risk
Aside from my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time, I felt he sounded more like a politician than an entrepreneur.
I know Anthropic is notably more mission-driven than, say, OpenAI. And I respect their constitutional ways of training and serving Claude models. Claude turned out to be a great success. But reading a manifesto speaking of wars and their missions gives me chills.
I'm wondering if 2. was added simply to justify them not cooperating. It's a lot easier to defend 1. + 2. than just 1. If in the future they do decide to cooperate with the DoW, they could settle on doing only mass surveillance, but no autonomous killings. This would be presented as a victory for both parties since they both partially get what they wanted, even though autonomous killing was never really on the table for either of them. Which is a big if given the current administration.
“It remains the Department’s policy that there is a human in the loop on all decisions on whether to employ nuclear weapons,” a senior defense official said. “There is no policy under consideration to put this decision in the hands of AI.”
This indicates the Administration’s support for and compliance with existing US law (Section 1638 of the FY2025 National Defense Authorization Act): https://agora.eto.tech/instrument/1740
Washington Post: https://www.washingtonpost.com/technology/2026/02/27/anthrop...
The statement goes on about a "narrow set of cases" of potential harm to "democratic values", ...uh, hmm, isn't the potential harm from a government controlled by rapists (Hegseth) and felons using powerful AI against their perceived enemies actually pretty broad? I think I could come up with a few more problem areas than just the two that were listed there, like life, liberty, pursuit of happiness, etc.
"In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome," Altman said in a post on X.
https://www.reuters.com/business/openai-reaches-deal-deploy-...
The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.
If so, that's a major problem. If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.
If the limitations are contractual, then there is some room for negotiation.
It does feel like what anyone sane should do (especially given the contradictions being pointed out and the fact that the technology isn’t even there yet) but when you metaphorically have Landa at your door asking for milk, I’m not sure it’s smart.
I feel like what most corpos would do, would be to just roll along with it.
You may not agree with it, but I appreciate that it exists.
That said, it does impact whether Anthropic can sell to the British [0], German [1], Japanese [2], and Indian [3] governments.
Other governments will demand similar terms to the US. Either Anthropic accedes to their terms and gets export controlled by the US or Anthropic somehow uses public pressure to push back against being turned into an American sovereign model.
Realistically, I see no offramp other than the DPA - a similar silent showdown happened in the critical minerals space 6-7 years ago.
[0] - https://www.anthropic.com/news/mou-uk-government
[1] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008
[2] - https://www.anthropic.com/news/opening-our-tokyo-office
[3] - https://www.anthropic.com/news/bengaluru-office-partnerships...
The guardrail on fully automated weapons makes perfect sense, and hopefully becomes standardised globally.
It's not crazy to think that models that learn that their creators are not trustworthy actors or who bend their principles when convenient are much less likely to act in aligned or honest ways themselves.
Good on Anthropic for standing up for their principles, but boo on the discourtesy to the law of the land in acknowledging the administration's vanity titles.
That opening line is one hell of a setup. The current administration is doing everything it can to become autocratic, thereby setting itself up to be adversarial to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to see such a succinct opening instead of just slop.
There are military officials saying they need Anthropic because it is so good. They can't live without it.
All of this really helps Anthropic.
It's good publicity for them. And it gets the military on record saying they are so good they're indispensable. And they can still look like the good guys for resisting, because they were forced.
Personally, I'd rather live in a country which didn't use AI to supplant either its intelligence or its war fighting apparatus, which is what is bound to happen once it's in the door. If enemies use AI for theirs, so much the better. Let them deal with the security holes it opens and the brain-drain it precipitates. I'm concerned about AI being abused for the two use cases he highlights, but I'm more concerned that the velocity at which it's being adopted to sift and collate classified information is way ahead of its ability to secure that information (forget about whether it makes good or bad decisions). It's almost inconceivable that the Pentagon would move so quickly to introduce a totally unknown entity with totally unknown security risks into the heart of our national security. That should be the case against rapid adoption made by any peddler of LLMs who claims to be honest, to thwart the idiots in the administration who think they want this technology they can't comprehend inside our most sensitive systems.
I'm not sure who's targeted here. The folks that want to invade the EU ?
"We will build tools to hurt other people but become all flustered when they are used locally"
This is a very chauvinistic approach... why couldn't another model replace Anthropic here? I suspect it's because government people like the Excel plugin and the font has a nice feel. A few more weeks of this and xAI is the new government AI tool.
Mass surveillance: Agreed… but, I do wonder how we would all feel about this topic if we were having the discussion on 9/12/2001.
The DoW just needs to wait until the next (manufactured?) crisis occurs, and not let it go to waste.
Mark my words: this will be Patriot Act++
I can never tell how much of this is puffery from Anthropic.
I do think they like to overstate their power.
I simultaneously worry that the current administration will do something nuclear and actually make good on their threat to nationalize the company and/or declare the company a supply chain risk (which contradict each other but hey).
Maybe I should call ChatGPT "Bomb"... I already use "make it so" for coding agents, so...
Doesn't matter, really. The genie is out of the bottle and I'm strongly confident US administration will find a vendor willing to supply models for that particular usage.
Does this mean they’d be ok to have their models be used for mass surveillance & autonomous weapons against OTHER countries?
A clarification would help.
"In an ideal world, I'd want xAI to emulate the maturity Anthropic showed here: affirm willingness to help defend democracies (including via classified/intel/defense tools), sacrifice short-term revenue if needed to block adversarial access, but stand firm on refusing to enable the most civilizationally corrosive misuses when the tech simply isn't ready or the societal cost is too high. Saying "no" to powerful customers—even the DoD—when the ask undermines core principles is hard, but it's the kind of spine that builds long-term trust and credibility."
It also acknowledged that this is not what is happening...
> importance of using AI to defend the United States
> Anthropic has therefore worked proactively to deploy our models to the Department of War
So you believe in helping to defend the United States, but you gave the models to the Department of War - explicitly, a government arm now named as inclusive of the actions of a purely offensive capability with no defensive element.
You don't have to argue that you are not supporting the defense of the US by declining to engage with the Department of War. That should be the end of the discussion here.
The name is still the Department of Defense by law; "Department of War" is a secondary tagline.
Every trigger pulled should have moral consequences for those who pull it.
After the standing up for democracy. This is my favorite part. "Your reasoning is deficient. Dismissed."
Ugh.
All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.
But China has some of the most imperialist policies in the world. They are just as imperialist as Russia or America. Military contracts are still massive business.
I also believe the petrodollar will fall, but it isn't going to be because China built exponentially more solar panels.
I know "open-source" AI has its own risks, but with e.g. DeepSeek, people in all countries benefit. Americans benefit from it equally.
Really? Is China non-imperialist regarding Taiwan and Tibet?
the one china policy is imperialism
This is the China that is not only threatening to invade Taiwan but doing live fire exercises around the island and threatening and attempting to coerce Japan for suggesting saying it will go to its defense.
Your comment is ridiculous. It reads like satire.
https://futureoflife.org/open-letter/lethal-autonomous-weapo...
He's now on X bashing Anthropic for taking this same stance. I know this would be expected of him, but many other Google AI researchers signed this, as did Google DeepMind as an organization. We really need to push to keep humans in the kill-decision loop. Google, OpenAI, and xAI are all just agreeing with the Pentagon.
Was this written by the state department?
How can you think that a “department of war” does anything remotely good? And only object to domestic AI surveillance?
As a European I’m kinda... concerned now.
The Chinese are releasing equivalent models for free or super cheap.
AI costs / energy costs keep going up for American AI companies,
while China benefits from lower costs.
So yeah, you have to spread FUD to survive.
Cheney's office touched the presentation given by Gen. Colin Powell, which led Congress to believe there was a need to invade Iraq to save the US from WMDs. Tours of duty were extended from 3 months to 24 months because of "stop loss." Subsequently, the United States paid out trillions for a debt-financed war and some $39 billion to Cheney's company KBR.
Today you learned that the oil company Cheney worked for (Chevron) was trying to bully Afghanistan into a pipeline deal in 1998 and also in 2001.
Cheney donated less than $10 million of his Halliburton/KBR returns, mostly to a heart-medicine program in his own name, and retained a compensation package.
The power lies with the US Govt.
And its corrupt, immoral and unethical, run by power hungry assholes who are not being held accountable, headed by the asshole who does a million illegal things every day.
Ultimately, Anthropic will fold.
All this is to show to their investors that they tried everything they could.
Imagine Anthropic is declared a "supply chain risk" and thus cannot be used by all sorts of big industry players. How will the CEOs of those companies feel about the government telling them they cannot use what their engineers say is the best model? How many of those CEOs have a direct line to power brokers?
How many of those CEOs are already making the phone calls? The "supply chain" threat is a threat to every US company that currently uses Anthropic.
Oh, and that includes Palantir, which is deeply embedded in the government.
Side example: remember the 6 congresspeople who made the video about military orders? They won.
Hegseth probably folds. It would be too unpopular for him to take either of the actions he threatened.
You can’t choose to work with OFAC-designated entities.. there are very serious criminal penalties. Therefore, this statement is somewhat misleading in my opinion.
They don't have any brand poison, unlike nearly everyone else competing with them. Some serious negative equity in that group, be it GOOG, Grok, META, OpenAI, M$FT, DeepSeek, etc.
Claude was just being the little bot that could, and until now, flying under the radar
So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.
Because if there were some kind of concession, it would have been simplest just to work with Anthropic.
Delete ChatGPT and Grok.
They get to look good by claiming it’s an ethical stance.
I do not want to be "defended" by tools controlled by the US government, with or without Trump. But with Trump it is much more obvious now, so I'll pass.
Perhaps AI use will make open source development more important; many people don't want to be subjected to the US software industry anymore. They already control WAY too much - Google is now the biggest negative example here.
What I don't understand is why Hegseth pushed the issue to an ultimatum like this. They say they're not trying to use Claude for domestic mass surveillance or autonomous weapons. If so, what does the Department of War have to gain from this fight?
It's an ideological war, they're desperate to win it, and they're aiming to put a segment of US civil society into submission, and setting an example for everyone else.
He smelled weakness, and like any schoolyard bully personality, he couldn't help but turn it into a display of power.
My guess is they just don’t want to bother. I wonder why they specifically need Claude when their other vendors are willing to sign their terms, unless it specifically needs to run in AWS or something for their “classified networks” requirement.
I'm guessing this is because Anthropic partners with Google Cloud which has the necessary controls for military workloads while xAI runs in hastily constructed datacenter mounted on trucks or whatever to skirt environmental laws.
It's a mistake for the Trump administration because there are only downsides to threatening Anthropic if they need them, and if they try to regulate AI in the West, China wins by default.
That is, the news here is that DoW (formerly DoD) is willing and able and interested in using SOTA AI to enable processing of domestic mass surveillance data and autonomous weapons. Anthropic’s protests aside, you can’t fight city hall, they have a heart attack gun and Anthropic does not. They’ll get what they want.
I am not particularly AI alarmist, but these are facts staring us right in the face.
We are so fucked.
But Hegseth and Trump are abusing federal powers at a rapid clip.
I'm guessing Anthropic would regret any deal with that administration, and could lose control of their technology.
(Stanford Research Institute originally limited their DoD exposure, and gained a lot of customers as a result.)
I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!
This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.
Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.
If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such military AI project — even just cabling the servers or whatever — I figuratively spit in your eye in disgust. You deserve far, far worse.
Having been identified back then, this issue has been systematically stamped out in modern militaries through training methods. Cue high levels of PTSD in modern frontline troops after they absorb what they actually did.
Working with the DoD/DoW on offensive use cases would put these contracts at risk. Anthropic most likely isn't training independent models on a nation-to-nation basis, so exporting the model for offensive use cases would be export controlled, and Anthropic would be shut out of public and even private procurement outside the US, because other governments would demand parity in treatment or retaliate.
This is also why countries like China, Japan, France, the UAE, KSA, India, etc. are training their own sovereign foundation models with government funding and backing, allowing them to use those models on their own terms because it was their governments that built or funded them.
Imagine if the EU had demanded sovereign cloud access from AWS right at the beginning in 2008-09. This is what most governments are now doing with foundation models, because most policymakers, along with a number of us in the private sector, view foundation models through the same lens as hyperscalers.
Frankly, I don't see any offramp other than the DPA even just to make an example out of Anthropic for the rest of the industry.
[0] - https://www.anthropic.com/news/mou-uk-government
[1] - https://www.anthropic.com/news/bengaluru-office-partnerships...
[2] - https://www.anthropic.com/news/opening-our-tokyo-office
[3] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008
genuinely curious, I got nothing
> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
Ah, another head of a huge corporation swears to defend his stockholders' commercial interests through imperial war against other nation-states. And of course "we" are democratic while "they" are autocratic.
The main thing that's disappointing is how some people here see him or his company as "well-intentioned".
I understand the risk, but that is the pill.
"We must use Claude to decide whether to nuke Iran, or else our gun manufacturers aren't allowed to use it to run spreadsheets"
is a bit ridiculous.
Trump and his cronies are short timers. They will all be gone in a few years, many in prison, many in the ground.
Treat them with abandon and disdain, because they are the worst people in the history of the USA. Stand on your principles because they have none.
This is why people should support open models.
When the AI bubble collapses these EA cultists will be seen as some of the biggest charlatans of all time.
Do these rules apply to them too?
Not joking, I've heard from sources that hardliners in the CCP think they can exterminate all white people, followed later by all non-Han, but just keep going along disarming yourselves for woke points. This is like unilaterally destroying all your nuclear weapons in 1946 and hoping the Soviets do too.
They also took down their security pledge in the same breath, so, you know. If Anthropic ends up cutting a deal with the DoD, this is obviously bullshit.
At any rate, I'm incredibly pleased Anthropic has chosen to stick by their (non?) guns here. It was starting to feel like they might fold to the pressure, and I'm glad they're sticking to their principles on this.
We are ruled by a two-party state. Nobody else has any power or any chance at power. How is that really much better than a one-party state?
Actually, these two parties are so fundamentally ANTI-democracy that they are currently having a very public battle of "who can gerrymander the most" across multiple states.
Our "elections" are barely more useful than the "elections" in one-party states like North Korea and China. We have an entire, completely legal industry based around corporate interests telling politicians what to do (it's called "lobbying"). Our campaign finance laws allow corporations to donate infinite amounts of money to politician's campaigns through SuperPACs. People are given two choices to vote for, and those choices are based on who licks corporation boots the best, and who follows the party line the best. Because we're definitely a Democracy.
There are no laws against bribing supreme court justices, and in fact there is compelling evidence that multiple supreme court justices have regularly taken bribes - and nothing is done about this. And yet we're a good, democratic country, right? And other countries are evil and corrupt.
The current president is stretching executive power as far as it possibly can go. He has a secret police of thugs abducting people around the country. Many of them - completely innocent people - have been sent to a brutal concentration camp in El Salvador. But I suppose a gay hairdresser with a green card deserves that, right? Because we're a democracy, not like those other evil countries.
He's also threatening to invade Greenland, and has already kidnapped the president of Venezuela - but that's ok, because we're Good. Other countries who invade people are Bad though.
And now that same president is trying to nationalize elections, clearly to make them even less fair than they already are, and nobody's stopping him. How is that democratic exactly?
Sorry for the long rant, but it just majorly pisses me off when I read something like this that constantly refers to the US as a good democracy and other countries as evil autocracies.
We are not that much better than them. We suck. It's bad for us to use mass surveillance on their citizens, just like it's bad to use mass surveillance on our citizens.
And yet we will do it anyways, just like China will do it anyways, because we are ultimately not that different.
The United States, even before Trump, has always been about projecting power rather than spreading democracy. There are several non-Western former colonies that do democracy better than the US. Despite democratic backsliding being a worldwide phenomenon, very few countries have slid back as much as the US. The US has regularly supported or even created terrorists and authoritarian regimes if it meant that a country wouldn't "go woke." The ones that grew democracy grew in spite of it.
This statement shows just how much they align with the DoD ("DoW" is a secondary name that the orange head insists is the correct one; using that terminology alone speaks volumes) rather than misalign. This, coupled with their dropping of their safety pledge a few days ago, makes it clear they are fundamentally and institutionally against safe AI development/deployment. A minute disagreement on the ways AI can destroy humanity isn't even remotely sufficient if you're happy to work with the bullies of the world in the first place.
And the reason is even more ridiculous. Mass surveillance is bad... because it's directed at us rather than at others? That's a thick irony if I've ever seen one. You know (or should have known) foreign intelligence has even fewer safeguards than domestic surveillance. Intelligence agencies transfer intercepted communications data to each other to "lawfully" get around domestic surveillance restrictions. If this looks at all like standing up, that's because the bar has plunged into the abyss, which frankly speaking is kind of a virtue in the USA.
Ads are coming.
Total humiliation for Hegseth; surely there will be a backlash.
What a shit name
It's absolutely disgusting that they would even consider working with the US government after the Gaza genocide started. These are modern-day holocaust tabulation machine companies, and this time they are selecting victims using a highly unpredictable black-box algorithm. The proper recourse here is to impeach the current administration, dissolve the companies that were complicit, and send their leadership to The Hague for war crimes trials.
AI should never be used in military contexts. It is an extremely dangerous development.
Look at how US ally Israel used non-LLM AI technology "The Gospel" and "Lavender" to justify the murder of huge numbers of civilians in their genocide of Palestinians.
> Anthropic has therefore worked proactively to deploy our models to the Department of War
This should be a "have you noticed that the caps on our hats have skulls on it?" moment [1]. Even if one argues that the sentence should not be read literally (that is, that it's not literal war we're talking about), the only reason for calling it "Department of War" and "warfighters" instead of "Department of Defense" and "soldiers" is to gain Trump's favor, a man who dodged the draft, called soldiers "losers", and has been threatening to invade an ally for quite some time.
There is no such thing as a half-deal with the devil. If Anthropic wants to make money out of AI misclassifying civilians as military targets (or, as has happened, by identifying which residential building should be collapsed on top of a single military target, civilians be damned), good for them, but to argue that this is only okay as long as said civilians are brown is not the moral stance they think it is.
Disclaimer: I'm not a US citizen.
I'll be signing up to Claude again, Gemini getting kind of crap recently anyway.
I'd prefer they get shut down; LLMs are the worst thing to happen to society since the invention of the nuclear bomb. People all around me are losing their ability to think, write, and plan at an extraordinary pace. Keep frying your brains with the most useless tool alive.
Remember, the person who showed their work on the math test in detail did 10x better than the guys who only knew how to use the calculator. Now imagine being the guy who thinks you don't need to know the math or how to use a calculator, lol.
I guess they're evil. Tragic.
In that climate this is a more of a stand than what everyone else is doing.