Couldn’t help but laugh at the irony here, you’re not wrong! The fact of the matter is that Anthropic is an “unacceptable risk”… one the government had contracted with for use on classified milnet.
source:
https://www.hoyerlawgroup.com/what-the-dod-anthropic-dispute...
That contract was already signed and active, the government had already agreed to Anthropic’s terms, and contractors were already cleared to use Claude on the classified networks; it was only when Anthropic started enforcing those pre-existing guardrail clauses (probably for good reason) that Hegseth got pissy.
Guess it should go without saying: if you cannot support clause A (surveillance of Americans) and clause B (AI-assisted weapons systems), then you are a /supply chain risk/. Lord knows we don’t need heroes here.
But you know, if abiding by those terms is a legitimate threat to your supply chain, then why would you agree to those stipulations to begin with? ;)
Edit:
To respond more directly to your point: big disagree, this can absolutely be used to coerce compliance. The crucial thing you’re missing is that the government /threatened/ to designate them a risk in response to the CEO’s enforcement of the clause. The government gave them a -timeline- to desist and comply… which undercuts the claim that they are a supply chain risk. The judge is a moron.
The -only- legal argument for the designation is the ugliest one: the fact that Anthropic is willing to play dead canary. “You’re not a supply chain risk a priori, but you’re a supply chain risk for asserting this work violates A and B.”
By the way… the same two stipulations exist in OpenAI’s contract with them… nudge nudge wink wink