Just because they're not 1:1 models of reality or predictions doesn't mean that the ideas they communicate are worthless.
In the Soviet Union the reasons might have been "to beat the Capitalists", "for the pride of our country", or "Stalin asked us to, and saying no means we get sent to Siberia". A variant of the last one may well have happened here, and the justification we read is just the one least damaging to everyone involved.
Hegseth was planning to get the model via the Defense Production Act, or to kill Anthropic by classifying it a supply-chain risk, which would bar any other company working with the Pentagon from working with Anthropic. So while it wasn't Siberia, it was about as close as the US can get without declaring Claude a terrorist. Which I'm sure is on the table regardless.
"Pentagon officials said the Defense Department is planning to keep using Anthropic's tools, regardless of the company's wishes."
NPR - Hegseth threatens to blacklist Anthropic over 'woke AI' concerns
Clearly the threat to go to Grok was just bluster, which says volumes about what the admin thinks of Grok vs Claude.
"AI safety and anti-capitalism [...] are at least strongly analogous, if not exactly the same thing." [0]
[0] Nick Land (2026). "A Conversation with Nick Land (Part 2)", interview by Vincent Lê, Architechtonics Substack. Retrieved from vincentl3.substack.com/p/a-conversation-with-nick-land-part-a4f