What I don't get, though, is why the so-called "Department of War" targeted Anthropic specifically. What about the others, especially OpenAI? Have they already agreed to cooperate, or already refused? Why aren't they part of this?
Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn't matter, one person saying no is a threat and an affront. It doesn't matter if there are equivalent or even better alternatives; it wouldn't even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.
This is protection racketeering 101! So much so that if any form of a functioning US judicial system makes it past 2028, I'm willing to put money on more than a handful of people in the upper echelons of today's administration getting slapped with the RICO Act.
"At Anthropic, we build AI to serve humanity’s long-term well-being."
Why does Anthropic even deal with the Department of @#$%ing WAR?
And what does Amodei mean by "defeat" in his first paragraph?
And I think the stakes have changed today - it's one thing to make bombs that might or might not hit civilians; it's another to build an AI system that gives humans a "score" the military then uses to decide whether they live or die, as some systems already do ("Lavender," used by the IDF, is exactly this).
Even with the best intentions in mind, you don't know how the systems you built will be used by the governments of tomorrow.
Of course.
> Even with the best intentions in mind, you don't know how the systems you built will be used by the governments of tomorrow.
All technology and labour can be abused, yes. All the more reason to ensure a strong system of law, so that the government can't just seize businesses or their technology on a whim. Such seizures did happen back in WW2, but not often, because they were unpopular.
But then the United Mine Workers coal miners went on strike in 1943, and the War Labor Disputes Act was passed (even overriding an FDR veto), threatening to seize the mines nationally and conscript the miners under the Selective Service Act. Thankfully cooler heads prevailed. The US populace turned against unions owing to the popularity of the war effort, and the miners went back to work after getting assurances that their pay demands would be negotiated.
Ultimately I think we're far from this in today's era (though US or Canadian governments forcing back-to-work legislation is increasingly normal), but the point is, pacifists have limited options in wartime if a majority of public opinion supports the war effort.
This is the oft-spoken fallacy of the benefit of hindsight. Folks in that situation 80 years ago did what they had to do, to stop Japan from continuing to rape and murder hundreds of thousands of people in southeast Asia. But of course, you would have found a better option. How's the view, standing on the shoulders of giants?
And nobody knows what he means by "defeat," because no journalist interrogates or pushes back on his grand statements when they hear them. Amodei has a history of claiming they need to "empower democracies with powerful AI" before [China] gets to it first, but he never elaborates on why, or on what he expects to happen if the opposite comes to pass. I assume he means China will inevitably wage cyberwar on the US unless the US has a "nuclear deterrent" for that kind of thing. But seeing how this administration handles its own AI vendors, I am currently more afraid of such an "empowered democracy" than of China. Because of Greenland, because of "our hemisphere". Hard nope to that.
Oh, btw, Dario isn't against the DoD using Claude for mass surveillance outside of the US; he says it basically outright in the text. Humanity stops at Americans.
Even if they have, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their own employees by publicly choosing a side.