On its face that’s not a crazy stance: Governments are meant to represent the public, while private companies obviously aren't. I think it’s somewhat understandable why the government might reject that kind of "we know better than you" clause.
Of course, the reaction is wildly out of proportion. A normal response would just be to stop doing business with the company and move on. Labeling them a supply chain risk is an extreme response.
I don’t think Anthropic is wrong to include that clause with this particular administration, and I doubt the administration is internally framing the issue the way I did; more likely it's defaulting to simple authoritarian instincts.
But a more reasonable administration could raise the same concern, and I think I would agree with them.
Maybe the argument is that they should, but I don't agree with that. If Anthropic or any of these other vendors have reservations about the logical conclusion of how these tools are or will be used, then they should not sell to the government. Simple as that. However ... if the claims Anthropic et al. make about how these systems will develop and the capabilities they will have are at all true, then the government will come knocking anyway.
Dario has even said something along these lines at one point: As the technology matures, it’s very possible the government either nationalizes or semi-nationalizes companies like Anthropic.
That doesn’t seem out of the realm of possibility if they can’t land on a relationship similar to existing defense contractors like Raytheon, where these kinds of discussions evidently don't happen.
I can't agree that this is the right comparison. What is being sold here is not just another type of missile or tank; it is the very agency and responsibility over life and death. It's potentially the firing of thousands of missiles.
I was under the impression that Anthropic would just be providing the models and setup support to run them in AWS GovCloud. They do not have any real insight into what is being asked. Maybe a few engineers have the specific clearances to access and debug the running systems, but that would be one or two people embedded to debug inference issues, not something that would be analyzed by others in the company.
The whole 'do not use our models for mass surveillance' clause is, at the end of the day, an honor system. Companies have no real way of enforcing it, or of determining that it has been violated. That said, at least historically, one has been able to trust the government to abide by commercial agreements. The people who work in cleared positions are generally selected for honesty, ability, and willingness to follow rules.
>The whole 'do not use our models for mass surveillance' is at the end of the day an honor system. Companies have no real way of enforcing that clause, or determining that it has been violated.
You are also correct here, imo, with one important caveat: even if private companies had the means to enforce that clause, it is not their business to do so. Maybe that's the crux of the problem, one of perspective. The for-profit entity in these arrangements is not, and can never be, trusted as the mechanism of enforcement for whatever we, as a republic, decide the rules are. That is the realm of elected government. Anthropic employees are certainly making their voices heard on how they believe these tools should be used, but, again, this is an is-versus-ought problem for them.
In a version of a trolley problem where you're on a track that will kill innocent people, and you have the opportunity to set up a contract that effectively moves a switch to a track without anyone on it, is it not imperative to flip that switch?
(One might argue that faster reaction times could save service members' lives - but the whole point is that if the autonomous targeting is incorrect, it may just as well lead to increased violence and more service member casualties in the aggregate.)
And we're not talking about the ethics board manipulating individual token outputs subtly, which would indeed be a supply chain risk - we're talking about a contractual relationship in which, if a supplier detects use outside of the scope of an agreed contract, it has the contractual right to not provide the service for that novel use, while maintaining support for prior use cases.
The fact that the government would use the threat of a supply-chain-risk designation to extract a better contract is unprecedented, and it erodes the government's standing as a reliable counterparty in general.
This problem is really difficult to discuss because we are all wrapping the capabilities of these tools into our response framing. These are tools, or weapons. Your hypothetical could just as easily be applied to the GBU-39, a smaller precision-guided bomb meant to take out, say, a single vehicle in a convoy rather than the entire set of vehicles. If you're not confident in what the product is supposed to do, and you've already sold it to the government, you have lied, and they are going to come back to you asking some direct questions.
On the other hand, why should the government have infinite power to override how a business operates? If you're not able to refuse to sell to the government, isn't that basically forced speech and/or forced labor?
And now that we see the government blatantly disrespecting the constitution and the rule of law the civil community must react.
The government shouldn’t be able to set the terms of its contracts with private companies and walk away if those terms aren’t met? That seems like a stretch.
The constitution is a wildly different premise from government contracting with private companies.
The government shouldn't be able to coerce a business to do whatever it wants.