I have two qualms with this deal.
First, Sam's tweet [0] reads as if this deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think this is much of an assurance at all - obviously at some level a human must be responsible, but this is vague enough that I worry the responsible human could be very far out of the loop.
Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and the process for changing them. But I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.
Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each of them. That's not clear from anything I've seen so far, but things are moving quickly so that may change in the coming days.
[0] https://x.com/sama/status/2027578652477821175
[1] https://x.com/UnderSecretaryF/status/2027594072811098230
I don't want to overanalyze things but I also noticed his statement didn't say "our agreement specifically says chatgpt will never be used for fully autonomous weapons or domestic mass surveillance." It said something that kind of gestured towards that, but it didn't quite come out and say it. It says "The DoW agrees with these principles, and we put them in our agreement." Could the principles have been outlined in a nonbinding preamble, or been a statement of the DoW's current intentions rather than binding their future behavior? You should be very suspicious when a corporate person says something vague that somewhat implies what you want to hear - if they could have told you explicitly what you wanted to hear, they would have.
But anyway, it doesn't matter. You said you don't think it should be used for autonomous weapons. I'd be willing to bet you 10:1 that you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons", now or any point in the future.
In that case, what on earth just happened?
The government was so intent on amending the Anthropic deal to allow 'all lawful use', at the government's sole discretion, that it is now pretty much trying to destroy Anthropic in retaliation for refusing this. Now, almost immediately, the government has entered into a deal with OpenAI that apparently disallows the two use cases that were the main sticking points for Anthropic.
Do you not see something very, very wrong with this picture?
At the very least, OpenAI is clearly signaling to the government that it can steamroll OpenAI on these issues whenever it wants to. Or do you believe OpenAI will stand firm, even having seen what happened to Anthropic (and immediately moved in to profit from it)?
> and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples)
If OpenAI leadership sincerely wanted this, they just squandered the best chance they could ever have had to make it happen! Actual solidarity with Anthropic could have had a huge impact.
The two things Anthropic refused to do are mass surveillance and autonomous weapons, so why do _you_ think OpenAI refused and still did not get placed on the exact same list?
It's fine to say "I'm not going to resign. I didn't even sign that letter", but thinking that OpenAI can get away with not developing autonomous weapons or mass surveillance is naive at the very best.
Today it can't be used for mass surveillance, but the executive branch has all the authority it needs to later deem that lawful if it wishes to, the Patriot Act and others see to that.
Anthropic was making the limits contractually explicit, meaning the executive branch could change the line of lawfulness and still couldn't use Anthropic models for mass surveillance. That is where they got into a fight and that is where OpenAI and others can claim today that they still got the same agreement Anthropic wanted.
You, and your colleagues, should resign.
And the US Military is forbidden from operating on US soil, but that didn't stop this administration from deploying US Marines to California recently.
You're fooling yourself if you think this administration is following any kind of rule.
So, can you please draw the line when you will quit?
- If the OpenAI deal allows domestic mass surveillance
- If OpenAI allows the development of autonomous weapons
- If OpenAI no longer asks for the same terms for other AI companies
Correct?
If so, then if I take your words at face value:
- By your reading non-domestic mass surveillance is fine
- The development of AI based weapons is fine as long as there is one human element in there, even if it could be disabled and then the weapon would work without humans involved
- If OpenAI asks for the same terms for other AI companies and those terms are not granted, that's also fine, because, after all, they did ask.
I have become extremely skeptical when I see people whose livelihood depends on a particular legal entity come out with precise wording around what does and does not constitute their red line, but I find it fascinating nonetheless, so if you could humor me and clarify, I'd be most obliged.
It doesn't even matter if OpenAI is offered the same terms that Anthropic refused. It's absurd to accept them and do business with the Pentagon in that situation.
If you take the government at its word, it's killing Anthropic because Anthropic wanted to assert the ability to draw _some_ sort of redline. If OpenAI's position is "well sucks to be them", there's nothing stopping Hegseth from doing the same to OpenAI.
It doesn't matter at all if OpenAI gets the deal at the same redline Anthropic was trying to assert. If at the end of this the government has succeeded in cutting Anthropic off from the economy, what's next for OpenAI? What happens next time when OpenAI tries to assert some sort of redline?
What's the point of any talk of "AI Safety" if you sign on to a regime where Hegseth (of all people) can just demand the keys and you hand them right over?
Edit: I don’t work at OpenAI or in any AI business and my neck is on the chopping block if AI succeeds… like a lot of us. Don’t vilify this guy trying to do what’s right for him given the information he has.
And you believe the US government, let alone the current one will respect that? Why? Is it naïveté or do you support the current regime?
> If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit.
So your logic is your company is selling harmful technology to a bunch of known liars who are threatening to invade democratic countries, but because they haven’t lied yet in this case (for lack of opportunity), you’ll wait until the harm is done and then maybe quit?
I’ll go out on a limb and say you won’t. You seem to be trying really hard to justify to yourself what’s happening so you can sleep at night.
Know that when things go wrong (not if, when), the blood will be on your hands too.
The evidence seems to overwhelmingly point in the opposite direction.
If you think that means your company isn't going to be involved in lethal autonomous weapons and mass domestic surveillance... I don't really know what to tell you. I doubt you really believe that. Obviously you will be involved in that and you are effectively working on those projects now.
What is your red line?
OpenAI agrees to be put in the same position as Anthropic.
It seems like you must actually somehow believe that history will repeat itself, Hegseth will deem OpenAI a supply chain risk too, then move to Grok or something?
There's surely no way that's actually what you believe...
I don't mean this in any way to be rude, and I apologize if it comes across as such, but believing it won't be used in exactly this way is just naive. History has taught us this lesson again and again and again.
There's a big difference between "the government won't use our tools for domestic surveillance" (DoW/DoD/OpenAI/etc) and "we won't allow anyone to use our tools to support domestic surveillance by the government" (Anthropic)
Hegseth and the current Trump admin are completely incompetent in execution of just about everything but competent administrations (of both parties) have been playing this game for a long time and it's already a lost cause.
Or Sam bribed the government to do this, which is also entirely possible.
I do not know, but I would not be very optimistic about those new terms.
Someone might just create a spin-off of OpenAI under a different label and do all that stuff there...
There is not much of a guarantee, I think.
Standing up for what's right is often not easy and involves hard choices and consequences. Your leader has shown you and the world that he is not to be trusted.
I can't tell you what to do but I hope you make the right decision.
> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons
Your understanding is entirely wrong. At least stop lying to yourself and admit that you are entirely fine with working on evil things if you are paid enough.
Is it really worth the long-term risk being associated with Sam Altman when the other firms would willingly take you and probably give you a pay bump to boot?
It doesn't make sense to me why anyone would want to associate themselves with Altman. He is universally distrusted. No one believes anything he says. It's insane to work with a person whom PG, Ilya, Murati, and Musk have all called a liar and just a general creep.
Defending him or the firm's actions instantly makes you look terrible, like you'll gladly accept the "Elites vs UBI recipients" future his vision propagates.
Shame on you people. What a disgusting vision.
Y’all are developing amazing technology. But accept reality and drop whatever sense of moral righteousness you’re carrying here. Not because some asshole on the internet says so, but for your own mental health.
One got characterized as a supply chain risk; so much for OpenAI getting the same.
And even so, I could be wrong, but if I remember correctly, OpenAI and every other company had basically accepted all uses, and it was only Anthropic that said no to these two demands.
And I think this whole scenario became public because Anthropic refused; I do think the deal could've been done sneakily if Anthropic had wanted.
So OpenAI taking the deal doesn't change the fact that, to me, it looks like they can always walk it back, and all the optics are horrendous for OpenAI, so I am curious what you think.
The thing I am thinking, on the other hand, is: why would OpenAI come out and say, hey guys, yeah, we are going to feed autonomous killing machines? Of course they are going to try to keep it a secret right before their IPO. You are an employee and you mention walking out of OpenAI, but with the current optics it seems that you and other OpenAI employees are more willing to keep working because the evidence isn't out here. To me, as others have pointed out, it looks like slowly boiling the water.
OpenAI gets to have its cake and eat it too, but I don't think there's a free lunch. I simply don't understand why the DoD would make such a big mess about Anthropic's terms being outrageous and then sign a deal with the same terms with OpenAI, unless there's a catch. Only time will tell how wrong or right I am.
If I may ask, how transparent is OpenAI from an employee's perspective? Just out of curiosity: would you as an employee be informed if OpenAI's top leadership (Sam?) decided that the deal gets changed and the DoD gets to have autonomous killing machines? Would you as an employee, or we as the general public, get information about it if the deal is done through secret back doors? Snowden showed that a lot of secret court deals were not made available to the public until he blew the whistle, but not everything gets whistleblown, so I am genuinely curious to hear your thoughts.
I think it's wrong for someone to ask someone else to resign, but acting as if there is no issue here is debating in bad faith.
[1]: https://www.wired.com/story/openai-staff-walk-protest-sam-al...
In my mind the only people left are those who are there for the stocks.
But they did.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
My bet is that what the DoW wants is pretty clearly tied to mass surveillance and kill-bots. Altman is a snake.
And they are crossing the picket line, which honestly I was sure they would do, though I did expect it to take a bit longer.
This is too transparent even for sama.
You could recoup your investment in a year by collecting tolls. Expedited financing available on good credit!
Well some may voluntarily leave, some will be actively poached by Anthropic perhaps and some I suppose will stay in their jobs because leaving isn't an easy decision to make.
Do you mean the same OpenAI that has a retired U.S. Army General & former director of the NSA (Gen. Nakasone) serving on its board of directors?
Have we been watching the same Trump admin for the last year? That sounds exactly like something the government would do: pointlessly throw a fit and end up signing a worse deal after blowing all its political capital.
https://www.theguardian.com/world/2026/feb/21/tumbler-ridge-...
I'd wager they'll create the autonomous military robots themselves for that check.
Sometimes money is more attractive than morality. So I guess money is the answer here.
So they're using Anthropic's own words to cover a power play, or leaning on relationships to see if they could get Anthropic to balk at it.
The morals were just there while it was easy virtue signaling.
Same for almost all Google, Facebook, etc. Prove me wrong, please.
um, easy -- everyone has a price. Some of the most highly-paid workers on the planet work there.
Pay me $5M/yr and there are a LOT of things I wouldn't do for $300k.
There is more to this story behind the scenes. The government wanted to show power and control over our companies and industries. They didn’t need those terms for any specific utility, they wanted to fight “woke” business that stood up to them.
Supposedly OpenAI had the same terms as Anthropic (according to SamA). Maybe they offered it cheaper and that’s why they agreed. Maybe it’s all the lobbying money from OpenAI that let the government look the other way. Maybe it’s all the PR announcements SamA and Trump do together.
"we put them into our agreement." is strange framing is Altman's tweet. Makes me think the agreement does mention the principles, but doesn't state them as binding rules DoD must follow.
I don't necessarily think he's lying, but there's so much obvious incentive for him to lie here (if only because his employees can save face).
He said human responsibility. Anthropic said human in the loop.
And Anthropic refused to say any lawful purpose would be allowed reportedly.
Ultimately, I don't know how much the specific reasons matter. Pete Hegseth must be removed from office, OpenAI must be destroyed for their betrayal of the US public, that's all there is to it.
But what's the most charitable / objective interpretation of this?
For example - https://x.com/UnderSecretaryF/status/2027594072811098230
Does it suggest that determination of "lawful use" and Dario's concerns falls upon the government, not the AI provider?
Other folks have claimed that Anthropic planned to burn the contentious redlines into Claude's constitution.
Update: I have cancelled my subscriptions until OpenAI clarifies the situation. From an alignment perspective Anthropic's stand seems like the correct long-term approach. And at least some AI researchers appear to agree.
It's absurd, and doubly so if OAI's deal includes the same or even similar redlines to what Anthropic had.
this seems strictly better than what anthropic had. anthropic has ruined their relationship with the US govt, giving oai a good negotiating hand
the oai folks are good at making deals, just look at all the complex funding arrangements they have
>The axios article doesn’t have much detail and this is DoW’s decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective conditions included in Anthropic’s TOS, then yes. This, btw, was a compromise offered to—and rejected by—Anthropic.
https://x.com/UnderSecretaryF/status/2027566426970530135
> For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.
> Even if the substantive issues are the same there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO. As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.
> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here
> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here
He's an administration official openly cheerleading his team. This should be characterized as the insider perspective/spin, not a neutral analysis of the relevant facts.

Nothing in the quoted text comes anywhere close to the realm of justifying the retaliatory actions.
1. We've seen government lawyers write memos explaining why such-and-such obviously illegal act is legal (see: torture memo). Until challenged, this is basically law.
2. We've seen the government change the law to make whatever they want legal (see: Patriot Act)
3. We've seen courts just interpret laws to make things legal
A contractor doesn't realistically have the power to push back against any of these avenues if they agree to allow anything legal.
(At the risk of triggering Godwin's Law, remember that for the most part the Holocaust was entirely legal - the Nazis established the necessary authorization. Just to illustrate that when it comes to certain government crimes, the law alone is an insufficient shield.)
So the question is: do you trust the government to effectively govern its own use of AI? or do you trust Anthropic's enforcement of its TOS?
Does the qualifier "domestic" for mass surveillance mean that OpenAI allows the use of its models for whatever isn't "domestic"?
> ... Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force ...

If his characterization of the agreement is correct, which I will not believe and you should not believe until a trustworthy news outlet publishes the text, I suppose this would convince me that Hegseth does not literally plan to build a Terminator for democracy-ending purposes. There's a lot of inexcusable stuff here regardless, but perhaps merely boycotting OpenAI and the US military would be a sufficient response if this all checks out.
It seems like you chose to immediately disbelieve it.
> until a trustworthy news outlet publishes the text
If you've found one of these, let me know. I'm still looking...
ChatGPT maker OpenAI has the same redlines as Anthropic when it comes to working with the Pentagon, an OpenAI spokesperson confirmed to CNN.
https://edition.cnn.com/2026/02/27/tech/openai-has-same-redl...
Anyone thinking they have any virtue is naive.
This is a red line for me. It's clear OpenAI has zero values and will give Hegseth whatever he wants in exchange for $$$.
https://www.nytimes.com/2026/02/27/technology/openai-reaches...
A few months down the line, OpenAI will quietly decide that their next model is safe enough for autonomous weapons, and remove their safeguard layer. The mass surveillance enablement might be an indirect deal through Palantir.
Anthropic said that mass surveillance was per se prohibited even if the government self-certified that it was lawful.
https://www.binance.com/en/square/post/35909013656801
I'm sure more will drop in the coming months.
* no military use
* no lethal use
* no use in support of law enforcement
* no use in support of immigration enforcement
* no use in mass surveillance
* no use in domestic mass surveillance (but mass surveillance of foreigners is OK)
* no use in domestic surveillance
* no use in surveillance
* require independent audits
* require court oversight
* require company to monitor use
* require company to monitor use and divulge it to employees
* some other form of human rights monitoring or auditing
* some other form of restriction on theaters/conflicts/targets
* company will permit some of these uses (not purport to forbid them by license, contract, or ToS) but not customize software to facilitate them
* company can unilaterally block inappropriate uses
* company can publicly disclose uses it thinks are inappropriate
* some other form of remedy
* government literally has to explain why some uses are necessary or appropriate to reassure people developing capabilities, and they have some kind of ongoing bargaining power to push back
It feels normal to me that a lot of people would want some of those things, but kind of unlikely that they would readily agree on exactly which ones.
I even think there's a different intuition about the baseline because one version is "nobody works on weapons except for people who specifically make a decision to work for an arms company because they have decided that's OK according to their moral views" (working on weapons is an abnormal, deliberate decision) and another version is "every company might sell every technology as part of a weapons system or military application, and a few people then object because they've decided that's not OK according to their moral views" (refusing to work on weapons is an abnormal, deliberate decision). I imagine a fair number of people in computing fields effectively thought that the norm or default for their industry was the latter, because of the perception that there are "special" military contractors where people get security clearances and navigate military procurement processes, and most companies are not like that, so you were not working on any form of weapon unless you intentionally chose to do so. But, having just been to the Computer History Museum earlier this week, I also see that a lot of Silicon Valley companies have actually been making weapons systems for as long as there has been a Silicon Valley.
If theres anything this admin doesn't like, its being postured against or called out by literally anyone, especially in public.
- OpenAI is ok with use of their AI for autonomous weapons, as long as there is "human responsibility"
- Anthropic is not ok with use of their AI for autonomous weapons
An algorithm, an ML model trained to predict next tokens to write meaningful text, is going to KILL actual humans by itself.
So killing people is legal,
Killing people by a random process is legal,
A randomized algorithm deciding on who to kill is legal,
And some of you think you are legally protected because they used the word “domestic”?
They will deploy this on a domestic scale and claim to use it to locate non-domestic threats. I can’t believe anyone is falling for this.
Who said that any of it is legal? Keeping in mind that when the government does something, it usually takes more than 24h for there to be an official determination on whether they broke the law.
So there’s the difference, and an erasure of a red line. OpenAI is good with autonomous weapon systems. Requiring human responsibility isn’t saying much. There are already military courts, rules of engagement, and international rules of war.
The ones that did might as well leave. But there was no open letter when the first military contract was signed. [1] Now there is one?
[0] https://news.ycombinator.com/item?id=47176170
[1] https://www.theguardian.com/technology/2025/jun/17/openai-mi...
They’re willing to let their brand go to trash for this government contract.
Pretty much every American is standing with Anthropic on this. No one left or right wants mass surveillance and terminators. In fact, no one in the world wants this, except the US military.
But Altman seems so desperate to keep the cash coming he’s ready to do anything.
and we know we can trust openAI because they were founded on "open" and "safe" AI (up until they realized how much money there was to be made, at which point their only value changed to "make money")
Yesterday and the day before sentiment seemed to be focused on “Anthropic selling out”, then that shifted to “Anthropic holds true to its principles in a David vs Goliath” and “the industry will rally around one another for the greater good.” But suddenly we’re seeing a new narrative of “Evil OpenAI swoops in to make a deal with the devil.”
Reminds me of that weekend where Sam Altman lost control of OpenAI.
Mad respect to Sam; now I believe OpenAI has a better chance to win the race.
And people wonder how we got here.
But I suspect the public sentiment will eventually turn against him. When society sets its pitchforks on big tech he’ll be the poster boy. A 21st century John D. Rockefeller.
Him, Musk, Bezos, and Zuck.
anything HN countersignals, go long on.
Income and revenue sources always, inevitably, and without fail, determine behavior.
I hope everyone goes and works for Anthropic and OpenAI collapses.
Markets going to be interesting on Monday. This plus a war. Urgh.
So it wasn't about those principles making them a supply chain risk? They're just trying to punish Anthropic for being the first ones to stand firm on those principles?
weasels gonna weasel
Sadly it would be very difficult for Anthropic to relocate to another country with their IP, models, and infrastructure.
(Guess I need to build everything I intended this year in a weekend.)
I always assumed those folks need a way to look strong to their base for a media moment, rather than caring about equitable application of the policies or law.
I posted about this here after Sam made his tweet:
https://news.ycombinator.com/item?id=47189756
Source: https://defensescoop.com/2025/01/16/openais-gpt-4o-gets-gree...
Both are based in Europe but Proton Lumo has the better privacy promises.
Would be interested in experiences of others with those alternatives for question/answering type research (not for coding for which there exist other, better alternatives like Gemini and Claude)
But tbh I just switched to Anthropic, they need all the support they can get. Claude is great for question/answer.
The little respect I had left for Sam is now wiped. Makes me sick.
Growing up I always thought AI would be this beautiful tool, this thing that opens the gates to a new society where work becomes optional in a way. But I failed to think about human greed.
I remember following OpenAI way back when it was a non profit explaining how AI uncontrolled could be highly detrimental. Now Sam has not only taken that non profit and made it for-profit. It seems he’s making the most evil decisions he can for a buck.
Cancel your subscription, tell your friends to. And vote to heavily tax these companies and their leaders.
Ended up renewing my Claude sub today instead. Principled stances matter and I no longer trust OpenAI to be trustworthy custodians of my AI History.
I linked to https://notdivided.org/ as the reasoning why.
Was shocking back then to think how far we’ve come.
taking real action is your choice, but stop pretending this kind of thing matters one iota
edit: to be clear, i'm not advocating for nihilism, but tricking yourself into thinking you made a difference to make yourself feel better isn't the play either
Cancelling ChatGPT sends a signal that you don't agree with weaponizing AI. Switching to Claude says you support Anthropic's principled stance against it. If you have a strong opinion either way, today is the day to vote with your wallet.
Dismissing every small action as meaningless is just apathy and how nothing ever changes.
I was not a Chat-GPT user even before this, but I'm bumping my Claude Code subscription to the next tier up. Fuck OpenAI.
This is blatantly false and intellectually dishonest. Of course it matters. Your edit is also wrong; you are advocating for nihilism with statements like these.
I also absolutely do not trust sleazy Sam Altman when he claims he has the exact same redlines as Anthropic:
> AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
If Hegseth and Trump attack Anthropic and sign a deal with OpenAI under the same restrictions, it means this is them corrupting free markets by picking which companies win. Maybe it’s at the behest of David Sacks, the corrupt AI czar who complained about lawfare throughout the Biden administration but now cheers on far worse lawfare.
So it’s either a government looking to surveil citizens illegally or a government that is deeply corrupt and is using its power to enrich some people above others.
So by that measure the US govt can go get some Israeli software to surveil their domestic populace!
Homo sapiens deserve to become extinct.
Anthropic probably made the mistake of questioning the military's activities related to Claude after the Venezuela mission, and wanted reassurance that the model wouldn't be used for the redlined purposes; the military didn't like this and told them they weren't using Anthropic's models unless Anthropic agreed not to question them, and then the back and forth started.
In the end, we will probably have both OpenAI and Anthropic providing AI to the military and that's a good thing. I don't think they will keep the supply chain risk on Anthropic for more than a week.
(Person Of Interest for those who haven't seen it, watched it a decade ago and it's actually quite surprising how on point it ended up being)
Why? It is in the admin's interest to absolutely destroy Anthropic. Make them an example.
Screw Sam, and screw OpenAI. I've been a customer of theirs since the first month their API opened to developers. Today I cancelled my subscription and deleted my account.
I'd already signed up for Claude Max and had been slow to cancel my OpenAI subscriptions. This finally made the decision easy.
Right. Pete "FAFO" Hegseth is a model of intelligence, moderation, and respect for due process. Nothing to see here.
The same day:
Pssst psst Samy Samy, come here we have money and data psst
> Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
Under normal circumstances, that would seem really plausible. But given how far Trump continues to go just out of spite and to project power, it actually is the opposite.
I am fully prepared to believe that they got absolutely nothing else out of it (to date).
However, if you live in the US and pay a passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:
* Make a negotiation personal
* Emotionally lash out and kill the negotiation
* Complete a worse or similar deal, with a worse or similar party
* Celebrate your worse deal as a better deal
Importantly, you must waste enormous time and resources to secure nothing of substance.
That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.
This is really about the imminent strike on Iran which is now super telegraphed. They are gonna use ChatGPT for target selection, and the likely outcome is that it will fuck things up and a bunch of civilians are going to die because of this decision.
When this happens, Altman will go from being merely a grifter to having blood on his hands.
A lot of innocent people are about to be harmed because the cogs of fascism are lubricated with blood.
For hardline right-wing Israeli government officials who would be privy to such information, the window of time to leverage the US to enact regime change on the Islamic Republic is closing. The survival of Israel over the long run really depends on not having a hardline Islamic regime in Iran developing nuclear weapons. Things like AI safety and US elections are secondary to such prerogatives. The question for voters in the US is whether it really is worth it to the average US citizen to shed blood and tax dollars for this stuff.
I hope there can be a peaceful regime change in Iran and that there will be peaceful relations with Iran and Israel in the future. But damn I wish things could go back to normal with our US political system once this is all settled.
1. There's no substantive change. Hegseth/Trump just wanted to punish Anthropic for standing up to them, even if it didn't get them anything else today -- establishing a chilling effect for the future has some value for them in this case, after all. And OpenAI was willing to help them do that, despite earlier claiming that they stood behind Anthropic's decisions.
2. There is a substantive change. Despite Altman's words, they have a tacit understanding that OpenAI won't really enforce those terms, or that they'll allow them to be modified some time in the future when attention has moved on elsewhere.
Either way, it makes Altman look slimy, and OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand. It's been clear for a while that Anthropic has more ethics than OpenAI, but this is more naked than any previous example.
Just to be clear, you believe that the correct, principled stand is that it's OK to use their models for killing people and civilian surveillance?
Both OAI and Anthropic have the same moral leg to stand on here, OAI is just not hypocritical about it.
The US military _does not_ need to build autonomous weapon systems and _should not_ surveil US citizens broadly.
> The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
(1) Well, did both sides sign the agreement and is it actually effective? Or is it still sitting on someone's desk until it can get stalled long enough?
(2) What does "agreement" even mean? Is it a legally enforceable contract, or just some sort of MoU or pinkie promise?
(3) If it's a legally enforceable contract, is it equally enforceable on all of their contracts, or just some? Do they not have existing contracts this would need to apply to?
(4) What does "reflects them in law and policy" even mean? Since when does DoW make laws, and in what sense do their laws reflect whatever the agreement was? Are these laws he can point to so everyone else can see? Can he at least copy-paste the exact sentences the government agreed to?
Use it to save your data, shouldn't be hard to get it working elsewhere
so foreign mass surveillance is all good?
The whole story makes no sense to me. The DoW didn’t get what they wanted, and now Anthropic is tarred and feathered.
https://www.wsj.com/tech/ai/trump-will-end-government-use-of...
“OpenAI Chief Executive Sam Altman said the company’s deal with the Defense Department includes those same prohibitions on mass surveillance and autonomous weapons, as well as technical safeguards to make sure the models behave as they should.”
On the surface, it looks like both rejected 'domestic mass surveillance' and 'autonomous weapon systems', but there seem to be important differences in the fine print, since one company is being labeled a 'supply chain risk' while the other 'reached the patriotic and correct answer'.
One explanation would be that the DoW changed its demands, but I doubt that. Instead, I believe OpenAI found a loophole that allows those cases under certain conditions.
When I need advice for my clandestine operations I always reach for Grok.
HN: if you continue to subscribe to OpenAI, if you use it at your startup, you’re no better than the tech bros you often criticize. This is not surprising but beyond shady.
shocked pikachu face
Come on, by now we all know the only thing Altman (who else is still at OpenAI from the start?) wants is more money and more power; it doesn't really matter how.
I know I'll get downvoted, but come on, this is so very naive.
So until we see the contract I think it’s fair to assume that OAI and Anthropic got roughly the same deal, with Anthropic insisting on language that actually limits the government, while OAI licked the boot and is passing it off like filet mignon.
What Sam and Greg don't realize is that the many who succumb to Trump's pressure tactics will all be lumped into the same category by history.
Sam and Greg are handing an authoritarian regime that has broken so many laws in the past year a superweapon.
They got divided 12 hours later, lol.
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only.
We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.
We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Serve Palestinians volleys of rockets, that is.
> prohibitions on domestic mass surveillance and human responsibility *for the use of force*
The president or anybody at DoD can be "responsible", and we know there will be zero accountability. The courts defer to the executive, and Congress is all-too-happy for the executive to take the flak for their wars.
A bold statement. It would appear they've definitively solved prompt injection and all the other ills that LLMs have been susceptible to. And forgot to tell the world about it.
/s
Edit: It looks like the terms are similar in OpenAI's deal in what they prohibit so it isn't clear why they are any better. We should be the ones dictating what is and isn't prohibited. Not Sam. We will have to wait for more news on what is actually different.
This also means that they should adhere to a deal once it is signed. That's part of the law too. They shouldn't suddenly turn around and try to alter the deal, then retaliate against their deal partner when they say "that wasn't the deal". You can't just go and answer: "Pray we don't alter it further".
The government of a nation sets the example for others, and should be scrupulous in their dealings.
You think OpenAI decided to build MurderBot because someone made fun on them selling ads?
> Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
IF this is true, it SHOULD be verifiable. So, we wait? I mean, I am a dummy, but that language doesn't seem too washy to me. Either it's a bald-faced lie and OpenAI burns because of it, or it's true and the Trump admin is going after the "left" AI company. Or whatever. My point is, someone smarter than me/us is going to fact check Sam's claim.
Edit: as soon as I hit submit I realized this might sound condescending, but I actually mean this lol
Do you really still genuinely believe in this? This is the same person who said ads would be a last resort, and yet we are getting ads. I just don't understand how people can trust a single word coming out of folks like Sam, Musk, Trump, or whatever rich asshole.
I listen to these people talk and they literally do not have souls. They will say whatever it is they need to get ahead. I watched a couple of Sam speeches and videos, the man does not have anything interesting to say.
DoW: WOKE Anthropic tried to impose their 'values' on us? Friendship ended!! National security risk!
OpenAI: We just signed a deal that's strong on values, the exact same ones as Anthropic, no way we would mislead anyone about this
You: Seems legit
What does it even mean to reflect those principles in law? Did they pass a law that says they can't do it? Which one?
What does it mean to "put them into our agreement"? Did they just have a section in the appendix listing various principles, or is there agreement from both parties to not violate those principles? What system does the contract specify for verification of compliance?
> reflects them in law
Means exactly what? Which law, and what does it say?
I’m also sure he quietly bent the knee, but I want to know what “law and policy” it’s being reflected in to know.