I see how it'd be really dicey for a CEO to pitch a fit on the way out under ordinary circumstances. But also, if you fire your CEO, he may very well go and stand up a terrifying competitor, which is basically what happened here, right?
Then you resign and start your own company whose charter is written the way you think it should be written.
> But it's a matter of survival of the entire universe?
If this is actually true, then, as I've already said elsewhere in this discussion (and in previous HN discussions of OpenAI), Altman is the last person I want in charge of this technology. And the same goes for everyone else associated with OpenAI. None of them have come anywhere remotely close to showing the kind of maturity, judgment, and ethics that would qualify them to be stewards of an existential risk.
If Altman sincerely believes that AI is an existential risk, then he should resign from OpenAI and disqualify himself from working on it or being involved in it in any way. That's what he would do if he were capable of taking an honest look at himself and his actions. But of course I won't be holding my breath.
The appeal-to-extremes argument is a fallacy that doesn't add anything to the discussion. It's simply a way of pushing a particular view, because you can always find some hypothetical that might justify an action.
Fortunately, we are dealing with a concrete situation here, so we don't need to talk in hypotheticals.
Leave.
> But it's a matter of survival of the entire universe?
Literally nothing is like that. And even after correcting your hypothetical to something reasonable, Altman was almost certainly closer to the "risk destroying the universe" camp than the board.
Surely you jest.
This type of delusional thinking can be used to justify anything.