Then you resign and start your own company whose charter is written the way you think it should be written.
> But it's a matter of survival of the entire universe?
If this is actually true, then, as I've already said elsewhere in this discussion (and in previous HN discussions of OpenAI), Altman is the last person I want in charge of this technology. And the same goes for everyone else associated with OpenAI. None of them have come anywhere remotely close to showing the kind of maturity, judgment, and ethics that would qualify them to be stewards of an existential risk.
If Altman sincerely believes that AI is an existential risk, then he should resign from OpenAI and disqualify himself from working on it or being involved in it in any way. That's what he would do if he were capable of taking an honest look at himself and his actions. But of course I won't be holding my breath.
The appeal-to-extremes argument is a fallacy that doesn't add anything to the discussion. It's simply a way of pushing a certain view, because you can always find some hypothetical that might justify an action.
Fortunately, we are dealing with a concrete situation, so we don't need to talk in hypotheticals.
Leave.
> But it's a matter of survival of the entire universe?
Literally nothing is like that. And even after correcting your hypothetical to something reasonable, Altman was almost certainly closer to the "risk destroying the universe" camp than the board.
This type of delusional thinking can be used to justify anything.
It is wise to be exceedingly cautious when deploying ends-justify-the-means logic. History rarely remembers those who employ it fondly.
I'm particularly curious whether you've done a meta-analysis of what you choose to analyze or not, and what your fundamental basis is for judging the fitness of your values and actions (e.g., what do a healthy society and a healthy person look like to you, and how do you statistically justify that position?).