edit: Or that the board can't actually make a difference, because whatever OpenAI doesn't do, someone else will. But if people actually thought that were true, they wouldn't have set up the board and charter.
It's amateur behavior. I'm sympathetic to her goals, less impressed by the execution.
Again, her mandate is not to help OpenAI deal with the FCC; it's to prevent the company from building unsafe AI, and one reasonable aspect of that might be to compare the methodologies of different companies.
You can justify pretty much anything with ends-justify-the-means logic. I have a hard time believing that the people who set up the charter would, a priori, have said it was in line with its principles to suppress research comparing the company's safety approach with a competitor's, just so the company doesn't look bad and lose to the competitor, on the company's insistence, without any basis, that it would be the better steward of safety. That is just gaming the charter in order to circumvent it, and it is a textbook case of what the board was appointed to prevent.
This isn't a theoretical exercise where we get to do it all over again next week to see if we can do better; this is for keeps.
The point could have been made much more forcefully by not releasing the report but holding it over Altman's head to get him to play ball.
> You can justify pretty much anything with ends-justify-the-means logic
Indeed. That's my point: the ends justify the means. This isn't some kind of silly game; this is, to all intents and purposes, an arms race, and those who don't understand that should really wake up: whoever gets this thing first is going to change the face of the world. It won't be the AGI that is your enemy; it is whoever controls the AGI that could well be your enemy. Think Manhattan Project, not penicillin.
> I have a hard time believing that the people who set up the charter would, a priori, have said it was in line with its principles to suppress research comparing the company's safety approach with a competitor's, just so the company doesn't look bad and lose to the competitor, on the company's insistence, without any basis, that it would be the better steward of safety. That is just gaming the charter in order to circumvent it, and it is a textbook case of what the board was appointed to prevent.
That charter is and always was a fig leaf. I am probably too old and cynical to believe that it was sincere; in my opinion it was nothing but a way to keep regulators at bay. Just like I never bought SBF's 'Altruism' nonsense.
'The road to hell is paved with the best of intentions' comes to mind.
That effort isn't critical to OpenAI except as an attempt to create a monopoly.