As a CEO, I'd want your concerns brought to me so they could be addressed. But if they were addressed, that would be one less paper Ms. Toner could publish. For a member of the OpenAI board, seeing the problem solved matters more to OpenAI than her publishing career does.
"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors."
I see. I don't know whether she discussed any issues with Sam beforehand, but it really doesn't sound like she had any obligation to do so (this isn't your typical for-profit board, so her duty wasn't to OpenAI as a company but to what OpenAI is ultimately trying to do).
The optics don't look good, though, if a board member is complaining publicly.
If Sam let it go, what would happen? Nothing. Criticism and comparisons already exist and will keep existing. Having it come from a board member at least supports the counterargument that they're well aware of potential problems, and there's an opportunity to address gaps if they're confirmed.
If regulators find the argument in the paper reasonable and it has an impact, what's wrong with that? It just means the argument was true and should be addressed.
They don't need to worry about the commercial side, because more than enough money is being poured in.
Safety research is critical by nature. You can't expect it to be constrained to speaking only in positive terms.
Both sides should have worried less and carried on.
If she believes that the most advanced current effort is heading in the wrong direction, then slowing it down is helpful. "Most advanced" isn't the same as "safest".
> and potentially let a different effort take the lead
Sure, but her job isn't to worry about other efforts; it's to worry about whether OpenAI (the non-profit) is developing AGI that is safe (not whether OpenAI LLC, the for-profit company, makes any money).
Then she had 1/6 of the power when it comes to voting.
But voting is a totally different thing from raising concerns and actually getting them onto the agenda, which would then be voted on if the board decides to do something about them.
In theory that's 1/6 * 1/6 power, if you're alone in it, for the decision to happen.