It’s incredibly disingenuous to slap your name on an ethics paper claiming a company is doing malfeasance such as triggering a race-to-the-bottom for AI ethics when you have an active hand in steering the company.
It’s shameful. Either she should have resigned from the company, for ethical reasons, or recused herself from the paper, for conflict of interest.
I'm not sure that's a fair characterization, because the not-for-profit/for-profit structure makes it more complex.
In this case, the not-for-profit board seems to act as a kind of governance over the for-profit arm; in a way, it's there to be a roadblock to the for-profit arm.
Normally a board's incentives align with maximizing profit for shareholders.
Here the board has the opposite incentive: to maximize AGI safety and achievement, even to the detriment of profit and investors.
To put these claims in a published paper in such a naive way with no disclosure is academically disingenuous.
Academically speaking, did she not disclose her board seat in the paper?
Being at the top of the org and present during the specific incidents that give one qualms burdens one with moral responsibility, even if one was the person who voted against them.
You shouldn’t say “they did [x]” instead of “we did [x]” when x is bad and you were part of the team.
While the system card itself has been well received among researchers interested in understanding GPT-4’s risk profile, it appears to have been less successful as a broader signal of OpenAI’s commitment to safety. The reason for this unintended outcome is that the company took other actions that overshadowed the import of the system card: most notably, the blockbuster release of ChatGPT four months earlier. Intended as a relatively inconspicuous “research preview,” the original ChatGPT was built using a less advanced LLM called GPT-3.5, which was already in widespread use by other OpenAI customers. GPT-3.5’s prior circulation is presumably why OpenAI did not feel the need to perform or publish such detailed safety testing in this instance. Nonetheless, one major effect of ChatGPT’s release was to spark a sense of urgency inside major tech companies.149 To avoid falling behind OpenAI amid the wave of customer enthusiasm about chatbots, competitors sought to accelerate or circumvent internal safety and ethics review processes, with Google creating a fast-track “green lane” to allow products to be released more quickly.150 This result seems strikingly similar to the race-to-the-bottom dynamics that OpenAI and others have stated that they wish to avoid. OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to “jailbreaks” that allow users to bypass safety controls.151 This muddled overall picture provides an example of how the messages sent by deliberate signals can be overshadowed by actions that were not designed to reveal intent.
https://news.ycombinator.com/item?id=38374972
It has an excessive number of weasel words, so you might need to employ ChatGPT to read between the lines.