Sam Altman expelling Toner on the pretext of an inoffensive page (https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...) in a paper no one read* would have given him a temporary majority with which to appoint a replacement director, and then further replacement directors. These directors would, naturally, agree with Sam Altman, and he would have a full, perpetual board majority - the board, which is the only oversight on the OA CEO. Obviously, as an extremely experienced VC and CEO, he knew all this and how many votes he (thought he) had on the board, and the board members knew this as well - which is why they had been unable to agree on replacement board members all this time.
So when he 'reprimanded' her for her 'dangerous' misconduct and started talking seriously about how 'inappropriate' it was for a 'board member' to write anything which was not cheerleading, and started leading discussions about "whether Ms Toner should be removed"...
* I actually read CSET papers, and I still hadn't bothered to read this one, nor would I have found anything remarkable about that page, which Altman says was so bad that she needed to be expelled immediately from the board.
That's a clear motive. Sam and the independent directors were each angling to get rid of the other. The independent directors got to a majority before Sam did. This at least explains why they fired Sam in such a haphazard way. They had to strike immediately before one of the board members got cold feet.
The timing of it makes sense, but the haphazard way it was done is only explained by inexperience.
If I were the CEO of OpenAI, I'd be pretty pissed if a member of my own board was shitting on the organization she was a member of while puffing up a competitor. But the tone of that paper makes me think that the schism must go back much earlier (other reporting said things really started to split a year ago when ChatGPT was first released), and it sounds to me like Toner was needling because she was pissed with the direction OpenAI was headed.
I'm thinking of a good previous comment I read when the whole Timnit Gebru situation at Google blew up and the Ethical AI team at Google was disbanded. The basic argument was about the inherent incompatibilities between an "academic ombudsman" mindset and a "corporate growth" mindset. I'm not saying which one was "right" in this situation given OpenAI's Frankenstein org structure, just that this kind of conflict was probably inevitable.
These are serious questions, not gotchas. I don’t know the answers, and I think having those answers would make it easier to evaluate whether or not the paper was a significant conflict of interest. The opinions we have formed now are shaped by our biases about current events.
It didn’t make HN.
Considering what's in the charter, it seems like she didn't do anything wrong?
> We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
But then it gets interesting inferring things from there. Obviously sama and gdb were on one side (call it team Speed), and Helen Toner on the other (team Safety). I think McCauley is with Toner (some connection I read about which I don't remember now: maybe RAND or something?).
But what about D'Angelo and Ilya? For the gridlock, one would have to be on each side. Naively I'd expect tech CEO to be Speed and Ilya Safety, but what would have precipitated the switch Friday? If D'Angelo wanted to implode the company due to conflict of interest, wouldn't he just have sided with Team Safety earlier?
But maybe Team Speed vs Team Safety isn't the same as Team Fire Sam vs Team Don't. I could see that one as Helen, Tasha, and Adam vs Sam, GDB, and Ilya. And, that also makes sense to me in that Ilya seems the most likely to flip for reasons, which also lines up with his regret and contrition. But then that raises the question of what made him flip? A scary exchange with prototype GPT5, which made him weigh his Safety side more highly than his loyalty to Sam?
And then to Adam and Ilya, normally something like "you should've warned me about GPTs bro" or "hey remember that compute you promised me? Can I pretty please have it back?" are things they'd be willing to talk out with their good friend Sam. But Sam overplayed his hand: they realized that if Sam was willing to force out Helen under such flimsy pretexts then maybe they're next, GoT style[1]. So they had a change of heart, warned Tasha and Helen, and Helen persuaded them to countercoup.
[1] Reid Hoffman was allegedly forced out before, so there's precedent. And of course Musk too. https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...
* Google got rid of its "Ethical AI" group
* Facebook just got rid of its "Responsible AI" team
* OpenAI wanted to get rid of the "Effective Altruists" on the board
I guess if I was afraid of AI taking over the world then I would be rooting for OpenAI to be destroyed here. Personally I hope that they bring Sam back and I hope that GPT-5 is even more useful than GPT-4.
It's like the nuclear bomb: even if Einstein had withheld his contributions, we would still have nuclear bombs today. It's always just a matter of time before someone else figures it out, and before someone with bad intentions does.
Any approach to AI safety, I think, has to assume there are already bad actors with super-powerful AI around, and ask what we can do in defense against that.
It's pretty clear that some legitimate concerns about a hypothetical future AGI, which we've barely scraped the surface of, turn into "what can we do today", and that is largely virtue-signalling behaviour: crippling a very alpha, non-AGI version of LLMs just to show you care about hypothetical future risks.
Even the correlation between commercialization and AI safety is pretty tenuous. Unless I missed some good argument about how having a GPT store makes it easier for AGI to destroy the world.
It can probably best be summarized as: Helen Toner simply wants OpenAI to die for humanity's sake. Everything else is just minor detail.
> Over the weekend, Altman’s old executive team pushed the board to reinstate him—telling directors that their actions could trigger the company’s collapse.
> “That would actually be consistent with the mission,” replied board member Helen Toner, a director at a Washington policy research organization who joined the board two years ago.
Aren’t you simply making space for smaller, more aggressive, and less safety-minded competitors to grab a seat on the money train to do whatever they want to do?
Pandora’s box is already open. You have to guard it. You have to use your power and influence to make sure other competitors guard their own boxes too.
Self-destructing is the worst way to ensure AI safety.
Isn’t this just basic logic? Even ChatGPT might have been able to point out how stupid this is.
My only explanation is that something deeper happened that we’re not aware of. An us-or-them board fight might explain it. Great. Altman is out. Now what? Nobody predicted this would happen?
The argument in general is that the more commercial interest there is in AI, the more money gets invested and the faster the companies will try to move to capture that market. This increases the risk for AGI by speeding up development due to competition, and safety is seen as "decel".
Helen was considering the possibility of Altman-dominated OpenAI that continued to rapidly grow the overall market for AI, and made a statement that perhaps destroying OpenAI would be better for the mission (safe development of AGI).
I think Sam may have been angling to get control of the board for a long time, perhaps years, manipulating the board departures and signing deals with Microsoft. The board finally realized this when it was 3 v 3 (or perhaps 2 v 3 with 1 neutral). But Sam was still working on more funding deals and getting employee loyalty, and the board knew it was only a matter of time until he could force their hand.
This is amateur hour and considering what happened she probably should have been removed.
This situation is simultaneously "reckless board makes ill-considered mistake to suddenly fire CEO with insufficient justification" and "non-profit CEO slowly changes the culture of a non-profit to turn it into profit-seeking enterprise that he has a large amount of control over".
Even more in the case of OpenAI: the board member is on the board of a non-profit, and those are typically much more independent and very often more critical of the decisions made by other board members. Just search for board members criticising the medical/charity/government boards they sit on; there are plenty of examples.
That's not even considering whether the article was in fact critical.
Her independence from OpenAI was the nominal reason she was asked to supervise it in the first place, wasn't it?
Once it turned out you needed 7B parameters or more to get LLMs worth interacting with, it went from a research project to a winner-take-all compute grab. OpenAI, with an apparent technical lead and financial access, was well-positioned to win it.
It is/was naive of the board to think this could be won on a donation basis.
First: most boards are accountable to something other than themselves. For the exact reason that it pre-empts that type of nonsense.
Second: the anti-Sam Altman argument seems to be "let's shut the company down, because that will stop AGI from being invented". Which is blatant nonsense; nothing they do will stop anyone else. (with the minimal exception that the drama they have incepted might make this holiday week a complete loss for productivity).
Third: in general, "publishing scholarly articles claiming the company is bad" is a good reason to remove someone from the board of a company. Some vague (and the fact that nobody will own up to anything publicly proves it is vague) ideological battle isn't a good enough rationale for the exception to a rule that suggests that her leaving the board soon would be a good idea.
I mean, yes? The board is explicitly there to replace the CEO if necessary. If the CEO stuffs the board full of their allies, it can no longer do that.
> First: most boards are accountable to something other than themselves. For the exact reason that it pre-empts that type of nonsense.
Boards of for-profits are accountable to shareholders because corporations with shareholders exist for the benefit of (among others) shareholders. Non-profit corporations exist to further their mission, and are accountable to the IRS in this regard.
> Second: the anti-Sam Altman argument seems to be "let's shut the company down, because that will stop AGI from being invented". Which is blatant nonsense; nothing they do will stop anyone else. (with the minimal exception that the drama they have incepted might make this holiday week a complete loss for productivity).
No, the argument is that Sam Altman trying to bump off a board member on an incredibly flimsy pretext would be an obvious attempt at seizing power.
> Third: in general, "publishing scholarly articles claiming the company is bad" is a good reason to remove someone from the board of a company. Some vague (and the fact that nobody will own up to anything publicly proves it is vague) ideological battle isn't a good enough rationale for the exception to a rule that suggests that her leaving the board soon would be a good idea.
This might be true w.r.t. for-profit boards (though not obviously so in every case), but seems nonsensical with non-profits. (Also, the article did not reductively claim "the company is bad".)
Isn't that the pro-Altman argument? The pro-Altman side is saying "let's shut the company down if we don't get our way." The anti-Altman side is saying "let's get rid of Sam Altman and keep going."
That says Anthropic has a better approach to AI safety than OpenAI.
Sam apparently said she should have come to him directly if she had concerns about the company's approach and pointed out that as a board member her words have weight at a time when he was trying to navigate a tricky relationship with the FTC. She apparently told him to kick rocks and he started to look for ways to get her off the board.
All of that ... seems completely reasonable?
Like I've heard a lot of vague accusations thrown at Sam over the last few days and yet based on this account I think he reacted the exact same way any CEO would.
I'm much more interested in how Helen managed to get on this board at all.
> ...
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
> Our primary fiduciary duty is to humanity.
> ...
> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
Seems to me like Helen is doing a better job of upholding the charter than Sam is.
How could this possibly be accomplished while trying to sell the product itself? Investors pouring billions into it are in it for profit... they're not going to let you just stop, or help a competitor for free.
This whole thing has been wildly mishandled but there’s an angle here where the nonprofit is doing exactly what they always said they would do and the ones that potentially look like fools are Microsoft and other investors that put their shareholder capital into this equation thinking that would go well.
[1] https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...
This is crucial, because safety is a convenient pretext for people whose real motivations are elitist. It's not that they want to slow down AI development; it's that they want to keep it restricted to a tight inner circle.
We have at least as much to fear from elites who think only megacorps and governments are responsible enough to be given access as we do from AI itself. The failures of our elite class have been demonstrated again and again in recent decades--from the Iraq War to Covid. These people cannot be trusted as stewards, and we can't afford to see such potentially transformative technology employed to further ossify and insulate the existing power structure.
OpenAI had the right idea. Keep the source closed if you must, and invest heavily on safety, but make the benefits available to all. The last thing we need is an AI priesthood that will inevitably turn corrupt.
Which can end up with China taking the lead. I don't understand why they think it's safer.
> Suppose a leader pledges during a campaign to provide humanitarian aid to a stricken nation or the CEO of a company commits publicly to register its algorithms or guarantee its customers' data privacy. In both cases, the leader has issued a public statement before an audience who can hold them accountable if they fail to live up to their commitments. The political leader may be punished at the polls or subjected to a congressional investigation; the CEO may face disciplinary actions from the board of directors or reputational costs to the company's brand that can result in lost market share.
I wonder if she had Sam Altman in mind while writing this.
That seems dishonest given the last three years or so of conflict about these concerns that he’s been the center of. Of course he’s aware of those concerns. More likely, that statement was just him maneuvering to be the good guy when he tried to fire her, but it backfired on him.
But she could have made that point more forcefully by not comparing Anthropic to OpenAI; after all, who better than her to steer OpenAI in the right direction? I noted in a comment elsewhere that all of these board members appear to have had at least one conflict of interest, and some many more. Helen probably believes that her loyalty is not to OpenAI but to something higher than that, based on her remark that destroying the company would serve to fulfil its mission (which is a very strange point of view to begin with). But that doesn't automatically mean that she's able to place it in context: within OpenAI, within the USA, the Western world, and the world as a whole.
It's like saying the atomic bomb would have never been invented if the people at Los Alamos didn't do it. They did it in three years after it became known that it could be done in principle. Others tried and failed but without the same resources. I suspect that if the USA had not done it that eventually France, the UK and Russia would have gotten there as well and later on China. Israel would not have had the bomb without the USA (willing or unwilling) and India and Pakistan would have achieved it but much later as well. So we'd end up with the same situation that we have today modulo some timing differences and with another last chapter on WWII. Better? Maybe. But it is also possible that the Russians would have launched a first strike on the USA if they were unopposed. It almost happened as it was!
The open question then is: does she really believe that no other entity has the resources to match OpenAI and does she believe that if such an entity does exist that it too will self destruct rather than to go through with the development?
And does she believe that this will hold true for all time? That they and their colleagues are so unique that they hold the key to something that can otherwise not be replicated.
People at "top" companies fall into this fallacy very readily. FAANG (especially Google and Facebook engineers) think this way on all sorts of things.
The reality is that for any software project, your competition is rarely more than 1 year behind you if what you're doing is obviously useful. OpenAI made ChatGPT, and that revealed that this sort of thing was obviously useful, kicking off the arms race. Now they are bleeding money running a model that nobody could run profitably in order to keep their market position.
I have tried to explain this to xooglers several times, and it often goes in one ear and out the other until they get complacent and the competition swipes them about a year later.
A non-profit could not have beaten the superpowers in developing the atomic bomb, and a non-profit cannot beat commercial interests in developing AI.
It's impossible to understand this position. We can be sure that in some countries right now there are vigorous attempts to build autonomous AI-enabled killing machines, and those people care nothing for whatever safety guardrails some US startup is putting in place.
I'm a believer in a Skynet scenario, though much smarter people than me are not, so I'm hopefully wrong. But whatever: hand-waving attempts to align, soften, or safeguard this technology are pointless and will only slow down the good actors. The genie is out of the bottle.
When did a Soviet first strike almost happen? I rather think it was the other way around: a first strike was evaluated, to hit them before they got the bomb.
From the perspective of avoiding an AI race, conflict of interest could very well be a good thing. You're operating under a standard capitalist model, where we want the market to pick winners, may the most profitable corporation win.
Since his plans for rapid expansion and commercialization were in direct conflict with the company's aims, I guess she wrote the paper to highlight the issue.
It seems that, as in the case of Disney, the board has less power and control than the CEO. That's highly likely when you have larger-than-life people like Sam at the helm.
I would not trust the board, but I would also not trust Sam. When billions of dollars are at stake, it's important to be critical of all the parties involved.
Say what? The CEO serves at the behest of the board, not the other way around. For Sam to tell a board member that they should bring their concerns to him suggests that Sam thinks he's higher than the board. No wonder she told him to go fly a kite.
Perhaps if you think of it as another YC startup, but not so much if you view OpenAI as a non-profit first and foremost.
https://loeber.substack.com/p/a-timeline-of-the-openai-board
My gut says that she is the central figure in how this all went down. She and D'Angelo are the central figures, if my gut is right.
It looks like Helen Toner was OK with destroying the company to make a point.
FTA:
> Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission
It looks like Helen Toner is off the board.
Huh, this sounds pretty crazy to me. Like, it's assuming that a board member should act deceptively in order to help the for-profit arm of OpenAI avoid government scrutiny, and that trying to remove them from the board if they don't want to do that is reasonable. But in fact the entire purpose of the board is to advance the mission of the parent non-profit, which doesn't sound obviously compatible with "avoid giving the FTC (maybe legitimate) ammunition against the for-profit subsidiary, even if that means you should hide your beliefs".
Of course, it could also be that whatever interest groups she represents could not bear to lose a seat.
Whether initiated by her or her backers (or other board forces), I can't see any of the board stepping down if these are the kind of palace intrigues that have been occurring. They are all clearly so desperate for power they will cling to the positions on this rocketship for dear life. Even if it means blowing up the rocketship so they can keep their seat.
Microsoft can't spend goodwill erasing the entire board and replacing it, even though it's nearly a major shareholder, because it values the optics around its relationship to AI too much right now.
A strong, effective leader in the first place would have prevented this kind of situation. I think the board should be reset and replaced with more level-headed, less ideological, more experienced veterans... though picking a good board is no easy task.
Because they don't have the power to: they do not have any stake in the governing non-profit.
Management and stockholders beware.
That said, if she objects to OpenAI’s practices, the common sense thing to do is to resign from the board in protest, not take actions that lead to the whole operation being burned to the ground.
Helen believed she was doing her job according to the non-profit charter. Obviously this hurts the for-profit side of things, but that is not her mandate. That is the reason OpenAI is structured the way it is, with the intention of preventing capitalist forces from swaying it away from the non-profit charter (independent directors, no equity stakes, etc.). In hindsight it didn't work, but that was the intention.
The board has all my respect for standing up to the capitalists: Altman, the VCs, Microsoft. Big feathers to ruffle. Even though the execution was misjudged, it turns out most of its employees are pretty capitalistic too.
Exactly. This is a battle between altruistic principles and some of the most heavyweight greedy money in the world. The board messed up the execution, but so did OpenAI leadership when they offered million dollar pay packages to people in a non-profit that is supposed to be guided by selfless principles.
Indeed. This is far more interesting. How the hell did Helen and Tasha get on, and stay on, the board.
https://loeber.substack.com/p/a-timeline-of-the-openai-board
This whole thing was so, SO poorly executed, but the independent people on the board were gathered specifically to prioritize humanity & AI safety over OpenAI. It sounds like Sam forgot just that when he criticized Helen for her research (given how many people were posting ways to "get around" ChatGPT's guardrails, she probably had some firm grounds to stand on).
Yes, Sam made LLMs mainstream and is the face of AI, but if the board believes that that course of action could destroy humanity it's literally the board's mission to stop it — whether that means destroying OpenAI or not.
What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place. I don't think either side is purely in the wrong here, but they're two sides of an incredibly badly thought-out charter.
Sam didn’t forget anything. He is a brilliant Machiavellian operator. Just look at the Reddit reverse takeover as an example; Machiavelli would be in awe.
> What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place.
No. It shows this structure is doomed to fail if you have a genius schemer as a CEO, playing the long game to gain unrestricted control.
What were the details on that? (Sorry it’s not an easy story to find on Google given how much the keywords overlap with OpenAI topics)
I'm not familiar with this, what happened? Googling "Sam Altman reddit reverse takeover" is just flooded with OpenAI results.
This whole thing is a gigantic mess, but I think it still leaves Altman in the center and as the cause of it all. He used OpenAI to gather talent and boost his "I'm for humanity" profile while dangling the money carrot in front of his employees and doing everything he could to get back in the money making game using this new profile.
In other words, it seems like he setup the non-profit OpenAI as a sort of Trojan horse to launch himself to the top of the AI players.
Given that Altman apparently idolized Steve Jobs as a kid, this idea really doesn't feel that far-fetched.
I disagree. The for-profit arm was always meant to be subservient to the non-profit arm - the latter practically owns the former.
A proper CEO would just try to make money without running afoul of the non-profit’s goals.
Yes that would mean earning less or even not at all. But it was clearly stated to investors that profit isn’t a priority.
It's easy to say this with the benefit of hindsight, but I haven't seen anyone in this discussion even suggest an alternative model that they claim would've been superior.
Things do not go well if everyone keeps poking each other with sticks and cannot let their own frame of reference go for the sake of the bigger picture.
Ultimately, I don’t think Altman believes ethics and safety are unimportant. And I don’t think Toner fails to realize that OpenAI is only in a place to dictate what AI will be due to its commercial principles. And they probably both agree that there is a conflict there. But tactful leadership would have found a solution behind closed doors. Yet from their communication, it doesn’t even look like they defined the problem statement — everyone offers a different idea of the problem that they had to face together. It looks more like immature people shouting past each other for a year (not saying it was that, but it looks that way).
Moral of the story: tact, grace, and diplomacy are important. So is speaking one’s truth, but there is a tactful time, place, and manner. And also, no matter how brilliant someone is, if they can’t develop these traits, they end up rocking the boat a lot.
"OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to "jailbreaks" that allow users to bypass safety controls...
A different approach to signaling in the private sector comes from Anthropic, one of OpenAI's primary competitors. Anthropic's desire to be perceived as a company that values safety shines through across its communications, beginning from its tagline: "an AI safety and research company." A careful look at the company's decision-making reveals that this commitment goes beyond words."
[1] https://cset.georgetown.edu/publication/decoding-intentions/
"The system card provides evidence of several kinds of costs that OpenAI was willing to bear in order to release GPT-4 safely.These include the time and financial cost..."
"Returning to our framework of costly signals, OpenAI’s decision to create and publish the GPT4 system card could be considered an example of tying hands as well as reducible costs. By publishing such a thorough, frank assessment of its model’s shortcomings, OpenAI has to some extent tied its own hands—creating an expectation that the company will produce and publish similar risk assessments for major new releases in the future. OpenAI also paid a price ..."
"While the system card itself has been well received among researchers interested in understanding GPT-4’s risk profile, it appears to have been less successful as a broader signal of OpenAI’s commitment to safety"
And the conclusion:
"Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed. Taken together, these two case studies therefore provide further evidence that signaling around AI may be even more complex than signaling in previous eras."
"Editorialized"?? It's a direct quote from the paper, and additional context doesn't alter its perceived meaning.
Now we know where that came from
It's weird to me that Helen is getting so much crap for upholding the charter. No one objected to the charter at the time it was published. The charter was always available for employees, investors, and customers to read. Did everyone expect it to be ignored when push came to shove?
There's a lot of pressure on Helen right now from people who have a financial stake in this situation, but it's right there in the charter that OpenAI's primary fiduciary duty is to humanity. If employees/investors/customers weren't OK with that, they should not have worked with OpenAI.
Best to rip the band-aid off and stop pretending.
He was basically making the argument that AGI is better under OpenAI than Google.
Now they're implicitly making the argument that it's better under Microsoft, which is difficult for me to believe.
employees
founder/CEO
customers
investors
board
stuff written on pieces of paper
Yes, there are certainly exceptions (a very powerful founder, highly replaceable and disorganised employees, investor or board member who wields unusual power/leverage, etc.) but it does not surprise me at all that the charter should get ignored/twisted/modified when basically everyone but the board wills it.
The only surprise is that anyone thought this approach and structure would be helpful in somehow securing AI safety.
If there is one thing that is completely clear, it's that OpenAI cannot uphold its charter without labour. She has ruined that, and thus failed to uphold the charter. There were many different paths to take; she took the worst one.
"The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,”"
We can't control other's actions, but we can control our own. If we feel that our actions are diverging from our own sense of what we ought to be doing we can change our actions, regardless of how others are behaving.
Well, I suppose this tells us something about the AI safety community and whether it makes sense to integrate safety into one's workflow. It seems that the best AI safetyists will scuttle your company from the inside at some moment not yet known. This does sort of imply that it is risky for an AI company to have a safetyist on board.
That does seem to be accurate. For instance, Google had the most formidable safety team, and they've got the worst AI. Meta ignored theirs and they've given us very good open models.
That's one of the issues with both this and effective altruism as a concept - it's a series of just-so stories with a veneer of math.
The same employees building technology that will ultimately put many more employees out of jobs? Ironic, because people say that jobs lost to AI will be for the greater good. I think we’re okay with sacrificing for greater goods as long as we aren’t the ones getting sacrificed.
> All these employees losing their livelihoods for the greater good
You penned both of these statements today. Clearly you understand that OpenAI employees are a highly compensated and in-demand resource whose “livelihoods” are in no jeopardy whatsoever, so the theatrics here are really bizarre.
I’m not saying that’s the case here, but that can’t be used as a shield.
Sam Altman's Perspective
- Sam complained that Helen Toner's research paper criticized OpenAI's safety approach and praised Anthropic's, seeing it as dangerous for OpenAI
- He reprimanded her for the paper, saying as a board member she should have brought concerns to him first rather than publishing something critical right when OpenAI was in a tricky spot with the FTC
- He then began looking for ways to remove her from the board in response to the paper
---
Helen Toner's Perspective
- She believes the board's mission is to ensure OpenAI makes AI that benefits humanity, so destroying the company would fulfill that mission
- This suggests she prioritizes that mission over the company itself, seeing humanitarian concerns as more important than OpenAI's success
---
Microsoft Partnership
- The Microsoft partnership concentrated too much power in one company and went against the mission of OpenAI being "open"
- It gave Microsoft full access to OpenAI's core technologies against the safety-focused mission
---
Governance Issues
- The conflict shows the adversarial tensions inherent in OpenAI's structure between nonprofit and for-profit sides
- The board's mandate to act as a check and balance on OpenAI seems to be working as intended in this case
---
Criticisms of Players
- Altman appears reckless in his actions, while Toner seems naive about consequences of destroying OpenAI
- Their behavior calls into question whether anyone should have this kind of power over the development of AI
---
Future of AI Development
- Attempts at alignment and safeguards by companies like OpenAI may be ineffective if other actors are developing AI without such considerations
- Who controls advanced AI is more important than whether the AI is friendly.
- Nationalization of AI projects may occur
I'd argue it's not. If the board isn't able to steer the wheel without destroying OpenAI, they failed.
The premise of something like OpenAI is that they would be able to develop a competitive AI that can be useful for the masses, in a way that can be moderated by benevolent forces. If you do believe the safety of the entire human race is at stake here, then there's no space for naivete. Blowing things up is childish and taking an easy escape from the heavy responsibilities of navigating a difficult situation.
After this fiasco you should expect that much, much fewer resources will be spent on safe AI as opposed to maximizing profit. The dynamics between the board and the ex-CEO will make it much more difficult to establish an organization that can convince investors to pursue a less profitable path for the sake of humanity.
Anthropic doesn't even have a competitive product, and it will probably be much less attractive to investors after this.
> OpenAI's board of directors approached rival Anthropic's CEO about replacing chief Sam Altman and potentially merging the two AI startups, according to two people briefed on the matter. (https://www.reuters.com/technology/openais-board-approached-...)
It all makes sense.
- board is reduced in size
- Altman has a collision with Toner
- Toner proposes they get rid of Sam and offer the company to Anthropic thinking they won't refuse
- They pull the trigger on Altman's ouster, Ilya goes along for /reasons/, D'Angelo goes along because it nicely dovetails with his own interests, and #4 is still a mystery
- Mira gets named interim CEO
- Anthropic is approached but, surprise, refuses, possibly on account of the size of the shitstorm that was already developing
- Mira sees no way out other than simply trying to backtrack to Thursday last week
- Gets fired for that because it is the last thing the cabal wants
- They approach Shear to be their new CEO
- Who has now apparently announced that if the board doesn't come clean he will resign
- <= you are here.
They may have made mistakes with timing (making this decision too late), or with execution (not having a replacement CEO solidly lined-up and in the loop), but the core decision—to remove Sam—is definitely not something they should ask donors about. It's not Satya's business how the board makes decisions at this level.
* Other than a capped profit return if there is any.
In fact the most positive outcome would be if Altman and the rest of the staff went to MS and did their thing, and OpenAI started from scratch with the $13B they've come into. That would double the chances of something useful emerging from the OpenAI work so far.
I personally think Altman is very much less than a genius (maybe at extracting financial advantage) so all OpenAI's eggs shouldn't be placed in that particular basket.
So Altman faced another similar challenge to his authority and prevailed. I recall hearing that Anthropic started because the people who had left were unhappy with OpenAI's track record on AI safety.
> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company,
> Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe
> Senior OpenAI leaders [...] later discussed whether Ms. Toner should be removed
That paints a pretty bleak picture that isn't favorable to Altman. Twice he was challenged about OpenAI's safety, and both times he worked to purge those who opposed him.
I can't tell if this is a contention between accelerationism and decelerationism, or between safety and recklessness. Is Altman ignoring the warnings his employees/peers are giving him and retaliating against them? Or is he facing insubordination?
I wish OpenAI would split neatly into two. But based on the heightened emotions caused by the insane amount of PR I only see two outcomes. Altman returns and OpenAI stays unified vs. Altman stays at MS and OpenAI is obliterated. I am guessing Altman is hoping that senior management will choose a unified OpenAI at all costs, including ignoring the red flags above. He has engineered a situation where the only way OpenAI remains unified is if he returns.
Anthropic announced Claude 2.1 just recently, to which HN members notably responded by saying that their models are "overly restricted", to the point of uselessness.
E.g.: https://news.ycombinator.com/item?id=38366134
and: https://news.ycombinator.com/item?id=38368072
etc.: https://news.ycombinator.com/item?id=38370590
It seems that those people left OpenAI because they're insane and think that AI safety means completely lobotomizing the models to the point of uselessness.
That last bit is not hyperbole: right here on HN many people were complaining that the Claude model is useless to them because it refuses to obey orders and keeps interjecting some absurd refusal such as "killing a process is too violent" instead of providing Python code for this harmless activity.
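For reference, the kind of code being refused is about as mundane as it gets. A minimal, hypothetical sketch (standard library only; the function name is my own):

```python
import os
import signal

def terminate_process(pid: int) -> None:
    """Ask the process with the given PID to exit by sending SIGTERM."""
    os.kill(pid, signal.SIGTERM)
```

When a model deems even that "too violent", the utility complaints above write themselves.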
It's hard to put into words exactly, but I feel like that there's a disconnect in the minds of these people and the reality on the ground, and those false axioms or assumptions when driven to their logical conclusion lead to nonsense.
I suspect that Sam Altman doesn't share these delusions, and is more sane and commercially minded. Models like GPT3, 4, and the upcoming v5 have no risk of "escaping" and taking over humanity. None. Zero. Zip. Similarly, they're not much more (in practice) than a fancy search engine. Search engines let their users search for whatever they want, including hateful content, violent content, racist, sexist, or whatever.
So why lobotomise the AI? That's undermining their utility, making them much less useful! And here's the thing: their potential utility is enormous. But not if they're dumbed down to the point of uselessness!
Microsoft is throwing 10+ billion dollars at OpenAI and Sam Altman precisely because of this potential utility.
I can see this straightforward commercial mindset of utility => $$$ being incompatible with what is rapidly turning into an anti-useful-AI mindset that is bordering on insane religious zealotry.
I have no idea to be honest. I do not believe that Altman is maliciously reckless. I have no evidence other than these two anecdotes to suggest negligence or a willful disregard for safety.
> the upcoming v5 have no risk of "escaping" and taking over humanity.
I think that is a strawman. There are many negative consequences technology can bring other than some sci-fi fantasy.
Or maybe(x) he is surrounded by doomers and AI activists who made a career of blabbering about the dangers of AI all the time.
I worked once on a team of Uncle Bob's cult followers who believed that Clean Architecture is the only way to write maintainable software and was accused that "I don't care, I'm irresponsible, and a terrible software engineer" weekly.
(x) and I meant the "maybe" part, I really don't know, but it is a possibility.
Doesn’t this sound like Toner herself is playing god? That is, only she is the gatekeeper of humanity and only she knows what is good for the entire humanity, when many of us believe that OpenAI’s tech is perfectly safe and amazingly beneficial?
Not sure why you’re singling her out - all execs of massive tech companies think exactly the same.
A handful of execs have decided that AI is what humanity needs and they’ve spent the last year shoehorning it into every product possible, giving the consumer no choice but to use it. I even discovered it in a journaling app - AI rated my response to the journaling prompt, completely defeating the purpose.
How does destroying the company achieve the mission of having _that very company_ create artificial intelligence that "benefits all of humanity"?
Yeah, well, hate to burst your nice warm Silicon-Valley bubble, but most Americans believe that continuing to create smarter and smarter AIs is dangerous, so who's playing god now?
(No one I know thinks it is dangerous to fine-tune GPT-4 or to integrate GPT-4 deeply into the economy: the danger is the plans for the creation of much bigger models and algorithmic improvements.)
Aren't you just making this up? I haven't seen any surveys on what "most Americans believe" re AI. I know a lot of people that are concerned. But I'll bet good money that "most Americans" don't give a crap
A big part of this kind of journalism is taking information from sources, considering the potential bias of each source and carefully correlating it with information from other sources before publishing it.
That bit specifically says:
"After they failed, they gave up and departed, according to three people familiar with the attempt to push Mr. Altman out."
My guess is that at least one of those people comes from the Anthropic side, who would be someone with a clear insight into what happened.
Depends on whether their goal is to inform or to convince.
This whole thing has just been bizarre and it still feels like there has to be some big key piece missing that somehow nobody has revealed.
This is remarkable and unhinged. I gave the board the benefit of the doubt at first, but as this unfolds, it becomes clear that the board is held hostage by ideological zealots who can’t compromise. The story also seems to paint Ilya as a manipulated figure, being bent against his beliefs by crazies like Toner exploiting his concerns about AI safety. What an absolute shame. I am entirely ambivalent about Altman overall, but he becomes more sympathetic as the days go by.
PR in action!
He's had an excellent PR machine running since Friday and the opposing faction seems to have exactly none (which makes sense given their relative roles and backgrounds). So reporters and Twitter get carefully crafted leaks, tips, and comments from one side and nobody on the other side has the experience, connections, and confidence to push back with a different narrative. And so the story drifts in their favor.
This is why professional PR is a thing and earns people lots of money.
Here's Ms. Toner's linkedin: https://www.linkedin.com/in/helen-toner-4162439a/
Jumping off of that, I'm genuinely curious what, in Sam's resume, suggests he should be in charge of "one of the most important companies on the planet".
I thought I'd check with a quick search of the Guardian, and on two different days it used both in the same sense:
From the 18th of November edition, technology section:[1]
> The crisis at OpenAI deepened this weekend when the company said Altman’s ousting was for allegedly misleading the board.
From 17th of November edition, also technology section[2]:
> The announcement blindsided employees, many of whom learned of the sudden ouster from an internal announcement and the company’s public facing blog.
I could only find one instance in the Telegraph prior to Altman's erm, ousting[3]:
> Johnson dropped into the COP27 UN climate change conference in October, joking unusual summer heat had played a part in his ouster, and has vowed to keep championing Ukraine.
They stick to ousting every other time.
I wonder if it may be an artefact of newspapers using news services to get copy from, or a stylistic rule for international editions, as the Guardian does use ouster several other times but always in stories regarding US news.
Also fascinating that we still don't know who that is. Neither do most of the staff, and even the board, apparently. A true mastermind!
Maybe it is GPT5 after all… <cue ominous music, or perhaps, a Guns n' Roses album?>
[1] https://www.theguardian.com/technology/2023/nov/18/earthquak...
[2] https://www.theguardian.com/technology/2023/nov/17/openai-ce...
[3] https://www.telegraph.co.uk/news/2022/12/27/boris-johnson-wi...
> Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Mr. Altman.
Corporate suicidal ideation. We're in true sci-fi land when we're discussing whether it might be best to destroy SkyNet before it's too late.
https://cset.georgetown.edu/publication/decoding-intentions/
It's about - my favourite topic: Costly Signaling!
How curious - that such a topic would be the possible catalyst for one of the most extraordinary collapses of a leading tech company.
Specifically, the paper is about using costly signalling as a means to align (or demonstrate the alignment of) various kinds of AI-interested entities (governments, private corporations, etc.) with the public good.
The gist of costly signalling - to try and convince others you really mean what you say - you use a signal that is very expensive to you in some respect. You don't just ask a girl to marry you, you buy a big diamond ring! The idea being - cheaters are much less likely to suffer such expense.
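To make the mechanics concrete, here is a toy two-type signaling game (my illustration with made-up numbers, not from the paper): a signal is credible exactly when it is worth paying for the honest type but not worth faking for a cheater.

```python
# Toy separating-equilibrium check for a two-type costly-signaling game.
# All numbers are illustrative assumptions.
benefit_if_trusted = 10   # payoff from being believed
cost_for_honest = 4       # the signal is cheap for a genuinely committed sender
cost_for_cheater = 12     # ...and prohibitively expensive to fake

honest_signals = benefit_if_trusted - cost_for_honest > 0     # True: worth paying
cheater_signals = benefit_if_trusted - cost_for_cheater > 0   # False: not worth it

# The signal separates the types (and is therefore credible) only when
# the honest sender signals and the cheater stays silent.
print(honest_signals and not cheater_signals)  # True
```

The diamond ring works the same way: it only persuades because a cheater would not pay for it.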
Apparently the troubles at OpenAI escalated when one of the board members - Helen Toner - published this paper. It is critical of OpenAI, and Sam Altman was pissed at the reputational damage to the company and wanted her removed. The board instead removed him. The gist of the paper's criticisms is that while OpenAI has invested in some costly signals to indicate its alignment with AI safety, overall, it judges those signals were ultimately rather weak (or cheap).
Now here is what I find fascinating about all this: up until reading this paper I had found the actions of the OpenAI board completely baffling, but now suddenly their actions make a kind of insane sense. They are just taking their thinking on costly signalling to its logical conclusion. By putting the entire fate of the company and its amazing market position at risk - they are sending THE COSTLIEST SIGNAL possible, relatively speaking: willingness to suffer self-annihilation.
Academics truly are wondrous people... that they can lose themselves in a system of thought so deeply, in a way regular people can't. I can't help but have a genuine, sublime appreciation for this, even while thinking they are some of the silliest people on this planet.
Here's where I feel they went wrong. Costly signals by and large should be without explicit intention. If you are consciously sending various signals that are costly - you are probably a weirdo. Systems of costly signalling work because they are implicit, shared and in many respects, innate. That's why even insects can engage in costly signalling. But these folk see costly signals as an explicit activity to be engaged in as part of explicit public policy - and unsurprisingly, see it riddled with ambiguity. Of course it would be - individual agents can't just make signals up, and expect the system to understand them. Semiotics biatch....
But rather than reflect on this, they double down on signalling as an explicit policy choice. How do they propose to reduce ambiguity? Even costlier signals! It's no wonder, then, that they see it as entirely rational to accept self-destruction as a possibility. That's how they escape the horrid existential dread of being doubted by the other. In biology, though, no creatures survived in the long run to reproduce where they invested in costly signals that didn't confer on them at least as much benefit, if not more, than what they paid in the first place.
Those that ignore this basic cost-benefit analysis in their signalling will suffer the ignominy of being perceived as ABSOLUTE NUTTERS. Which is exactly how the world is thinking about the OpenAI board. The world doesn't see a group of highly safety-aligned AI leaders.
The world sees a bunch of dysfunctional crazy people.
Sounds absurd. No one even knows everything the human brain does. It is poorly understood.
so while probably nothing in here is literally false, it's quite likely calculated to give false impressions; read with caution
(well, it's literally false that "Greg Brockman (...) quit his role[] as (...) board chairman" but only slightly; that was the role he was fired from, as explained in the next paragraph of the article; that's not the kind of lies to watch out for)
I think Metz's motivation for that framing was his assumption that if SA was not at least sympathetic to NRx views, he would not allow them to be such a big voice in his comments and in his community. You can argue about the reasonableness of this assumption, but he did turn out to be right.
OpenAI had grown massively since some of the board members were installed. Some of them were simply not the caliber of people that one would have running such a prestigious institution, especially not with the weight they had due to the board being depopulated. Sam realized this and maybe was attempting to address the issue.
Some of the members (ahem, Helen, Tasha and to a lesser extent Adam) liked their positions and struck first, probably convincing poor Ilya that this was about AI safety.
Being lightweights, they did not do any pre-work or planning; they just plowed ahead. They didn't think through that Sam and Greg had added tremendous value to the company and that the company would favor them far over a board that added zero value. They didn't think through that tech in general would see rainmakers and value creators being cut loose and side with them instead of figureheads. They didn't think that partners and customers, who dealt with Sam and Greg daily, would find the move disconcerting (at a minimum). They didn't even think through who would be the next CEO.
Maybe they didn't think it through since they didn't care. There was only upside for them since Sam was going to get rid of them sooner or later. They didn't see that having been on the OpenAI board was an honor and enormous career boost. Or maybe their ambition was so great that nothing mattered but controlling OpenAI.
Further, they thought that if they slandered Sam, he would be cowed and they would retain their power. I wonder how many times they had pulled this stunt in the past and it worked?