With all sympathy and empathy for Sam and Greg, whose dreams took a blow, I want to say something about investors [edit: not Ron Conway in particular, whom I don't know; see the comment below about Conway]: The board's job is not to do right by 'Sam & Greg', but to do right by OpenAI. When management lays off 10,000 employees, the investors congratulate management. And if anyone objects to the impact on the employees, they justify it with the magic words that somehow cancel all morality and humanity - 'it's business' - and call you an unserious bleeding heart. But when the investors' buddy CEO is fired ...
I think that's wrong and that they should also take into account the impact on employees. But CEOs are commanders on the business battlefield; they have great power over the company's outcomes, which are what drive the layoffs/firings. Lower-ranking employees are much closer to civilians, and often can't afford to lose the job.
There is why you do something. And there is how you do something.
OpenAI is well within its rights to change strategy, even one as bold as going from a profit-seeking behemoth to a smaller research-focused team. But how they went about this is appalling, unprofessional and a blight on corporate governance.
They have blindsided partners (e.g. Satya is furious), split the company into two camps and let Sam and Greg go angry and seeking retribution. Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.
For me there is no justification for how this all happened.
Given the language in the press release, wouldn't it be more accurate to say that Sam Altman, and not the board, blindsided everyone? It was apparently his actions and no one else's that led to the consequence handed out by the board.
> Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.
From all current accounts, doesn't that seem like what Altman and his crew were already trying to do and was the reason for the dismissal in the first place?
The split existed long prior to the board action, and extended up into the board itself. If anything, the board action is a turning point toward decisively ending the split and achieving unity of purpose.
This wasn't a change of strategy, it was a restoration of it. OpenAI was structured with a 501c3 in oversight from the beginning exactly because they wanted to prioritize using AI for the good of humanity over profits.
It seems like corporate governance and market domination are exactly the kind of thing the board is trying to break away from with this move. They couldn't achieve this by going to investors first and talking about it - you think Microsoft wasn't going to do everything in its power to prevent it from happening if they knew about it? I think their mission is laudable, and they simply did it the way it had to be done.
You can't slowly untangle yourself from one of the biggest companies in the world while it is coiling around your extremely valuable technology.
When a company experiences this level of growth over a decade, the board evolves with the company. You end up with board members that have all been there, done that, and can truly guide the management on the challenges they face.
OpenAI's hypergrowth meant it didn’t have the time to do that. So a board that was great for a $100 million, even a billion-dollar startup falls completely flat at 90x the size.
I don’t have faith in their ability to know what is best for OpenAI. These are uncharted waters for anyone though. This is an exceptionally big non-profit with the power to change the world - quite literally.
Sorry I don't see the 'how' as necessarily appalling.
The less appalling alternative could have been weeks of discussions and the board asking for Sam's resignation to preserve the decorum of the company. How would that have helped the company? The internal rift would have spread, employees would have gotten restless, leading to reduced productivity and shipping.
Instead, isn't this a better outcome? There is immense short-term pain, but there is no ambiguity and the company has set a clear course of action.
To assert that the board has caused a split in the company is quite preposterous, unless you have first-hand information that such a split has actually happened. As far as public information is concerned, 3 researchers have quit so far, and you have this from one of the EMs:
"For those wondering what’ll happen next, the answer is we’ll keep shipping. @sama & @gdb weren’t micro-managers. The [shipping] comes from the many geniuses here in research, product, eng & design. There’s clear internal uniformity among these leaders that we’re here for the bigger mission."
This snippet in fact shows the genius of Sam and gdb, how they enabled the teams to run even in their absence. Is it unfortunate that the board fired Sam? From the engineer's and builder's perspective, yes; from the long-term AGI research perspective, I don't know.
By all accounts, this split happened a while ago and led to this firing, not the other way around.
Oh! So now you've got him furious? When just yesterday he made a rushed statement to stand by Mira.
https://blogs.microsoft.com/blog/2023/11/17/a-statement-from...
This is the biggest takeaway for me. People are building businesses around OpenAI APIs and now they want to suddenly swing the pendulum back to being a fantasy AGI foundation and de-emphasize the commercial aspect? Customers are baking OpenAI's APIs into their enterprise applications. Without funding from Microsoft their current model is unsustainable. They'll be split into two separate companies within 6 months in my opinion.
Is Microsoft a higher purpose?
But as far as I can tell, unless you are in the exec suites at both OpenAI and at Microsoft, these are just your opinions, yet you present them as fact.
If it were so easy to go to the back of the queue and become a threat, OpenAI wouldn't be in the dominant position they're in now. If any of the leavers have taken IP with them, expect court cases.
I think it’s a good outcome overall. More decentralization and focused research, and a new company that focuses on product.
The board's job is specifically to do right by the charitable mission of the nonprofit of which they are the board. Investors in the downstream for-profit entity (OpenAI Global LLC) are warned explicitly that such investments should be treated as if they were donations, and that returning profits to them is not the objective of the firm; serving the charitable function of the nonprofit is, though profits may be returned.
This exactly. Folks have completely forgotten that Altman and Co have largely bastardized the vision of OpenAI for sport and profit. It's very possible that this is part of a larger attempt to return to the stated mission of the organization. An outcome that is undoubtedly better for humanity.
I met Conway once. He described investing in Google because it was a way to relive his youth via founders who reminded him of him at their age. He said this with seemingly no awareness of how it would sound to an audience whose goal in life was to found meaningful, impactful companies rather than let Ron Conway identify with us & vicariously relive his youth.
Just because someone has a lot of money doesn’t mean their opinions are useful.
Yes. There can often be an inverse correlation, because they can have success bias, like survival bias.
That would make things more equitable, perhaps. It’d at least be interesting.
i'm surprised anyone can take this "oh woe is me i totally was excited about the future of humanity" crap seriously. these are SV investors here, morally equivalent to the people on Wall Street that a lot here would probably hold in contempt, but because they wore cargo shorts or something, everyone thinks that Sam is their friend and that just if the poor naysayers would understand that Sam is totally cool and uses lowercase in his messages just like mee!!!!
they don't give a shit that your product was "made with <3" or whatever
they don't give a shit about you.
they don't give a shit about your startup's customers.
they only give a shit about how many dollars they make from your product.
boo hooing over Sam getting fired is really pathetic, and I'd expect better from the Hacker News crowd (and more generally the rationalist crowd, which a lot of AI people tend to overlap with).
I don't know him but he seems a reasonably decent / maybe average type.
You reap what you sow. The way Altman publicly treated the Cruise co-founder set something like a new standard of "not doing right by". After that I'd have expected nobody would let Altman near any management position, yet SV is a land of huge money sloshing around care-free, and so I was just wondering who was going to be left holding the bag.
He might be emotional and defend his friends (that's not in question, he likes the guys), and he might be more cynical when it comes to firing 10,000 engineers (that's less what I’ve heard about him personally, but maybe). However, in this case, he's explicitly defending not an employee victim of the almighty board, but the people who created the entity, who later entrusted the board with some responsibility to keep the entity faithful to its mission.
Some might think Sam deserves that title less than Greg… not sure I can vouch for either. But Conway is trying to say that all entities (and their governance) owe their founders a debt of consideration, of care. That’s filial piety more than anything contractual. That isn’t the same as the social obligation that an employer might have.
The cult for founders, “0 to 1” and all that might be overblown in San Francisco, but there’s still a legitimate sense that the people who started all this should only be kicked out if they did something outrageous. Take Woz: he’s not working, or useful, or even that respectful of Apple’s decisions nowadays. But he still gets “an employee discount” (which is admittedly more a gimmick). That deference is closer to what Conway seems to flag than the (indeed) fairly violent treatment of a lot of employees during the staff reduction of the last year.
I think the distinction of founders is a rationalization of simple corruption: They know the founder, it's their buddy; they go to the same club, eat at the same restaurants, serve on the same boards, and have similar careers. Understanding the burden and challenges and the accomplishment of founders is instinctive, and appreciating founders is appreciating themselves.
That's why Sam & Greg weren't all they complained about. They led with the fact that it was shocking and irresponsible.
Ron seems to think that the board is not making the right move for OpenAI.
I can see where the misalignment (ha!) may be: someone deep in the VC world would reflexively think that "value destruction" of any kind is irresponsible. However, a non-profit board has a primary responsibility to its charter and mission - which doesn't compute for those with fiduciary-duty-instincts. Without getting into the specifics of this case: a non-profit's board is expected to make decisions that lose money (or not generate as much of it) if the decisions lead to results more consistent with the mission.
It’s possible Sam betrayed their trust and actually committed a fireable offense. But even if the rest of the board was right, the way they’ve handled it so far doesn’t inspire a lot of confidence.
The CEO always gets way too much credit externally for what the company is doing, it does not mean the CEO is that important.
OpenAI might be different, I don’t have any personal experience, but I also am not going to assume that this is a complete outlier.
But when your profit-seeking company is owned by a non-profit with a public mission, that trajectory might end up pointed the wrong way. The Dev Day announcements, and especially the marketplace, can be seen as suggesting that's exactly what was happening at OpenAI.
I don't think everyone there wants them to be selling cool LLM toys, especially not on a "move fast and break things" approach and with an ecosystem of startup hackers operationalizing it. (Wisely or not) I think they want to be shepherding responsible AGI before someone else does so irresponsibly.
I'm as distant from it all as anyone else, but I can easily believe the narrative that Ilya (et al.) didn't sign up there just to run through a tired page from the tech playbook where they make a better Amazon Alexa with an app store and gift cards and probably Black Friday sales.
But this is where we've come to as a society. I don't think it's a good place.
If shepherding responsible AGI can be done without a $10B budget in H100s, sure… but it seems that scale matters. Having some people in the company sell state-of-the-art solutions to pay for the rest doing cutting-edge, expensive, necessary research isn’t a bad model.
If those separations needed to be re-affirmed, the research formally separated, a board decision approved to share any model from the research arm before it’s commercialized, etc., all that could be implemented within the mission of the entity. Microsoft Research, before them Bell Labs, and many others, have worked like that.
Is this a thing? This would be like Switzerland in WWII doing nuclear weapons research to try and get there before the Nazis.
Would that make any difference whatsoever to the Nazis timeframe? No.
I fail to see how the presence of "ethical" AI researchers would slow down in the slightest the bad actors who are certainly out there.
The notion they could just build it and they will come is ludicrous. Sam understood that and was trying to figure out a model that could pay for itself.
Obviously there will be mistakes made along the way. That's how it goes.
Don't forget. ChatGPT has competitors. A lot of them and they're getting pretty good.
It's such a small cohort that when someone doesn't completely blow it, they're immediately deemed as geniuses.
Give someone billions of dollars and hundreds of brilliant engineers, researchers and many will make it work. But only a few ever get the chance, so this happens.
They don't do any of the work. They just take the credit.
And many times even when they do blow it, it's handwaved away as being something outside of their control, so let's give them another shot.
Some founders don’t do much, and some are violently toxic (Lord knows I worked for many), but it’s rarely how they gather big financing rounds. At least, the terrible ones I know rarely did.
CEOs… I’ve seen people coast from Consulting or Audit into very mediocre careers, so I wouldn’t understand if Conway defended them as a class. The Cult for Founders has problems (for the reasons you point out, especially those who keep looking for ‘technical cofounders’ for years), but it’s not as blatantly unfounded.
Sad thing is, if they find enough people to continue investing, they will ultimately launch a product; most likely the early employees and founders will sell off their shares, become instant millionaires hundreds of times over and be hailed as the true geniuses in their field... What an utter shit show that was...
Now imagine the weekend for those fired and those who quit OpenAI: you know they are talking together as a group, and meeting with others offering them billions to make a pure commercial new AI company.
An Oscar worthy film could be made about them in this weekend.
I think it's likely that we're going to find out Sam and others are just talented tech evangelists/hucksters and that justifiably worries a lot of people currently operating in the tech community.
The worry now is that the approach is going to be more of controlling access to just researchers who are trusted to be “safe”.
A non-profit structure immediately makes the value of OpenAI's PPUs (their spin on RSUs) zero. Employees will be losing out on life-changing sums of money.
Unless someone is truly well versed in the production of something, they latch on to the most public-facing aspect of that production and the person at the highest level of authority (to them, even though directors and CEOs often have to answer to others as well).
That’s not to say they don’t have an outsized individual effect, but it’s rare that their greatness is solo.
I have often thought that we don't have enough information on how film directors operate, as I feel it could yield a lot of insight. There's probably a reason why many film directors don't hit their stride until late 30s and 40s, presumably because it takes those one or two decades to build the appropriate experience and knowledge.
Sam Altman has been an objectively successful leader of OpenAI.
Everyone has their flaws, and I'm more of a Sam Altman hater than a fan, but even I have to admit he led OpenAI to great success. He didn't do most of the actual work but he did create the company and he did lead it to where it is today.
Personally, if I had stock in OpenAI I'd be selling it right now. The odds of someone else doing as good a job are low. And the odds of him out-competing OpenAI are high.
I'm not sure this is actually the case, even ignoring the non-profit charter and the for-profit being beholden to it.
We know that OpenAI has been the talk of the town, we know that there is quite a bit of revenue, and that Microsoft invested heavily. What we don't know is if the strategy being pursued ever had any chance of being profitable.
Decades-long runways with hope that there is a point where profitability will come and at a level where all the investment was worth it is a pretty common operating strategy for the type of company Altman has worked with and invested in, but it is less clear to me that this is viable for this sort of setup, or perhaps at all - money isn't nearly as cheap as it was a decade ago.
What makes a for-profit startup successful isn't necessarily what makes a for-profit LLC with an operating agreement that makes it beholden to the charter of a non-profit parent organization successful.
In what way, exactly? ChatGPT would have been built regardless of whether he was there or not. It's not like he knows how to put a transformer pipeline together. The success of OpenAI's product rests on its scientists and engineers, not the CEO, and certainly not a non-technical one like Mr. Altman.
And if those primary engineers get sucked out of OpenAI, OpenAI won't be able to compete.
OpenAI is a different animal.
Sam Altman has the cachet to pull out those engineers. Particularly because Ilya's vision doesn't include lucrative stock options.
If Susan Fowler's book is accurate, Uber under TK was riddled with toxic management and incompetent HR. Yet you will hear people on Twitter reminisce about TK-era Uber as the golden period, and many would love him back.
In this case, tons of people already have resigned from OpenAI. Sam Altman seems very likely to start a rival company. This is a huge decision and will have massive consequences for the company and their product area.
There is a long history of governance problems in nonprofits (see the transaction-cost economics literature on point). Their ambiguous goals induce politics. One benefit of profit-driven boards is that the goals involve only well-understood risk trade-offs between growth now or later, and the board members are selected for their actual stake in that actual goal.
This is the problem with religious organizations and ideological governments: they can't be trusted, because they will be captured by their internal politics.
I think it would be much more rational to make AI/AGI an entirely for-profit enterprise, BUT reverse the liability defaults and require that they pay all external costs resulting from their products.
Transaction cost economics shows that in theory it doesn't matter where liability is allocated so long as the transaction cost of redistributing liability is near zero (i.e., contracting in advance and tort after the fact are cheap), because then parties just work it out. Government or laws are required only to make up for the actual non-zero dispute transaction cost by establishing settled expectations.
The internet and software generally has been a domain where consumers have NO redress whatsoever for exported costs. It's grown (and disrupted) fantastically as a result.
So to control AI/AGI, make it for-profit, but flip liability to require all exported costs to be paid by the developer. That would ensure applications are incredibly narrow AND have net-positive social impact.
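The zero-transaction-cost claim above can be made concrete with a toy model (the numbers and function here are entirely made up for illustration, not from any source): total surplus comes out the same under either liability allocation when bargaining is free, and diverges only when bargaining is costly.

```python
def welfare(developer_liable: bool, txn_cost: float,
            gain: float = 100.0, harm: float = 60.0,
            mitigation: float = 30.0) -> float:
    """Total surplus in a toy Coase-style model of one harmful deployment.

    The developer always deploys (gain > harm); the only question is
    whether the harm gets mitigated, and who arranges it.
    """
    if developer_liable:
        # Developer internalizes the harm and picks the cheaper option:
        # pay damages (harm) or mitigate, whichever costs less.
        return gain - min(harm, mitigation)
    # Victims must pay the developer to mitigate; the deal only happens
    # if the bargaining surplus (harm - mitigation) covers the transaction cost.
    if harm - mitigation > txn_cost:
        return gain - mitigation - txn_cost
    return gain - harm

# Free bargaining: allocation of liability doesn't change total welfare.
print(welfare(True, 0.0), welfare(False, 0.0))   # 70.0 70.0
# Costly bargaining: only developer liability still yields the efficient outcome.
print(welfare(True, 40.0), welfare(False, 40.0)) # 70.0 40.0
```

Under free bargaining both allocations yield the same surplus; once the transaction cost exceeds the bargaining surplus, the non-liable-developer case leaves the harm unmitigated, which is the commenter's argument for flipping the default.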
"Governance can be messy. Time will be the judge of whether this act of governance was wise or not." (Narrator: specifically, about 12 hours.) "But you should note that the people involved in this act of corporate governance are roughly the same people trying to position themselves to govern policy on artificial intelligence.
"It seems much easier to govern a single-digit number of highly capable people than to “govern” artificial superintelligence. If it turns out that this act of governance was unwise, then it calls into serious question the ability of these people and their organizations (Georgetown’s CSET, Open Philanthropy, etc.) to conduct governance in general, especially of the most impactful technology of the hundred years to come. Many people are saying we need more governance: maybe it turns out we need less."
>I could not find anything in the way of a source on when, or under what circumstances, Tasha McCauley joined the Board.
I would add, "or why she's on the board or why anyone thought she was qualified to be on the board".
At least with Helen Toner the intent was likely just to add a token AI Safety academic to pacify "concerned" Congressmen.
I am kind of curious how Adam D'Angelo voted. If he voted against removing Sam that would make this even more of a farce.
Yes, of course. But that's because "doing good" is by definition much more ambiguous than "making money". It's way higher dimension, and it has uncountable definitions.
So nonprofits will by definition involve more politics at the human level. I'd say we must accept that if we want to live amongst the actions of nonprofits rather than just for-profits.
To claim that "politics" are a reason something "can't be trusted" is akin to saying the involvement of human affairs means something can't be trusted (over computers). We must imagine effective politics, or else we cannot imagine effective human affairs -- only the mechanistic affairs of simple optimization systems (like capitalist markets).
https://www.workbyjacob.com/thoughts/from-llm-to-rqm-real-ti...
The real risk is that some government will put the result in charge of their national defense system, aka Skynet, not that kids will ask it how to make illegal drugs. The curious silence on military-industrial applications of LLMs makes me suspect this is part of the OpenAI story... Good plot for a novel, at least.
These cannot possibly be the most realistic failure cases you can imagine, are they? Who cares if "kids" "make illegal drugs?" But yeah, if kids can make illegal drugs with this tech, then actual bad actors can make actual dangerous substances with this tech.
The real risk is manifold and totally unforeseeable the same way that a 400 Elo chess player has zero conception of "the risks" that a 2000 Elo player will exploit to beat them.
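The Elo analogy above is quantifiable with the standard expected-score formula (the 400 and 2000 ratings are just the commenter's example numbers):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score for player A against player B under the Elo model:
    E_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# A 1600-point gap is four 400-point "factors of ten" in odds,
# so the weaker player's expected score is roughly 1 in 10,000.
print(expected_score(400, 2000))  # ~0.0001
```

The point being that at that gap the weaker player doesn't lose by a little; they essentially never win, and can't even characterize why.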
Some background: During a period of about 10 years, Carmack kept making massive graphics advances by pushing cutting-edge technology to the limit in ways nobody else had figured out, starting with smooth horizontal scrolling in Commander Keen, through Doom's pseudo-3D, through Quake's full 3D, to advances in the Quake sequels, Doom 3, etc. It's really no exaggeration to say that every new id game engine from 1991 to 1996 created a new gaming genre, and the engines after that pushed forward the state of the art. I don't think anybody who knows this history could argue that John Carmack was replaceable.
At the time, the rest of id knew this, which gave Carmack a lot of clout and eventually allowed him to fire co-founder John Romero. Romero was the kinda flamboyant, omnipresent public face of id -- he regularly went to cons, worked the press, played deathmatch tournaments, and so on (to be clear, he was a really talented level designer and programmer, among other things; I only want to point out that he was synonymous with id in the public eye). And what happened after the firing? Romero was given a ton of money and absurd publicity for new games ... and a few years later, it all went up in smoke and his new company folded, as he didn't end up making anything nearly as big as Doom or Quake. Meanwhile, id under Carmack kept cranking out hit after hit for years, essentially shrugging off Romero's firing like nothing happened.
The moral of the story to me is that, when your revenue massively grows for every bit of extra performance you extract from bleeding-edge technology, engineer expertise REALLY matters. In the '90s, every minor improvement in PC graphics quality translated to a giant bump in sales, and the same is true of LLM output quality today. So, just like Carmack ultimately turned out to be the absolute key driver behind id's growth, I think there's a pretty good chance it's going to turn out that Ilya plays the same role at OpenAI.
I don't think that is accurate...
The output of id Software after Romero left (post Quake 1) was a clear step down. The technology was fantastic but the games were boring and uninspired, at best "good" but never "great". It took a full 20 years for them to make something interesting again (Doom in 2016).
After Romero left, id Software's biggest success was really as a technology licensing house, but not as a games developer. Powering games like Half Life, Medal of Honor, Call of Duty, ...
Meanwhile Romero's new company (Ion Storm) eventually failed, but at least the creative freedom there led to some interesting games, like Deus Ex and Anachronox. And even Daikatana is a more interesting game than something like Quake 2 or Quake III.
Daikatana was a commercial and critical failure. Quake 2 and Quake III were commercial and critical successes.
Carmack could make graphics advances on his own with just a computer and his brain. Ilya needs a lot more for OpenAI to keep advancing. His giant brain isn’t enough by itself.
Carmack did not invent that trick; it had been around more than a decade before he used it. I remember reading a Jim Blinn column about that and other dirty tricks like it in an IEEE magazine years before Carmack "invented" it.
Thanks for the correction, edited the post.
1. I don't think Ilya is equivalent to Carmack in this case — he's been focused on safety and alignment research, not building GPT-[n]. By most accounts Greg Brockman, who quit in disgust over the move, was more impactful than Ilya in recent years, as well as the senior researchers who quit yesterday.
2. I think you are underselling what happened with id: while they didn't blow up as fantastically as Ion Storm (Romero's subsequent company), they slowly faded in prominence, and while graphically advanced, their games no longer represented the pinnacles of innovation that early Carmack+Romero id games represented. They eventually got bought out by Zenimax. Carmack alone was much better than Romero alone, but seemingly not as good as the two combined.
3. I don't think Sam Altman is equivalent to John Romero; Romero's biggest issue at Ion Storm was struggling to ship anything instead of endlessly spinning his wheels chasing perfection — for example, the endless Daikatana delays and rewrites. Ilya's primary issue with Altman was he was shipping too fast, not that he was unable to motivate and push his teams to ship impressive products quickly.
I hope Sam and Greg start a new foundational AI company, and if they do, I am extremely excited to see what they ship. TBH, much more excited than I am currently by OpenAI under the alignment-and-regulation regime that Ilya and Helen seem to want.
Brockman did an entirely different type of work than Sutskever. Brockman's primary focus was on the infrastructure side of things - by all accounts the software he wrote to manage the pre-training, training, etc., is all world-class and a large part of why they were able to be as efficient as they are, but that is not the same thing as being the brains behind the ML portion.
This is one of the core promises of alignment. Without it how can there be trust? While there are probably short term slow downs with an alignment focus, ultimately it is necessary to avoid throwing darts in the dark.
Romero was fired in 1996
Until this point, as you mentioned id had created multiple legendary franchises with unique lore, attributes, and each one groundbreaking tech breakthroughs: Commander Keen, Wolfenstein 3D, Doom, Quake.
After Romero left, id released: https://en.wikipedia.org/wiki/List_of_id_Software_games
* Quake 2
* Quake 3
* Doom 3
* And absolutely nothing else of any value or cultural impact. The only "original" thing was Rage which again had no footprint.
There were a lot of technical achievements, yes, but it turns out that memorable games need more than interesting technology. They were well-reviewed for their graphics at a time when that was the biggest thing people expected from new id games - interesting new advances in graphics. For a while, they were THE ones pushing the industry forward until arguably Crysis.
But the point is for anyone experiencing or interacting with these games today, Quake is Quake. Nobody remembers 1, 2 or 3 - it's just Quake.
Now, was id a successful software company and business? Yes. Would it have become the industry titan and shaped the future all of all videogames based on their post Romero output? Absolutely not.
So, while it is definitely justifiable to claim that Carmack achieved more on his own than Romero did, the truth is at least in the video game domain they needed each other to achieve the real greatness that they will be remembered for.
It remains to be seen what history will say about Altman and Sutskever.
Quake 3 was unquestionably the pinnacle, the real beginning of esports, and enormously influential on shooter design to this day.
I believe this is absolutely wrong. Quake 2, Quake 3 and Doom 3 were critical successes, not commercial ones, which led to id being bought.
John and John were like Paul and John from the Beatles; they never made really great games after their break-up.
And to be clear, that's because the role of Romero in the success of id is often underrated, like here. He invented those games (Doom and Quake and Wolf) as much as Carmack did. For example, Romero was the guy who invented percent-based life. He removed the score. This guy invented the modern video game in many ways. Games that weren't based on Atari or Nintendo. He invented the Wolf, Doom and Quake setups, which were considerably more mature than Mario and Bomberman, and it was new at the time. Romero invented the deathmatch and its "frag". And on and on.
The company was barely making $30 million a year while $1.5 million in debt... in the early 80s.
Even then, Gygax's downfall is the result of his own coup, where he ousted Kevin Blume and brought in Lorraine Williams. She bought all of Blume's shares and within about a year removed any control that Gygax had over the company and canceled most of his projects. He resigned a year later.
Thanks for the rabbit hole though, that was an entertaining read.
Altman is going to move on and announce a new venture in the coming weeks. Whether that venture is in AI or not in AI will be very revealing about what he truly believes are the prospects for the space.
Brockman and the others will likely do something new in AI.
I admire you but these days dumb is kinda the norm. Look at the other Sam, for example. Really hard to keep your mouth shut and do smart things when you think really highly of yourself.
This is an interesting take. Didn't the board effectively claim that he was lying to or misleading them? If that's true, how does doing that and being called out on it give someone the high ground? By many accounts that have come out, it seems Altman had several schemes in the works going against the charter of the non-profit OpenAI.
> Whether that venture is in AI or not in AI will be very revealing about what he truly believes are the prospects for the space.
Why is he considered an oracle in this space?
Notable that when he came back, while he was still a difficult personality, the other things didn't happen anymore. Apple, after the return of Jobs, became very good at executing on a single cooperative vision.
The board was irresponsible and incompetent by design. There is one OpenAI board member who has an art degree and is part of some kind of cultish "singularity" spiritual/neo-religious thing. That individual has also never had a real job and is on the board of several other non-profits.
Oh no! Everyone knows that progress is only achieved by people with computer science degrees.
There clearly were tensions between the for-profit and not-for-profit factions, but Dev Day is being cited as a 'last straw'. It was a product launch.
Ilya, and the board, should have been well aware of what was being released on that day for months. They should have at the very least been privy to the plan, if not outright sanctioned it. Seems like before launch would have been the time to draw a line in the sand.
Did they have a 'look at themselves in the mirror' moment after the announcements or something?
Never assume this. After all, their communication specifically cited that Sam deceived them in some way, and Greg was also impacted. Ilya is the only board member that might have known naturally, given his day-to-day work with OAI, but since ~July he has worked in the area of superalignment, which could reasonably be a different department (it shouldn't be). The Board may have also found out about these projects, maybe from a third party/Ilya, told Sam they're moving too fast, and Sam ignored them and launched anyway. We really don't know.
Not necessarily, and that may speak to the part of the Board's announcement that said Sam was not candid.
It sucks for OpenAI, but there are too many hungry, hungry competitors salivating at the chance to replace OpenAI, so I don't think this will have big long-term consequences for the field.
I'm curious what sorts of oversight and recourse all the investors (or are they donors?) have. I imagine there are a lot of people with a lot of money who are quite angry today.
The “won’t anyone think of the needs of the elite wealthy investor class” that has run through the 11 threads on this topic is pretty baffling I have to admit.
I could very much see it as a “look in the mirror” moment, yeah.
Ilya Sutskever is a True Believer in LLMs being AGI, in that respect aligned with Geoff Hinton, his academic advisor at University of Toronto. Hinton has said "So by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete"[1].
Meanwhile, Altman has decided that LLMs aren't the way.[2]
So Altman was pushing to turn the LLM into a for-profit product, to get what value it has, while the Sutskever-aligned faction thinks it is AGI, and want to keep it not-for-profit.
There's also some difference about whether or not AGI poses an "existential risk" or if the risks of current efforts at AI are along the lines of algorithmic bias, socioeconomic inequality, mis/disinformation, and techno-solutionism.
1. https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinto...
2. https://www.thestreet.com/technology/openai-ceo-sam-altman-s...
> the ousting was likely orchestrated by Chief Scientist Ilya Sutskever over concerns about the safety and speed of OpenAI's tech deployment.
Who was first to launch a marketplace for GPTs/agents? It wasn’t OpenAI, but Poe by Quora. Guess who sits on the OpenAI non-profit board? Quora CEO. So at least we know where his interest lies with respect to the vote against Altman and Greg.
Calling it a coup falsely implies that OpenAI in some sense belongs to Sam Altman.
If anything is a coup, it's the idea that a founder can incorporate a company and sell parts of it off, and nevertheless still own it. It's the wresting of control from the actual owners in favor of a public facing executive.
But in the business and wider world, a coup (without the d'état part) is, by analogy, any takeover of power that is secretly planned and executed as a surprise. (We can similarly talk about a company "declaring war" which means to compete by mobilizing all resources towards a single purposes, not to fire missiles and kill people.)
This is absolutely a coup. It was an action planned by a subset of board members in secret, taken by a secret board meeting missing two of its members (including the chair), where not even Microsoft had any knowledge or say, despite their 49% investment in the for-profit corporation.
I'm not arguing whether it's right or wrong. But this is one of the great boardroom coups of all time -- one for the history books. There's a reason it's front-page news, not just on HN but in the NYT and WSJ as well.
Executives do not have any right to their position. They are an officer, i.e., an agent of the stakeholders. The idea that the executive is the holder of the power and it's a "coup" if they aren't allowed to remain is disgustingly reminiscent of Trumpian stop-the-steal rhetoric.
For example, the French Revolution saw three such events commonly described as coups: the fall of Robespierre on the 9th of Thermidor, and the Directory's (technically legal) annulment of elections on the 18th of Fructidor and the 22nd of Floréal. The last one was even somewhat bloodless.
https://openai.com/our-structure
especially this part:
https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...
That is literally a political coup.
On the other hand, the AGI side of the OpenAI brand is just fine. They will continue the responsible AGI development, spearheaded by Ilya Sutskever. My best wishes for them to succeed.
I suspect Microsoft will be filing a few lawsuits and sabotaging OpenAI internally. It's an almost $3Tn company and they have an army of lawyers. They can do a lot of damage, especially when there may not be much sympathy for OpenAI in Silicon Valley's VC circles.
They could have gone bankrupt, been sued into the ground, taken over by Microsoft...
Just look at the fallout, just because they fired their CEO.
Was the success based on GPT or the CEO?
The former is still there and didn't get any worse.
Slower growth doesn't mean shrinking.
As a commercial customer, the only thing I am interested in is the quality of the commercial product they provide to me. Will they have my interests in mind going forward? Will they devote all their energy to delivering the best, most advanced product to me? Will robust support and availability be there in the future? Given the board's publicly stated priorities (which I was not aware of before!), I am not so sure anymore.
If it's true that this is in part over Dev Day, then they may have a point. But if useful stuff with AI that helps people is gauche, is OpenAI just going to turn into an increasingly insular cult? ClosedAI, but this time you can't even pay for it?
He is not exactly an insider, but he seems broadly aligned/sympathetic/well-connected with the Ilya/researchers faction, so his tweet/perspective was a useful proxy for what that split may have felt like internally.
Great analysis, thank you.
I believe this decision was ego and vanity driven with this post-hoc rationalization that it was because of the mission of "benefiting humanity."
Once that happens, real and intentional slights start accumulating and de-escalation becomes extremely difficult.
I’ve specifically seen the controlling members of a company realize this after 7-8 months, and when that happens it’s a quick change of course. I could see why you’d think it’s ego, but I think it’s closer to my previous situation than what you’re stating here. This is a pivotal course correction, and those are never pretty; this one just happens to be the most public ever due to the nature of the business and company.
Sam is clearly one of the top product engineering leaders in the world -- few companies could ever match OpenAI's incredible product delivery over the last few years -- and he's also one of the most connected engineering leaders in the industry. He could likely have $500M-$10B+ lined up by next week to start up a new company and poach much of the talent from OpenAI.
What about OpenAI's long-term prospects? They rely heavily on money to train larger and larger models -- this is why Sam introduced the product focus in the first place. You can't get to AGI without billions and billions of dollars to burn on training and experiments. If the company goes all-in on alignment and safety concerns, they likely won't be able to compete long-term as other firms outcompete them on cash and hence on training. That could lead to the company getting fully acquired and absorbed, likely by Microsoft, or fading into a somewhat sleepy R&D team that doesn't lead the industry.
The model is extremely simple to integrate and access - unlike something like Uber, where tons of complexity and logistics is hidden behind a simple interface, an easy interface to OpenAI’s model can truly be built in an afternoon.
The safety posturing is a red herring to try and get the government to build a moat for them, but with or without Altman it isn’t going to work. The tech is too powerful, and too easy to open source.
My guess is that in the long run the best generative AI models are built by government or academia entities, and commercialization happens via open sourcing.
If this pivot is what they needed to do, the drama-version isn’t the smart way to do it.
Everyone’s going to be much more excited to see what Sam pulls off next, and less excited to wait out the dev cycles that OpenAI wants to do next.
Following the Jobs analogy, this could be another NeXT failure story. Teams are made by their players much more than by their leaders; competent leaders are a necessary but absolutely insufficient condition of success, and the likelihood that whatever he starts next reproduces the team conditions that made OpenAI in the first place is pretty slim IMO (while still being much larger than anyone else's).
Though he may be less inclined than Altman to see closed-but-commercial access as okay, so while it might involve less total access, it might involve more actual open/public information about what is also made commercially available.
Jobs didn't invent the Lisa and Macintosh. Bill Atkinson, Andy Hertzfeld, Larry Tesler etc did. They were the tech visionaries. Some of them benefited from him promoting their efforts while others... (Tesler mainly) did not.
Nothing "wrong" with any of that, if your vision of success is market success... but people need to be honest about what Jobs was... not a technology visionary, but a marketing visionary. (Though in fact the original Macintosh was a market failure for a long time)
In any case comparing Altman with Jobs is dubious and a bit wanky. Why are people so eager to shower this guy with accolades?
If, as it seems, Dev Day was the last straw, what does that say to all the devs?
I get that people feel disappointed, but I can't help but feel like those people were maybe being a bit wilfully blind to the parts of the company that they didn't understand/believe-in/believe-were-meant-seriously.
So you better be planning an exit strategy in case something changes slowly or quickly.
Nothing new here.
Perhaps the competition is inevitably a good thing. Or maybe a bad thing if it creates pressure to cut ethical corners.
I also wonder if the dream of an “open” org bringing this tech to life for the betterment of humanity is futile and the for-profits will eventually render them irrelevant.
The general opinion seems to be estimating this at far above 50% YES. I, personally, would bet at 70% that this is exactly what will happen. Unless some really damaging information becomes public about Altman, he will definitely have strong reputation and credibility, will definitely be able to raise very significant funding, and the only industry/research expert he definitely won't be able to recruit is Ilya Sutskever.
I have a contrarian prediction : Due to pressure from investors and a lawsuit against the openai board, the board will be made to resign and Sama & Greg will return to openai.
Anybody else agree?
At this point, on day 2, I am heartened that their mission came first, even with at stake maybe the most important technology since nuclear power, or writing, or democracy. I'm heartened by the board's courage; certainly they could anticipate the blowback. This change could transform the outcome for humanity, and the board's job was that stewardship, not Altman's career (many people in SV have lost their jobs), not OpenAI's sales numbers. They should be fine with the overwhelming volume of investment available to them.
Another way to look at it: How could this be wrong, given that their objective was not profit, and they can raise money easily with or without Altman?
On day 3 or day 30 or day 3,000, I'll of course come at it from a different outlook.
Nevertheless, I agree that the firing was probably in line with their stated mission.
>They should be fine with the overwhelming volume of investment available to them.
>Another way to look at it: How could this be wrong, given that their objective was not profit, and they can raise money easily with or without Altman?
This wasn't just some cultural shift. The board of OpenAI created a separate for-profit legal entity in 2019. The for-profit legal entity received overwhelming investment from Microsoft to make money. Microsoft, early investors, and employees all have a stake and want returns from this for-profit company.
The separate non-profit OpenAI has a major problem on its hands if it thinks its goals are no longer aligned with the co-owners of the for-profit company.
I see it as far more likely that OpenAI will lock down its tech even more, in the name of "safety", but I also predict it will nevertheless always be possible to pay for their services.
Nothing in this situation makes me think OpenAI will be any more "open."
If they had given Altman one week's notice and let him save face in the media, what would they have lost? Is there a fear Altman would take all the best engineers on the way out?
They made a deal with Microsoft, which has a long history of exploiting users and customers to make as much money as possible. Just look at the latest version of Windows; Microsoft cares about AI only as much as it enables them to make more and more money without end through their existing products. They rushed to integrate AI into all of their legacy products to prop them up rather than offer something legitimately new. And they did it not organically but by throwing their money around, attracting the type of people who are primarily motivated by money. Look at how the vibe of AI has changed in the past year: lots of fake influencers and a mad gold rush around it. And we are hearing crazy stories like comp packages at OpenAI in the millions, turning AI into a rich man's game.
For a company that has “Open” in their name, none of their best and most valuable GPT models are open source. It feels as disingenuous as the “We” in WeWork. Even Meta has them beat here.
Sam Altman, while good at building highly profitable SaaS, consumer, & B2B tech startups and running a highly successful tech accelerator, before this point, didn’t have any kind of real background in AI. One can only imagine how he must feel like an outsider.
I think it’s a hard decision to fire a CEO, but the company is more than the CEO, it’s the people who work there. A lot of the time the company is structured in such a way that the CEO is essentially not replaceable, we should be thankful OpenAI fortunately had the right structure in place to not have a dictator (even a benevolent one).
What would you propose instead?
> Around 30 minutes later, Brockman was informed by Sutskever that he was being removed from his board role but could remain at the company, and that Altman had been fired (Brockman declined, and resigned his role later on Friday).
The board firing the CEO is not a coup. The board firing the CEO behind the chair's back and then removing the chair is a coup.
What stood out:
1. The whole non-profit vs for-profit thing is like a recipe for problems. After taking billions in investor money, hyper-scaling to hundreds of millions of users, and partnering with a $1T tech company, you're already too late to reverse course and say "I changed my mind".
2. Seeing who runs the OpenAI board is more shocking than the man behind the curtain in the Wizard of Oz. That was really never an issue to partners or investors before? Wow…
3. If OpenAI continues down the “we’re a business / startup” path, their board just shot all their leadership credibility with investors and other potential cloud partners. The one thing people with money and corporate finance offices hate is surprises.
4. You don’t pull a corporate “Pearl Harbor” like this and just blissfully move along without consequences. With such a polarizing move, there’s going to be a fight.
ability to do work < ability to manage others to do work < ability to lead managers to success < ability to convince other leaders that your vision is the right one and one they should align with
The necessity of not saying the wrong thing goes up exponentially with each rung. The necessity of saying the right things goes up exponentially with each rung.
Does anybody know what his responsibilities were or what led to that? Seems pretty relevant.
But I suspect a lot of the hires from the last year or so, even on the eng side, are all about the money and would follow sama anywhere given what this signals for OpenAI's economic future. I'm just not sure such a company can work without the core research talent.
Training data is more restricted now, hardware is hard to get, fine tuning needs time.
Bill Gates.
Microsoft is, after all, invested in OpenAI, and Bill Gates has become "loved by all" (who don't remember the evil Gates of yesteryear).
I am not saying it will happen; 99.999% it won't, but still, he is well known and may be a good face to splash on top of OpenAI.
After all he is one of the biggest charity guys now right?
But to be honest, I am much more concerned about those who feel they need to control the development of AI to ensure it "aligns with their principles"; after all, principles can change. To quote Lewis: "Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience."
What we really need is another Stallman, his idea was first and foremost always freedom, allowing each individual agency to decide their own fate. Every other avenue will always result in men in suits in far away rooms dictating to the rest of the world what their vision of society should be.
A breakdown in comms that took everyone by surprise? Smells like bullshit.
What does that title even mean? As we know, OpenAI is ironically not known for doing open source work. I'm left guessing he 'researched the open source competition', as it were.
Can anyone shed further light on the role/research?
Question is, how did the board become so unbalanced where this kind of dispute couldn’t be handled better? The commercial interests were not well-represented in the number of votes.
This is entirely by design. Anyone investing in or working for the for-profit had to sign an operating agreement that literally states the for-profit is entirely beholden to the non-profit's charter and mission, and that it is under no obligation to be profitable. The board is specifically balanced so that the majority is independent of the for-profit subsidiary.
A lot of people seem to be under the impression that the intent was for there to be significant representation of commercial interests here, and that is the exact opposite of how all of this is structured.
What was so bad about that day? Wasn't it just GPT-4 Turbo, GPT vision, the GPT store, and a few small things?
Buy GOOGL?
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
My other main guess is that his push for government regulation was seen as stifling AI growth, or even as collusion with unaligned actors, by the more science-y side, and got him ousted by them.
Maybe because this still hasn't been proven in court, and "innocent until proven guilty" is a basic concept that must be preserved.
So a big "allegedly" must be placed here.
That's a cynical take on work. I assume most people have other motivations since work is basically a prison.