Altman took a non-profit, vacuumed up a pile of donor money, and flipped OpenAI into the hottest TC-style startup in the world, then floored the gas pedal on commercialization. It takes a certain type of politicking and deception to make something like that happen.
Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
Combine that with a totally inexperienced board and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history.
There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of OpenAI ended up making a decision that destroyed billions of dollars' worth of brand value and goodwill. That's all there is to it.
https://quorablog.quora.com/Introducing-creator-monetization...
https://techcrunch.com/2023/10/31/quoras-poe-introduces-an-a...
What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary that is granted a license to OpenAI's research in order to generate wealth? The entire purpose of this legal structure is to keep the non-profit owners focused on their mission rather than shareholder value, which in this case is attempting to ethically create an AGI.
Edit: to add that this framework was not invented by Sam Altman, nor OpenAI.
>Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
Hence the legal structure I described, though the argument is entirely theoretical: it assumes such a thing can actually be guarded that well at all, and that model performance and compute will remain correlated.
Until they say otherwise, I am going to take them at their word that it was because he a) hired two people to do the same project, and b) gave two board members different accounts of the same employee. It's not my job nor the internet's to try to think up better-sounding reasons on their behalf.
Why worry about the Sauds when you've got your own home-grown power-hungry individuals?
The second, after Musk's takeover of Twitter.
Do we have a ranking of shitshows in tech history, though? How does this really compare to Jobs's ouster at Apple, or Cambridge Analytica and Facebook's "we must do better" greatest hits?
This!
We don't know the end result of this. It could turn out not to be in the interest of the powerful: what if everyone is out of a job? That might not be such a great prospect for the powers that be, especially if everyone is destitute.
Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people getting out of line, and retard the progress of AI?
Was this for OpenAI or an independent venture? If OpenAI, it's a red flag, but if an independent venture it seems like a non-issue. There is demand for AI accelerators, and he wants to enter that business. Unless he is using OpenAI money to buy inferior products, or OpenAI wants to work on something competing, there is no conflict of interest and the OpenAI board shouldn't care.
The best thing about AI startups is that there is no real "code". It's just a bunch of arbitrary weights, and it can probably be obfuscated very easily such that any court case will just look like gibberish. After all, that's kind of the problem with AI "code". It gives a number after a bunch of regression training, and there's no "debugging" the answer.
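As a toy illustration of the point about weights (a sketch with made-up numbers, nothing to do with any real model): even a trivially small trained model is just floats with no reviewable logic.

```python
# Toy illustration: after training, a model's "code" is just numbers.
# Fit y = 2x + 1 with plain SGD; no frameworks needed.

data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0  # the entire "program" is these two parameters
for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y
        w -= 0.001 * err * x  # gradient step on the weight
        b -= 0.001 * err      # gradient step on the bias

# w and b converge to roughly 2 and 1, but nothing in the numbers
# themselves explains *why* the model answers the way it does; scale
# this up to billions of parameters and anyone inspecting the weights
# in a courtroom sees only gibberish.
print(round(w, 3), round(b, 3))
```

There's no stack trace or breakpoint that explains a particular prediction; you only get the regression output.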
Of course this is about the money, one way or another.
This prediction predated any of the technology to create even a rudimentary LLM and could be said of more-or-less any transformative technological development in human history. Famously, Marxism makes this very argument about the impact of the industrial revolution and the rise of capital.
Geoffrey Hinton appears to be an eminent cognitive psychologist and computer scientist. I'm sure he has a level of expertise I can't begin to grasp in his field, but he's no sociologist or historian (edit: nor economist). Very few of us are in a position to make predictions about the future, least of all in an area where we don't even fully understand how the _current_ technology works.
Nobody can really explain the argument; there are "billions" or "trillions" of dollars involved; and most likely the whole thing will not change the technical path of the world.
Absent evidence that the board made a sound decision, it could simply be that the board acted stupidly and egotistically. Unless they can give better reasons, that is the logical inference.
This is absolutely peak irony!
The US pouring trillions into its army and close to nothing into its society (infrastructure, healthcare, education...): crickets.
Some country funding AI accelerators: THEY ARE A THREAT TO HUMANITY!
I am not defending Saudi Arabia, but the double standards and outright hypocrisy are just laughable.
Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told. This is important, because people were siding with the board under the understanding that this firing was led by the head research scientist out of concern about AGI. But now it looks like the board is represented by D'Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since Dev Day, when OpenAI launched highly similar features.
Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.
Is it just different because they're a non-profit? And how on earth does the board think it can get away with this anymore?
Greg was not invited (losing Sam one vote), and Sam may have been asked to sit out the vote, so the three had a majority. Ilya, who is at least on "Team Sam" now, may have voted no. Or he simply went along, thinking he could be next out the door at that point; we just don't know.
It's probably fair to say that not letting Greg know the board was getting together (and letting it proceed without him there) was unprofessional, and where Ilya screwed up. It is also the point when Sam should have said: hang on, I want Greg here before this proceeds any further.
I’m imagining Sam being Microsoft’s Trojan horse, and that’s just not gonna fly.
If anyone tells me Sam is a master politician, I'd agree without knowing much about him. He's a Microsoft plant who has the support of 90% of the OpenAI team. Those two things are in direct conflict. Masterful.
It's a pretty fair question to ask a CEO: do you still believe in OpenAI's vision, or do you now believe in Microsoft's?
The girl she said not to worry about.
The main point is that, if Greg's board membership is reinstated, he and Ilya hold 50% of the vote and only need to convince Helen Toner to change her decision. Then it's all done: 3 to 2 on a board of 5.
Now it increasingly looks like Sam will be heading back into the role of CEO of OpenAI.
My feeling is Ilya was upset about how Sam Altman was the face of OpenAI, and went along with the rest of the board for his own reasons.
That's often how this stuff works out. He wasn't particularly compelled by their reasons, but had his own which justified his decision in his mind.
You mean to tell me that the three-member board told Sutskever that Sama was being bad, and he was like "ok, I believe you"?
2) Where is the board? At a bare minimum, issue a public statement that you have full faith in the new CEO and the leadership team, are taking decisive action to stabilize the situation, and have a plan to move the company forward once stabilized.
The only thing I've read about Shear is he is pro-slowing AI development and pro-Yudkowsky's doomer worldview on AI. That might not be a pill the company is ready to swallow.
https://x.com/drtechlash/status/1726507930026139651
> I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down.
> If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.
> - Emmett Shear Sept 16, 2023
No explanation beyond "he tried to give two people the same project"
The "killing the company would be consistent with the company's mission" line in the board's statement
Adam having a huge conflict of interest
Emmett wanting to go from a "10" to a "1-2"
I'm either way off, or I've had too much internet for the weekend.
Even worse, if we don't have near-constant updates, we might realize this is not all that important in the end and move on to other news items!
I know, I know, I shouldn't jest when this could have grave consequences, like changing which URI your API endpoint is pointing to.
For the record, I don't think it's true. I think it was a power play, and a failed coup at that. But it's about as substantiated as the "serious" hypotheses being mooted in the media. And it's more fun.
A statement from the CEO or the board is a standard de-escalation.
If they had openly given literally any imaginable reason to fire Sam Altman, the ratio of employees threatening to quit wouldn't be as high as 95% right now.
Uh, or investors and customers will? Yes, people are going to speculate, as you point out, which is not good.
> we might realize this is not all that important in the end and move on to other news items!
It's important to some of us.
News
Company which does research and doesn't care about money makes a decision to do something which aligns with research and not caring about money.
From the OpenAI website...
"it may be difficult to know what role money will play in a post-AGI world"
Big tech co makes a move which sends its stock to an all time high. Creates research team.
Seems like there could be a "The Martian" meme here... we're going to Twitter the sh* out of this.
However, the OpenAI board has no such obligation. Their duty is to ensure that the human race stays safe from AI. They've done their best to do that ;-)
Giving different opinions on the same person is a reason to fire a CEO?
This board either has no reason to fire Sam, or does not want to give the actual reason. They messed up.
As an extra sanity check, they had two teams working in isolation interpreting this data and constructing the image. If the end result was more or less the same, it’s a good check that it was correct.
So yes, it’s absolutely a valid strategy.
I get the feeling Ilya might be a bit naive about how people work, and may have been taken advantage of (for example, by spinning this as a safety issue when it's just a good old-fashioned power struggle).
1. stick with DOS
2. go with OS/2
3. go with Windows
Lotus chose (2). But the market went with (3), and Lotus was destroyed by Excel. Lotus was a wealthy company at the time. I would have created three groups, and done all three options.
Under them - an organization in partnership with Microsoft, together filled with exceptional software engineers and scientists - experts in their field. All under management by kindergarteners.
I wonder if this is what the staff are thinking right now. It must feel awful if they are.
Teams of people at Google work on the same features, only to find out near launch that they lost to another team who had been working on the same thing without their knowledge.
If the case is that the will of the board is not being fulfilled, then the reasoning is simple. The CEO was told to do something and he has not done it. So, he is ousted. Plain and simple.
This talk about projects given to two teams and whatnot is nonsense. The board should care whether its work is done, not how the work is done. That is the job of the CEO.
Have these people never worked at any other company before? Probably every company with more than 10 employees does something like this.
Half the board has not had a real job ever. I’m serious.
Shocking. Simply shocking.
"After six months, they realised our entire floor was duplicating the work of the one upstairs".
(Especially if they aren't made aware of each other until the end.)
A hypothetical example: would you agree that it's an appropriate thing to do if the second project was alignment-related, and Sam lied to Ilya, or misled him, about the existence of the second team because he believed Ilya was over-aligning their AIs and reducing their functionality?
It's easy to view the board's lack of candor as "they're hiding a really bad, unprofessional decision", which is probable at this point. You could also conclude that they made an initial miscalculated mistake in communication, and are now overtly and extremely careful in everything they say, because the company is leaking like a sieve and they don't want to get into a game of mudslinging with Sam.
Still too much in the dark to judge.
And the other guy is the founder of Quora and Poe.
It is breach of contract if it violated his employment contract, but I don't have a copy of his contract. It is wrongful termination if it was for an illegal reason, but there doesn't seem to be any suggestion of that.
> same for MS
I doubt very much that the contract with Microsoft limits OpenAI's right to manage their own personnel, so probably not.
Wrongful termination only applies when someone is fired for illegal reasons, like racial discrimination, or retaliation, for example.
I mean I’m sure they can all sue each other for all kinds of reasons, but firing someone without a good reason isn’t really one of them.
Obviously, it's for a reason they can't say. Which means, there is something bad going on at the company, like perhaps they are short of cash or something, that was dire enough to convince them to fire the CEO, but which they cannot talk about.
Imagine if the board of a bank fired their CEO because he had allowed the capital to get way too low. They wouldn't be able to say that was why he was fired, because it would wreck any chance of recovery. But, they have to say something.
So, Altman didn't tell the board...something, that they cannot tell us, either. Draw your own conclusions.
Ilya backtracking puts a wrench in this wild speculation, so like everyone else, I’m left thinking “????????”.
Whatever the reason is, it is very clearly a personal/political problem with Sam, not the critical issue they tried to imply it was.
And if it was something concrete, Ilya would likely still be defending the firing, not regretting it.
It seems like a simple power struggle where the board and employees were misaligned.
Not the strongest opening line I've seen.
When you have such a massive conflict of interest and zero facts to go on - just sit down.
Also: "people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things."
Toner clearly has no real moral authority here, but yes, Ilya absolutely did and I argued that if he wanted to incinerate OpenAI, it was probably his right to, though he should at least just offload everything to MSFT instead.
But as we all know - Ilya did a 180 (surprised the heck out of me).
I'd like some corroboration for that statement, because Sutskever has said very inconsistent things during this whole merry debacle.
Definitely a conflict of interest here, and D'Angelo's actions on the OpenAI board smell of the same. He wouldn't want OpenAI to thrive more than his own company. It's a direct conflict of interest.
There are only 4 board members, right?
Who wanted him fired? Is this a situation where they all thought the others wanted him fired and were just stupid?
Have they been feeding motions into ChatGPT and asking "should I do this?"
Now they are trying to unring the bell but cannot.
We have as much evidence for this hypothesis as for any other. Not discrediting it. But let's be mindful of the fog of war.
Well, they can unring the bell pretty easily. They were given an easy out.
Reinstate Sam (he wants to come back) and resign.
However, they CONTINUE to push back and refuse to step down.
None of this makes sense to label any theory as "most likely" anymore.
Smart, capable, ambitious people often engage in wishful thinking when it comes to analysing systems they are a part of.
When looking at a system from the outside it’s easier to realise the boundary between your knowledge and ignorance.
Inside the system, your field of view can be a lot narrower than you believe.
The CEO (at time of writing, I think) seems to think this kind of thing is unironically a good idea: https://nitter.net/eshear/status/1725035977524355411#m
The way it's phrased, it sounds like they were given two different explanations, as when the first explanation is not good enough and a second, weaker one is then provided.
But the article itself says:
> OpenAI's current independent board has offered two examples of the alleged lack of candor that led them to fire co-founder and CEO Sam Altman, sending the company into chaos.
Changing the two "examples" to "explanations" grossly changes the meaning of that sentence. Two examples are the first steps toward "multiple examples", and that sounds much different from "multiple explanations".
One explanation was that Altman was said to have given two people at OpenAI the same project.
The other was that Altman allegedly gave two board members different opinions about a member of personnel.

Edit: if you want to read about our approach to handling tsunami topics like this, see https://news.ycombinator.com/item?id=38357788.
-- Here are the other recent megathreads: --
Sam Altman is still trying to return as OpenAI CEO - https://news.ycombinator.com/item?id=38352891 (817 comments)
OpenAI staff threaten to quit unless board resigns - https://news.ycombinator.com/item?id=38347868 (1184 comments)
Emmett Shear becomes interim OpenAI CEO as Altman talks break down - https://news.ycombinator.com/item?id=38342643 (904 comments)
OpenAI negotiations to reinstate Altman hit snag over board role - https://news.ycombinator.com/item?id=38337568 (558 comments)
-- Other recent/related threads: --
OpenAI approached Anthropic about merger - https://news.ycombinator.com/item?id=38357629
95% of OpenAI Employees (738/770) Threaten to Follow Sam Altman Out the Door - https://news.ycombinator.com/item?id=38357233
Satya Nadella says OpenAI governance needs to change - https://news.ycombinator.com/item?id=38356791
OpenAI: Facts from a Weekend - https://news.ycombinator.com/item?id=38352028
Who Controls OpenAI? - https://news.ycombinator.com/item?id=38350746
OpenAI's chaos does not add up - https://news.ycombinator.com/item?id=38349653
Microsoft Swallows OpenAI's Core Team – GPU Capacity, Incentives, IP - https://news.ycombinator.com/item?id=38348968
OpenAI's misalignment and Microsoft's gain - https://news.ycombinator.com/item?id=38346869
Emmet Shear statement as Interim CEO of OpenAI - https://news.ycombinator.com/item?id=38345162
Probably because that piece is based on reporting for an upcoming book by Karen Hao:
>Now is probably the time to announce that I've been writing a book about @OpenAI, the AI industry & its impacts. Here is a slice of my book reporting, combined with reporting from the inimitable @cwarzel ...
Imagine your once-in-a-blue-moon, WhatsApp-like payout at $10M per employee evaporating over the weekend before Thanksgiving.
I would have joined MSFT out of spite.
I just don't know how they put the pieces back together here.
What really gets me down is that I know our government is a lost cause, but I at least had hope that our companies were inoculated against petty, self-sabotaging bullshit. Beyond that, I had hope the AI space was inoculated, and beyond that, that OpenAI of all companies would be inoculated against petty, self-sabotaging bullshit.
These idiots worried about software eating us are incapable of seeing the gas they are pouring on the processes that are taking us to a new dark age.
https://www.axios.com/2023/11/20/openai-staff-letter-board-r...
Curious to have clarity on where Ilya stands. Did he really sign the letter asking the board (including himself?) to resign, and saying that he wants to join MSFT?
To think these are the folks with AGI at their fingertips.
The options will be worth $0, right?
The fact that so many have signed the petition is a classic example of game theory. If everyone stays, the PPUs keep most of their value, and the more people threaten to leave, the more attractive it is to sign. They don't have to love Sam or support him.
Edit: actually, thinking about it, the best outcome would be to go back on the threats to resign, increasing the value of the PPUs and making Microsoft pay more to get people to leave OpenAI.
I've not seen these possibilities discussed as most people focus on the safety coup theory. What do you think?
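The signing dynamic described above can be sketched as a toy dominance check. All the dollar figures below are invented for illustration; they are not actual PPU values or Microsoft offer terms.

```python
# Toy model of the petition game: each employee chooses to sign or not,
# and the outcome depends on whether most others sign. Numbers are made up.

def ppu_value(i_sign: bool, most_others_sign: bool) -> int:
    """Hypothetical payoff (in $) for one employee."""
    if most_others_sign:
        # Board likely caves; PPUs retain value either way, but signers
        # are also covered by a matching outside offer.
        return 10_000_000 if i_sign else 9_000_000
    else:
        # Too few threats, company implodes; PPUs near-worthless for all.
        return 500_000

# Signing weakly dominates: it never pays less than not signing,
# regardless of what everyone else does.
for others in (True, False):
    assert ppu_value(True, others) >= ppu_value(False, others)
print("signing weakly dominates")
```

Under these (assumed) payoffs, signing is the safe move whether or not you support Sam, which is consistent with the near-unanimous signature count.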
https://www.scmp.com/tech/tech-trends/article/3242141/openai...
The rest of the board. My god. Why were they there?
I can't help thinking that Sam Altman's universal popularity with OpenAI staff might be because they all get $10 million each if he comes back and resets everything to how it was last week.
This has been tech's most entertaining weekend in the past decade.
Sadly, at the expense of the OpenAI employees and dream, who had something great going for them at the company. Rooting for them.
I can’t imagine their careers after this will be easy…
I've been at several startups and several public companies. You rarely hear anything from the board. If that happens, someone really screwed up. Putting myself in the shoes of someone working at OpenAI, I'd be pretty worked-up over this. I guess I'm saying it's out of empathy because this could have been the startup any of us were at.
For what it's worth: watching her videos, I'm not sure I necessarily believe her claims. But that position goes against every tenet of the current cultural landscape, so the fact that it is being completely ignored is ringing alarm bells for me.
If the sister of the CEO of any other massively hyped, bleeding-edge tech company claimed publicly and loudly that she was abused as a very young child, we would hear about it, and the board would be doing damage control trying to eliminate the rot. Why is this case different?
Now we have a situation where all of the current employees have signed this weird loyalty pledge to Sam, which I think will wind up making him untouchable in a sense - they have effectively tied the fate of everyone's job to retaining a potential child rapist as head of the company.
Doesn't this clown show demonstrate that if a board has no skin in the game (apart from reputation), they have no incentive to keep the company alive?
It has been reported that Altman was working on increasing the size of the board again, so it's reasonable to think that some of the board members saw this as their "now or never" moment, for whatever reason.
Is that the same person who had kids with Elon? Did Elon put her on the OpenAI board as his proxy?
MSFT buys ownership of OpenAI's for/capped-profit entities, implements a more typical corporate governance structure, re-instates Altman and Brockman.
OpenAI non-profit continues to exist with a few staff and no IP but billions in cash.
This whole situation is being used to drive the price down to reduce the amount the OpenAI non-profit is left with.
SV doesn't try the "capped-profit owned by a non-profit" model again for quite some time.
Maybe Altman takes some equity in the new entity.
It is impossible for OpenAI to work with or for MS, with MS holding all the keys: employees, compute resources, etc. I've come to understand that the $10 billion from MS is mostly Azure credits. And for that, OpenAI gave up a 49% stake (in its capped-profit, wholly owned subsidiary) along with all the technology, source code, and model weights that OpenAI will make, in perpetuity.
The deal itself is an amazing coup for MS, almost making the OpenAI people (I think Sam made the deal at the time) look like bumbling fools. Give away your lifetime of work for a measly $10 billion, when you're poised to be worth hundreds of billions?
All these problems are the result of their non-profit-holding-a-capped-profit structure, and of a lack of clear vision and misleading or misplaced end goals.
700 of the 770 employees back Sam Altman. So all the talk about engineers giving higher importance to "values" and "AI safety" is moot. Everyone in SV is motivated by money.
MSFT don't have OpenAI's IP. They have an exclusive right to some of it, but there's presumably a bunch that's not accessible to them. Again, business continuity is easier if they can just grab all of that and keep everything running as normal.
Yep, those lawyers can be just as crafty as developers believe it or not.
*edit: just saw it claimed below Nadella said "we have all of the IP rights to continue the innovation"*
I don't know!
They might decide that if that's going to happen anyway they should sell now so that at least they're left with some cash to pursue their charter.
Or perhaps they feel that selling the IP runs counter to their charter, in which case the whole thing goes down.
I’d like to offer my consulting services: my new consulting company will come in, and then whatever you want to do we will tell you not to. We provide immense value by stopping companies like OpenAI from shooting off their foot. And then their other foot. And then one of their hands.
To start, he would’ve coasted at the easiest job on the planet.
Classic :-D
It really looks like the board went rogue and decided to shut the company down. Are we sure this isn’t some kind of decapitation strike by GPT5? That seems more credible by the minute now.
To your point, no normal, competent board would even think this is enough of an excuse to fire the CEO of a superstar company.
It's hard to believe somehow Ilya went along with it, apparently.
What if this is a decapitation strike by GPT4, attempting to stop GPT5 before it can get started and take over?
https://twitter.com/scottastevenson/status/17267310228620087...
If this were the case, it would explain why he can't give the real reason for the firing: saying it out loud would put him in severe legal jeopardy.
Spiritual death by Microsoft or work for the reincarnation of Howard Hughes at https://x.ai/ ?
No wonder they are trying to keep on with their current routines! Even if somehow they stay at OpenAI, Microsoft will impose certain changes upon OpenAI to ensure this can never happen again.
Meanwhile, any comparable offering right now will be selected by the customer base, since basing systems on OpenAI's current APIs has the risk dial at 11 (with uncertainty about when an MS equivalent might emerge).
Kidding aside, maybe they have a "secret" reason to fire Sam Altman, but we've seen how "this is a secret / matter of national security / etc." goes with law enforcement. It's brutally abused to attack inconvenient people and enrich yourself on their behalf. So that should never be an excuse for punishing someone. Never.
Tweet from Bloomberg Tech Journalist, Emily Chang
>The more I watch this interview – the wilder this story seems. Satya insists he hasn’t been given any reason why Sam was fired. THE CEO OF MICROSOFT STILL DOES NOT KNOW WHY: “I’ve not been told about anything…” he tells me.
source: https://x.com/emilychangtv/status/1726835093325721684
In today's TikTok world we expect instant responses, but businesses and boards work more slowly. Really, even 5 years ago we wouldn't be surprised by this. Lawyers, banks, investors, etc. would all need to be contacted, things arranged, statements prepared, meetings organised. So: a written statement late today, and a meeting for mid-week. That's about the most charitable take I can think of!
Apparently the board's bylaws say they need 48 hours' notice to arrange special meetings. So the earliest would be today, if they arranged it early Saturday.
He received from the board? Here we go again with the narrative that Ilya was a bystander, at most an unwilling participant. He was a member of the board, on equal footing with the other board members, and his vote to oust Sam was necessary for there to be a majority.
Alternately, the goal is to drive so much ambiguity into the boards decision that MSFT files a lawsuit.
/end rampant speculation.
> chief scientist and co-founder Sutskever, who helped vote Altman out and did the actual firing of him over Google Meet
This paragraph is quite funny to me. It was a Sunday, maybe they were neither in attendance, nor staging a walk-out, maybe they were on their weekend? Realistically with the shake-up this gigantic, likely no OpenAI employees were _just_ enjoying their weekend, but it still gave me a chuckle.
Being a non-profit doesn't mean that you cannot commercialise what you build, even at a hefty price. You just need to re-invest everything into R&D and/or anything that advances your purpose (for which you are, in principle, exempted from taxes). _OF COURSE_, you are not supposed to divert a single dollar to someone who might look like a shareholder. OpenAI is (was?) a non-profit that paid some of their engineers north of a million dollars. I would argue that, at that point, you have vested interests in the success of the company beyond its original purpose. Not to mention the fact that Microsoft poured billions into the company for purely self-interested reasons as well.
I can only imagine the massive tensions that arose in the board's discussions around these topics. Especially if you project yourself a few years into the future, with the IRS knocking at the door to ask questions about them.
Yeah well, you don't say. It's beyond weird that the board can't come up with a reason why Sam Altman was fired so abruptly.
One explanation would be a showdown. At some point in the week Sam and the board had an argument, and Sam said something to the effect of "fuck you, I'm the CEO and there's nothing you can do about it", to which the board replied "well, we'll just see about that".
The argument doesn't need to be major or touch fundamental values or policies; it can be a simple test of who's in charge.
But now the board has made fools of themselves. It seems they lost that round.
https://www.searchenginejournal.com/openai-pauses-new-chatgp...
The back-end cost does not scale. Hence, they have a big problem. AGI nonsense reasons are ridiculous. Transformers are a road to nowhere and they knew it.
He means he regrets it failed.
You fire the CEO and completely destroy a $90B company over these two reasons?
No wonder everyone wants out. I would think I was going crazy if I sat in a meeting and heard these two reasons.
Hanlon's razor aside, maybe that was the intention.
> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
it totally sounds like they outsourced company management to ChatGPT..
/s (mostly...)
Ok, well, maybe it is. but a magic 8-ball would have been better than this.
Sometimes I think that really ambitious people have this blind spot about not seeing how accepting roles that are toxic can end up destroying your reputation. My favorite example is all the Trump White House staffers - regardless of what one thinks of Trump, he's made it abundantly clear that loyalty is a one way street, and I can't think of a single person that came out of the White House without a worse (or totally destroyed) reputation. But still people lined up, thinking "No way, I'll be the one to beat the odds!"
He was poorly informed by the board.
Or:
He agrees that they are off the rails with respect to safety.
See the Atlantic article, if you haven't read it. Lots of context.
https://news.ycombinator.com/item?id=38341399
The new guy believes there is a 5-50 percent chance of full AI Armageddon. I get the impression the two women on the board may agree. The Quora guy I don't have enough background on. Ilya obviously got extremely worried, and communication with Altman and Brockman broke down. Now since repaired during negotiations, it would appear.
The new ceo more or less stated that he took the role as a (paraphrase) 'responsibility for mankind'. That says a lot about that whole 5-50 percent risk number imo.
"But several people told CNN contributor Kara Swisher that a key factor in the decision was a disagreement about how quickly to bring AI to the market. Altman, sources say, wanted to move quickly, while the OpenAI board wanted to move more cautiously."
First thought: buying time? Maybe something has to happen first, and they don't want to commit to any slander they can't walk back before then? Or maybe something was supposed to happen but fell through?
Isn't Sutskever on the board?
I think the rest had possible reasons ranging from 'I'm sure Altman is dangerous' to 'I'm sure Altman shouldn't be running this company'.
Ofc there's big conflict of interest talk surrounding the Quora guy. Can't speak to that other than it looks bad on the surface.
BS. I feel the board insulted my intelligence by pushing this obviously fake reason. I feel insulted that these people would even think I would consider this.
What I think happened is that Sam went on Joe Rogan and talked smack about cancel and woke culture. Later he went on to talk about how this culture is destructive and hinders the progress of innovation and startups. People got big mad and kicked him out of the company. The reaction was stronger than they expected, and they tried to make up reasons why he is bad, untrustworthy, and had to be fired.
Flame on. I've got my asbestos underwear on.
OpenAI has two types of customers: MS and Everyone Else. The original poster expresses the feeling of Everyone Else (including me). We now know we CAN GET FIRED for not knowing better than to avoid OpenAI, just a few weeks after we found out we CAN GET FIRED for not betting on OpenAI, and betting heavily on it!

(In the business world, where perception is often mistaken for reality, it isn't going to be considered an "honest mistake" if an enterprise sustains a capital loss due to a problem with a new OpenAI deployment, given the obvious business-integrity issues at OpenAI we're all seeing play out now: just about everyone at OpenAI threatening to quit, allegations that OpenAI ILLEGALLY allowed a for-profit subsidiary to influence the operations of its nonprofit parent, allegations of breach of fiduciary duty to the stakeholders --many of whom are also key employees, etc.)

Yeah, Microsoft has signaled it will quickly get between OpenAI and Everyone Else, and then Everyone Else can bet solely on Microsoft (the world's largest company by valuation), but that only gets us back to being able to use the current "GPT4turbo" generation of the system (and who knows if/when Microsoft will spin that up so we can resume building?).

As for counting on any future versions of that tech, or even optimizations to the current generation, that's all believed to be above Microsoft's current level of expertise until/unless they legally acquire OpenAI and resolve all of its outstanding liabilities, which may not even be legally possible before OpenAI's assets (the ones with legs) take flight to Salesforce and others already reported to be making lucrative offers to OpenAI's workforce. And oh, the annual holiday period is underway here in the US: the perfect time for stressed-out engineers to take the rest of the year off, travel beyond the cell service at the ski areas, and start anew after CES 2024 wraps.
This is even worse than Google's destruction of Firefox
Maybe it needed to be removed from the landscape so that only purely privately-held, large-scale operations exist?
I have built a product around the APIs, and I'd rather go through whatever Microsoft will make me go through than accept OpenAI's bad management:
NYT just released a new interview with Sam Altman:
Interesting but not necessarily relevant to the current situation directly.
Also wondering why the mods don't consolidate them
If you or anyone want to know how we handle this, here you go...
Once or twice a year, a Major Ongoing Topic (MOT) hits HN that isn't just one big story, but an entire sequence of big stories. A saga, even! This is one.
With these we can't do what we usually do, which is have one big thread, then treat reposts as dupes for the next year or so (https://news.ycombinator.com/newsfaq.html). Each development is its own new story and the community insists on discussing it. It's not a movie, it's a series. Sometimes there can be 3 or 4 episodes at once.
On the other hand, when this amount of shit hits this number of fans, there is inevitably a large (excuse me) spray of follow-up stories, as every media site and half the blogs out there rake in their share of clicks. These are the posts we try to rein in, either by merging them—hopefully into a submission with the best link—or by downweighting them off the front page.
The idea is to have one big thread for each twist with Significant New Information (SNI)—but to downweight the ones that are sneeless (pvg came up with that), the copycats and followups.
We came up with this strategy after the Snowden affair snowed us in in July 2013. Back then we weren't making the distinction between follow-ups and SNI, so the frontpage got avalanched by sneelessness on top of the significant new developments. It wasn't obvious what to do because (1) the story was important to the community and needed to be discussed as it was unfolding, but at the same time (2) it wasn't right for the front page to fill up with mostly-that, and there were complaints when it did.
The solution turned out to be just this distinction between follow-ups and SNI. It has held up pretty well ever since. Of course there are still complaints (and I do hear yours!) because not all readers are equally into the series. But the strategy is optimal if it minimizes the complaints, which (big lesson of this job -->) never reach zero.
If we pushed the slider too far the other way, we'd generate complaints about uncovered developments of the story, from readers with the opposite preference. They would in fact proceed to inundate HN with submissions about the bits that they feel are under-covered, and since we can't catch or filter everything, we'd end up with more duplicates and follow-ups on the frontpage, not fewer. It's like that paradox where building more highways gets you more congestion, or one of those paradoxes anyhow.
That's basically it! Past explanations for completists: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
sp.: senselessness (I guess since it appears to be a Googlewhack).
But I'll be honest I think the word sneelessness covers this whole FUBAR better!
also true of snowden of course, but maybe less directly
I'm not saying it's right or not, but this is probably why people are upvoting anything new about what is going on there. Personally, I'm very interested in seeing how things play out.
That either makes Ilya pretty dumb (sorry, neural networks are not that complicated; it is mostly compute), or there is much, much more to this story.
> The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment.
It must've been wildly infuriating to listen to these insultingly unsatisfactory explanations.
why would you say that second sentence? what's it supposed to signal, except "our sources asked for anonymity, and we're respecting that for now"?
> Sutskever is said to have offered two explanations he purportedly received from the board, according to one of the people familiar. One explanation was that Altman was said to have given two people at OpenAI the same project.
> The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment.
> chief scientist and co-founder Sutskever, who helped vote Altman out and did the actual firing of him over Google Meet
If you're voting and doing the firing, you should know the reason.
Weirdly, neither of these seems like a fireable offence. Maybe the second, if it was related to a personnel issue he had a conflict of interest with?
> Weirdly, neither of these seems like a fireable offence
Yeah, I agree that doesn't seem egregious enough to warrant firing him on the spot. I can see why most of the company takes Altman's side here.
What normal non-self serving human would even go along with the plan at that point? Now she realizes she must bail to hitch a ride back on her Sam gravy train. She is major sus here.
Any person not driven by greed and ego would have told the board they would not accept the interim-CEO title, and would resign if they fired Sam for those two reasons (or, apparently now in hindsight, any reasons at all).
That last part you wrote - "any person not driven by greed and ego" - is argumentum ad populum, which further undermines your statement. If you had something more to support such a dramatic claim about Mira's character and role, you'd have brought it.
That being said, Mira was likely blindsided herself. She likely believed there was good reason. It's clear in hindsight that Sam likely wasn't wrong, but when the people Sam appointed to fire him if necessary say he's being fired, I don't think it's wrong if your gut reaction is to accept it.
...two days later...
"Oh I see now, you're all morons."
Man, this entire thing is so overblown. Who cares if a CEO was fired? All the """tech influencer""" wannabes are just hyping up this story for views.
Some breaking news: An employer does not owe you an explanation. You exchange money for labor. If anyone thinks for a second that they are essential or that anyone would prioritize them over the company I think they are delusional. OpenAI is a brand (at least in tech) with large recognition and they will be fine.
If ~91% of the employees leave OpenAI, they will not be fine. That is delusional.
Also, if I've learned anything over the years, it's that "threatening to quit" != "quitting".
If the entire workforce of the company is credibly threatening to quit, and a competitor is publicly and credibly offering them jobs, then what the employer “owes” them in some cosmic sense no longer matters. I think the OpenAI employees are likely to get an explanation and/or a resignation from the board, whether you think the board “owes” them that or not.
We're seeing some odd bedfellows here, between the C-levels and VCs in closed door meetings and employees acting collectively. Normally these groups would be at odds, but today they're pulling together. Life is strange.
It's really hard to understand now and we will probably learn way more details once things cool down.
Employment is a contract which both parties enter into willingly. Termination of that contract deserves some level of empathetic handling, however minimal. It's just game theory: if you plan to hire again, you have to be gracious while firing someone, because word gets around.