https://twitter.com/karaswisher/status/1725682088639119857
nothing to do with dishonesty. That’s just the official reason.
----
I haven't heard anyone commenting on this, but consider the two main figures here: this MUST come down to a disagreement between Altman and Sutskever.
Also interesting that Sutskever tweeted a month and a half ago
https://twitter.com/ilyasut/status/1707752576077176907
The press release about him not being candid with the board is probably just cover for some deep-seated philosophical disagreement. They found a reason to fire him that doesn't necessarily reflect why they are firing him. He and Ilya no longer saw eye to eye, and it reached a fever pitch with GPT-4 Turbo.
Ultimately, it's been surmised that Sutskever had all the leverage because of his technical ability, Sam being the consummate businessperson. They probably got into some final disagreement, and Sutskever reached his tipping point and decided to use said leverage.
I've been in tech too long and have seen this play out. Don't piss off an irreplaceable engineer or they'll fire you. Not taking any sides here.
PS most engineers, like myself, are replaceable. Ilya is probably not.
If their case isn't 100% rock solid, they just handed Sam a lawsuit that he's virtually guaranteed to win.
> "More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."
> "The developer day and how the store was introduced was an inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."
[source: https://twitter.com/karaswisher/status/1725702501435941294]
Sounds like you exactly predicted it.
Think back through history. For example, consider the absolutely massive issues at Uber that had to go public before the board did anything. There is no way this is over some disagreement; there has to be serious financial, ethical, or social wrongdoing for the board to rush this through and put a company worth tens of billions of dollars at risk.
If he is, at this point, so irreplaceable that he has enough leverage to strong-arm the board into firing the CEO over a disagreement, then that would surely be the biggest problem OpenAI has.
OpenAI: we need clarity on your new direction.
A mere disagreement over direction would have been handled with "Sam is retiring in 3 months to spend more time with his family; we thank him for all his work," and would surely have been decided months in advance of being announced.
Who knows, maybe they settled a difference of opinion and Altman went ahead with his plans anyway.
Dang! He left @elonmusk on read. Now that's some ego at play.
And this time around he would have the sympathies from the crowd.
Regardless, this is very detrimental to the OpenAI brand. Ilya might be the genius behind ChatGPT, but he couldn't have done it all by himself.
The war between OpenAI and Sam AI is just the beginning
Update from Greg
Sam claims LLMs aren't sufficient for AGI (rightfully so).
Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.
Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.
In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.
No one knows. But I would sooner trust the scientist leading the endeavor than a businessperson who has an interest in saying the opposite to avoid immediate regulation.
I thought this guy was supposed to know what he's talking about? There was a paper showing that LLMs cannot generalise [0]. Anybody who's used ChatGPT can see there are imperfections.
How the hell can people be so confident about this? You're describing two smart people reasonably disagreeing about a complicated topic.
So, if there's 6 board members and they're looking to "take down" 2... that means those 2 can't really participate, right? Or at the very least, they have to "recuse" themselves on votes regarding them?
Do the 4 members have to organize and communicate "in secret"? Is there any reason 3 members can't hold a vote to oust 1, making it a 3/5 to reach majority, and then from there, just start voting _everyone_ out? Probably stupid questions but I'm curious enough to ask, lol.
Typically, these documents contain provisions for how voting, succession, recusal, eligibility, etc., are to be handled. Based on my experience on both for-profit and non-profit boards, the outside members of the board probably retained outside legal counsel to advise them. Board members have specific duties they are obligated to fulfill, along with serious legal liability if they don't do so adequately and in good faith.
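To make the arithmetic in the question above concrete, here is a toy sketch in Python. It assumes nothing but simple-majority voting among non-recused members (an assumption for illustration; as noted, real bylaws typically add quorum, notice, and supermajority requirements that block exactly this cascade):

    # Toy model of sequential ousting under simple-majority voting,
    # assuming the targeted member is recused from their own removal vote.
    # This is NOT OpenAI's actual governance; real bylaws vary.

    def can_oust_everyone(board_size: int, faction: int) -> bool:
        """Can a unified faction remove every other member, one vote at a time?"""
        remaining = board_size
        while remaining > faction:
            voters = remaining - 1            # the targeted member is recused
            if faction <= voters // 2:        # faction lacks a strict majority of voters
                return False
            remaining -= 1                    # target removed; the board shrinks
        return True

    print(can_oust_everyone(6, 4))  # True: 4 of 5 voters, then 4 of 4
    print(can_oust_everyone(6, 3))  # True: 3 of 5, then 3 of 4, then 3 of 3
    print(can_oust_everyone(6, 2))  # False: 2 of 5 is not a majority

Under these naive rules, even 3 of 6 aligned members could cascade through the whole board, which is precisely why real bylaws constrain when and how removal votes can be called.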
There is nothing business-y about this. As a non-profit, OpenAI can do whatever it wants.
This feels like real-life Succession playing out. Every board member is trying to figure out how to optimize their position.
That could simply mean that he disagreed with the outcome and is expressing that disagreement by quitting.
EDIT: Derp. I was reading the note he wrote to OpenAI staff. The tweet itself says "After learning today's news" -- still ambiguous as to when and where he learned the news.
https://time.com/collection/time100-ai/6309033/greg-brockman...
Edit: Maybe this is a reasonable explanation: https://news.ycombinator.com/item?id=38312868 . The only other thing not considered is that Microsoft really enjoys having its brand on things.
Except for a clumsy fast press release, this doesn’t really have to end badly for anyone.
Even though I have been an OpenAI fan ever since I used their earliest public APIs, I am also very happy that there is such a rich ecosystem: other commercial players like Anthropic, open model support from Meta and Hugging Face, and the increasingly wonderful small models like Mistral that can be easily run at home.
If indeed a similar disagreement happened at OpenAI, but this time Hinton (Ilya) came out on top, it's a reason to celebrate.
They achieved AGI internally, but didn't want OpenAI to have it. All the important people will move to another company, following Sam, and OpenAI is left with nothing more than a rotting GPT.
They planned all this from the start, which is why Sam didn't care about equity or long-term finances. They spent all the money in this one-shot gamble to achieve AGI, which can be reimplemented at another company. Legally it's not IP theft, because it's just code which can be memorized and rewritten.
Sam got himself fired intentionally, which gives him and his followers a plausible cover story for moving to another company and continuing the work there. I'm expecting that all researchers from OpenAI will follow Sam.
That is not how IP law works. Even writing new code based on the IP developed at OpenAI would be IP theft.
None of this really makes sense when you consider that Ilya Sutskever, arguably the single most important person at OpenAI, appears to have been a part of removing Sam.
I am not dismissing the possibility, far from it. It sounds very plausible. But are there any credible reports to back it up?
Ignore it and focus on your life. The grapevine in your neighborhood about who is selling their car or their house is not as exciting, but it will net you way more money than this drama happening thousands of miles away from you. And most importantly, without having to fuck with leverage.
But for clarity's sake, I'm doing neither personally, because I'm not a day trader and take a longer-term view.
They're probably firing up the eyeball scanning machines on this news.
But right now, the board undoubtedly feels the most pressure in the realm of safety. This is where the political and big-money financial (Microsoft) support will be.
If all true, Altman's departure was likely inevitable as well as fortunate for his future.
What the hell were they thinking? Just because you are a non-profit doesn't mean you should imitate other non-profits and put crazies on the board.
Honestly, this is the big problem with Big Non-Profit (tm). The entire structure of non-profits is really meant for ladies' clubs, Rotary groups, and your church down the street, not OpenAI and IKEA.
The board is in absolute control in a not-for-profit. The loophole is that some have bylaws that make ad-hoc board meetings and management change votes very difficult to call for non-operating board members, and it can take months to get a motion to fire the CEO up for a vote.
In some not-for-profits, the board often even manages to recruit and seat new board members. Some not-for-profits operate as membership associations, where the organization’s membership elects the board members to terms.
On the few not-for-profits where I was a board member, we started every meeting with a motion to retain the Executive Director (CEO). If the vote failed, so did the Executive Director.
Clearly Microsoft staked its whole product roadmap on 4 random people with no financial skin in the game.
And Sam allowed all this under his nose, making sure OpenAI is ripe for a MSFT takeover. This is a back-channel deal for a takeover. What about the early donors, who donated with humanity's goals in mind and whose funding made it all possible?
I am not sure Sam contributed anything to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.
Or Altman will start a competitor.
Buy 100 prompts now with AIBucks! Loot box prompts! Get 10 AI ultramax level prompts with your purchase of 5 new PromptSkin themes to customize your AI buddy! Pre-order the NEW UltraThink AI and get an exclusive UltraPrompt skin and 25 AIBucks!
From my brief dealings with SA at Loopt in 2005, SA just does not have a dishonest bone in his body. (I got a brief look at the Loopt pitch deck due to interviewing for a mobile dev position at Loopt just after Sprint invested.)
If you want an angel-investing play, find the new VC fund Sam is setting up for hard research.
The next AI winter may have just begun...
Because two executives were ousted from a company? That's dramatic.
time to stop playing with existential fire. humans suffice. every flaw you see in humans will be magnified X times by an intelligence X times stronger than humans, whether it is autonomous or human-led.
i'm sick and tired of everyone sticking a chatbot on random crap that doesn't need it and has no reason to ever need it. it also made HN a lot less interesting to read
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
No, I'm confident that it has nothing to do with that. It must have to do with the current business. Maybe there's a financial conflict of interest. Maybe he's been hiding the severity of losses from the board. Maybe something else. But you don't fire a CEO because you discover that he committed a crime at age 13.
None of that makes sense as to why the board would randomly fire him. I don't think it's this.
Here's an idea for the Hacker News crowd: build a service that acts as a proxy for phone-number validation. The user validates their phone number once with that app, and any other third-party service can then ask the app for a security code confirming phone-number ownership. We do something similar by offloading phone-number confirmation to a Telegram bot. This proxy service could also optionally take over management of "bad" phone numbers used by spammers and add other protections.
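A minimal sketch of what such a proxy might look like in Python; every name, endpoint, and flow here is invented for illustration, and a real version would need per-service keys, rate limiting, persistent storage, and an actual SMS gateway:

    # Hypothetical phone-verification proxy (all names invented for illustration).
    # Flow: the user verifies a number once with the proxy; third-party services
    # then request a short-lived signed token attesting ownership, instead of
    # sending their own SMS.
    import hashlib
    import hmac
    import secrets
    import time

    SIGNING_KEY = secrets.token_bytes(32)  # in practice: per-service keys from a secret store
    _verified: dict[str, float] = {}       # phone -> time of last successful verification
    _pending: dict[str, str] = {}          # phone -> outstanding one-time SMS code

    def start_verification(phone: str) -> None:
        """Generate a one-time code and 'send' it (SMS delivery stubbed out here)."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        _pending[phone] = code
        print(f"[sms to {phone}] your code is {code}")  # stand-in for a real SMS gateway

    def confirm_verification(phone: str, code: str) -> bool:
        """Mark the number verified if the submitted code matches."""
        if _pending.get(phone) == code:
            del _pending[phone]
            _verified[phone] = time.time()
            return True
        return False

    def issue_ownership_token(phone: str, service_id: str, ttl: int = 300) -> str | None:
        """Return a signed, expiring token that a third-party service can check."""
        if phone not in _verified:
            return None
        expires = int(time.time()) + ttl
        payload = f"{phone}|{service_id}|{expires}"
        sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}|{sig}"

    def check_ownership_token(token: str) -> bool:
        """Verify the signature and expiry of a token presented by a service."""
        payload, _, sig = token.rpartition("|")
        _phone, _service_id, expires = payload.split("|")
        expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected) and int(expires) > time.time()

The signed-token design means third-party services never see the user's SMS code; they only check a short-lived attestation, so the proxy alone handles the actual verification.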
Which news is that?
The announcement: https://openai.com/blog/openai-announces-leadership-transiti...
The discussion: https://news.ycombinator.com/item?id=38309611
nothing too crazy
I am genuinely flabbergasted as to how she ended up on the board. How does this happen?
I can't even find anything about fellow board member Tasha McCauley...
Many people in AI safety are young. She has more professional experience than many leaders in the field.
She is well-connected with Pentagon leaders, who trust her input. She also is one of the hardest-working people among the West's analysts in her efforts to understand and connect with the Chinese side, as she uprooted her life to literally live in Beijing at one point in order to meet with people in the budding Chinese AI Safety community.
Here's an example of her work: AI safeguards: Views inside and outside China (Book chapter) https://www.taylorfrancis.com/chapters/edit/10.4324/97810032...
She's also co-authored several of the most famous "survey" papers which give an overview of AI safety methods: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22h...
She's at roughly the same level of eminence as Dr. Eric Horvitz (Microsoft's Chief Scientific Officer), who has similar goals as her, and who is an advisor to Biden. Comparing the two, Horvitz is more well-connected but Toner is more prolific, and overall they have roughly equal impact.
(Copy-pasting this comment from another thread where I posted it in response to a similar question.)
Really wonder what this is all about.
Edit: My bad for not expanding. No one knows the identity of this "Jimmy Apples", but this is the latest in a series of correct leaks he's made about OpenAI over the past few months. Suffice to say, he's in the know somehow.
Both seem like they were horribly rushed, and with no autocomplete?
Is it some cute attempt at saying “an AI didn’t write this”?
The game is over boys. The only question is how to make these types of companies pay for the crimes committed.
I’m never going back to Noe Valley for less than $500,000/yr and a netjets membership
Like, who is Mira Murati? We only know that she came from Albania (one of the poorest countries) and somehow got into some pretty good private schools, and then into pretty good companies. Who are her parents? What kind of connections does she have?