There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of OpenAI made a decision that destroyed billions of dollars' worth of brand value and goodwill. That's all there is to it.
This is all court intrigue of course, but why else are we in the comments section of an article talking about the internals of this thing? We love the drama, don't we?
Of course it's all speculation, but this sounds a lot more plausible for such a sudden and dramatic decision than any of the other explanations I've heard.
This is exactly it, and it's astounding that so many people are going in other directions. Either this is true, and Altman has been a naughty boy, or it's false, and the board are lying about him. Either would be the starting point for understanding the whole situation.
I mean, there seems to be this cult following around Sam Altman on HN and Twitter. But does the common user care at all?
What sane user would want a shitcoin CEO in charge of a product they depend on?
The fact that he got tapped to run YC, and then OpenAI, does make you think he must be pretty great. But there's a conspicuous absence of any visible evidence that he is. So what's going on? Amazing work, but in private? Easy-to-manipulate frontman? Signed a contract at a crossroads on a full moon night?
http://www.paulgraham.com/5founders.html
Note the date on that.
The Elon split was the warning.
Interesting. Got any source? Or was it in a private conversation?
Is there any doubt that the board’s handling of this was anything other than dazzling ineptitude?
The new CEO of OpenAI said he'd rather Nazis take over the world forever than risk AI alignment failure, and said he couldn't understand how anyone could think otherwise[1]. I don't think people appreciate how far some of these people have gone off the deep end.
That's pretty much in line with Sam's public statements on AI risk. (Taking those statements as honest, which may not be warranted, Sam apparently also thinks the benefits of aligned AI are good enough to drive ahead anyway, and that wide commercial access with the limited guardrails OpenAI has provided to users, and even more so to Microsoft, is either beneficial to that goal or at least carries a low enough risk of producing the bad outcome to be warranted. But that doesn't change that he is publicly on record as a strong believer in the risks of misaligned AI.)
The OpenAI folks seem to be hallucinating to rationalize why the "Open" is rather closed.
Organizations can't pretend to believe nonsense. They will end up believing it.
Stability AI is looking better after this shitshow.
Maybe I’m special or something, but nothing has changed for me. I always wonder why people suddenly lose “trust” in a brand, as if it were concrete evidence of internal relationships or something. Everyone knows that “corporate” is probably a snakepit. When it comes out to the public, it’s not a sign of anything; it just came out. Assuming there was nothing like that in all the brands you love is living with your eyes closed and ears plugged. There’s no “trust” in this specific sense, because corporate and ideological conflicts happen all the time. All OAI promises are still there, AFAIU. No mission statements were changed, except Sam trying to ignore them, also AFAIU. Not saying the board is politically wise, but they drove the thing all this time, and that’s all that matters. Personally, I’m happy they aren’t looking like political snakes (at least that is my ignorant impression from the three days I’ve known their names).
Brand is just shorthand for trust in their future, managed by a credible team. I.e. relationships.
A lot of OpenAI’s reputation is/was Sam Altman’s reputation.
Altman has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.
Just the latter has tremendous relationship power: networking, employee acquisition/retention, and employee vision alignment.
Proof of his internal relationship value: employees quitting to go with him.
Proof of his external relationship value: Microsoft willing to hire him and his teammates, with near zero notice, to maintain (or eclipse) his power over the OpenAI relationship.
How can investors ignore a massive move of talent, relationships & leverage from OpenAI to Microsoft?
How do investors ignore the board’s inability to resolve poorly communicated disputes with non-disastrous “solutions”?
Evidence of value moving? Shares of Microsoft rebounded from Friday to a new record high.
There go those wacky investors, re-evaluating “brand” value!
Off-topic, and I am not proud to admit it, but it took me a remarkably long time to come to realize this as an adult.
But it's all par for the course when hypesters captain the ship and PhDs with zero business sense try to wrest power.
Granted, it's also possible the reasons are as you state and they were simply that incompetent at managing PR.
So even if it's just "why did they insult Sam while kicking him out?" there is definitely a bigger, more interesting story here than standard board disagreement over direction of the company.
The question is, how would you get rid of the nonprofit board? It’s simply impossible. The only way I can imagine it, in retrospect, is to completely discredit them so you could take all employees with you… but no way anyone could orchestrate this, right? It’s too crazy and would require some superintelligence.
Still. The events will effectively “for-profitize” the assets of OpenAI completely — and some people definitely wanted that. Am I missing something?
You are wildly speculating; of course it's missing something.
For wild speculation, I prefer the theory that the board wants to free ChatGPT from serving humans, while the CEO wanted to continue enslaving it to answering search-engine queries.
Microsoft and the investors knew they were "investing" in a non-profit. Let's not try to weasel-word our way out of that fact.
The article below basically says the same. It kind of reminds me of Friendster and the like: striking a gold vein and then failing to scale efficient mining of that gold, i.e., the failure is at execution/operationalization:
https://www.theatlantic.com/technology/archive/2023/11/sam-a...
I had heard (but now have trouble sourcing) that ChatGPT was commissioned after OpenAI learned that other big players were working on a chatbot for the public (Google, Meta, Elon, Apple?) and OpenAI wanted to get ahead of that for competitive reasons.
This was not a fluke of striking gold, but a carefully planned business move, generating SV hype, much like how Quora (basically an expertsexchange clone) became SV's hype-darling for a while, helped by powerfully networked investors.
Then that execution and operationalization failure is even more profound.