I really thought he wasn't like the previous generations of tech leaders - as you mentioned OpenAI (with him in charge) seemed to be genuine about making a product that could improve people's lives.
He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real world harm like suicide, and possibly even help cure disease too.
Then they drop this and it just doesn't gel. So much of what they've done since has just doubled down on the Zuck-esque scumminess and greed too.
Part of me still sees Dario as genuine in the way that Sama seemed back in 2024, but I'm sure once he has enough investor pressure he'll cave the same way too.
He is a con man. Of course he’s charming and convincing, that’s how he ended up where he is. But he’s just as full of it as Musk when he was waxing lyrical about saving the world and going to Mars. They lie very convincingly.
I think his board fight within OpenAI where he essentially lied to the board, his obsession with retinal scanning everyone for his biometric cryptocurrency (Worldcoin), and how he left Y Combinator are just evidence that he's not very heroic. Most cringe to me is that he and many others seem aware that what they are doing is corrosive and harmful to society on some level, as Altman has admitted to having a bunker somewhere around Big Sur [0]. Which…WTF.
[0] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
Not too familiar with that history, but he still is listed as a courtesy credit/reviewer at the end of PG's blog entries, so I assume he didn't have too much of a bad exit?
This is a conflict of interest, and a very obvious one I think. He tried to have it both ways and was forced to choose in the end. I think putting himself in that situation rather than resigning up front to pursue OpenAI ambitions says a lot about his character.
It could prevent suicide, maybe, but we know that it does cause suicides, at least in some cases. Seems like a poor value proposition.
The thing he does is convince investors to give him billions of dollars to build what he wants. Where exactly does that leave us?
To me, this just came off as pathetic. It hasn't solved anything and there's no reason to believe it ever will. The whole question is completely pointless except to put the idea in viewers' heads that ChatGPT will soon revolutionize science, with no actual substance behind it. It's not even a question, there's only one possible answer. He's holding the guy verbally hostage just to manipulate dumb viewers.
So anyway that's the only memorable clip I've seen of Sam Altman, and based on that alone, fuck that guy.
Altman's reaction was very telling of the kind of person he is, just immediately lashing out at Gerstner in a childish way, asking if Gerstner wanted to sell his shares because he could find a buyer in no time.
It was a pathetically immature reaction. I wouldn't expect that from any kind of professional, even less someone who has held positions as Altman has and now sits at the top of the leadership of a company sucking up hundreds of billions in investment.
Apart from that clip there's also the whole saga of sama @ Reddit, full of lies, deceptions, and the same kind of immature attitude peppered across Reddit itself.
After glazing OpenAI and Sam personally for 45 minutes straight. But as soon as Sam was questioned in the slightest, he exploded.
If you're familiar with nepobaby brats and narcissists, this is not surprising.
Why? The other person can say "Yes". That doesn't mean ChatGPT has the capability to do it.
"No" is not a reasonable answer to the question. It's like asking an atheist "if god and Jesus and all the angels came to earth and showed themselves for all to see, would you believe in god then?" Well yes of course, I believe in all the things we can all see. The lack of evidence is the whole point.
So asking "if there was evidence would you think differently?" is either a fundamental misunderstanding of the person's position, or just a cheap ploy to manipulate people. In Sam's case I'm thinking it was the latter. He's a clever guy, he knows he's on camera. He asked that question just to plant the idea in people's minds - not the guy he was talking to, that guy didn't even need to answer the question because as already said there's only one answer to it. But to everyone watching, Sam basically just put it out there that ChatGPT solving quantum gravity is within the realm of possibility. Which it probably isn't.