This makes no sense given that Ilya is on the board.
It’s just speculation, anyway. Every explanation I’ve heard is contradicted by some piece of the evidence, so it’s likely at least one thing “known” by the public isn’t actually true.
[1] https://twitter.com/geoffreyirving/status/172675427761849141...
The "Sam is actually a psychopath who has managed to swindle his way into everyone liking him, and Ilya has grave ethical concerns about that kind of person leading a company seeking AGI, but can't out him publicly because so many people are hypnotized by him" theory is definitely a new, interesting one. There has been literally no moment in the past three days where I could have predicted the next turn this would take.
Meanwhile, the Google AI folks have a long track record of making very misleading statements in public. I remember that before Altman came along and made their models available to all, Google was fond of responding to any OpenAI blog post by claiming they had the same tech but way better; they just weren't releasing it because it was so amazing it wasn't "safe" enough to do so yet. Then ChatGPT called their bluff, and we discovered that in reality they were way behind and apparently unable to catch up. Also, there were no actual safety problems, and it was fine to let everyone use even relatively unconditioned models.
So this Geoffrey guy might be right, but if Altman were really such a systematic liar, why would his employees be so loyal? And why is it only AI doomers who make this allegation? Maybe Altman "lied" to them by claiming key people were just as doomerist as they are, and when they found out it wasn't true they wailed?