This is a fantastic piece, very timely, evidently well-researched, and also well-written. Judging by the little that I know, it's accurate. Thank you for doing the work and sharing it with the world.
OpenAI may be in a more tenuous competitive position than many people realize. Recent anecdotal evidence suggests the company has lost its lead in the AI race to Anthropic.[b]
Many people here, on HN, who develop software prefer Claude, because they think it's a better product.[c]
Is your understanding of OpenAI's current competitive position similar?
---
[a] You may want to provide proof online that you are who you say you are: https://en.wikipedia.org/wiki/On_the_Internet%2C_nobody_know...
[b] https://www.latimes.com/business/story/2026-04-01/openais-sh...
[c] For example, there are 2x more stories mentioning Claude than ChatGPT on HN over the past year. Compare https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru... to https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
Did you do any extra investigation into Annie’s allegations? It feels to me like the unstated conclusion is that recovered memories can’t be trusted, which is a popular understanding but a very wrong one, put out by the now defunct and discredited False Memory Syndrome Foundation. It was founded by the parents of the psychologist who coined DARVO, directly in reaction to her accusing them of abuse.
Dissociation is real (I have a dissociative disorder, and abuse I “recovered” but did not remember for much of my adolescence and early adulthood has been corroborated by third parties) and many CSA survivors have severe memory problems that often don’t come to a head until adulthood. I know you didn’t dismiss her claim, but the way the public tends to think about recovered memories is shaped primarily by that awful organization.
My question is, how do you know when an enormous project like this, conducted over an 18-month time span, is "done"? I assume you get a lot of leeway from editors and publishers on this matter. How do you make the decision to finally pull the trigger on publishing?
All evidence today suggests Anthropic is passing OpenAI in relative and absolute growth. So where's the critical reporting? The DOD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.
For me, a big worry about AI is in its potential to further ease distorting or fabricating truth, while simultaneously reducing people's "load-bearing" intellectual skills in assessing what is true or trustworthy or good. You must be in the middle of this storm, given your profession and the investigations like this that you pursue.
Do you see a path through this?
As a reader, am I supposed to infer anything about evidentiary weight from these stylistic choices? When a single anonymous source's testimony is presented in a "declarative" narrative style like here (with the attribution in a less prominent position), should we read that as reflecting high confidence on your end (perhaps from additional corroboration not fully spelled out)? And does the fact that Altman’s non-recollection appears in parentheses carry any epistemic signal (e.g. that you assign it less evidentiary weight)? Or is that mostly a matter of (say) prose rhythm?
I’m sure you don’t know half of the totally fucked up things Sam did to get “revenge” for the slight of a leaking pool.
> [Graham's] judgment was based not on Altman’s track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable.
One thing I don't understand is why Paul Graham offered YC to Altman if he knew how slippery he was.
Altman describes his shifting views as genuine good faith evolution of thinking. Do you believe he has a clear North Star behind all this that’s not centered on himself?
Long time HN lurker, made an account just to say that :)
I have not read the article yet, because I get the physical magazine and look forward to reading it analog. I therefore only have an inconsequential question.
I love the New Yorker’s house style and editorial “voice,” and I have always been curious about the editing process. I enjoyed the recent exhibit at the NYPL, which had some marked up drafts with editor feedback and author comments.
Did you find that your editors made significant changes to the voice of the piece, and/or do you find any aspects of their editing process particularly notable or unusual?
Can’t wait to read this one, and hope the HN crowd treats you well.
Thank you for fielding questions. And please don't stop, your work is great.
Clearly he's straight up evil; between tanking the global economy, constantly lying, and raping his 3 year old sister, it feels really disingenuous to me to frame this as an open question.
I’m far more concerned with the 25 million dollar personal bribe OpenAI president Greg Brockman gave Donald Trump for his reelection -
the fact that a tech company can influence the outcome of an election directly is evil
Far more evil than Altmans shenanigans
My prima facie view of Altman has been that he presents as sincere. In interviews I have never seen him make a statement that I considered to be a deliberate untruth. I also recognise that claims about him go in all directions, and that I am not in a position to evaluate most of those claims. About the only truly agreed-upon aspect is how persuasive he is.
I can definitely see the possibility of people feeling like they have been lied to if they experienced a degree of persuasion that they are unaccustomed to. If you agree to something that, on reflection, you feel you wouldn't really have agreed to, I can see you concluding that you were lied to rather than accepting that you were intellectually beaten.
In all such cases where an issue is contentious, you should ask yourself what information would significantly change your views. If nothing could change your view, then it's a matter beyond reason.
I think you will agree that there is no smoking gun in this article; it is just a laying out of the allegations. Evaluating the allegations is tricky because it ultimately becomes a character judgement of those making the claims.
I have not heard a single person in all of this criticise Ilya Sutskever's character. If he were to make a statement to say that this article is an accurate representation of what he has experienced, it would go a long way.
I think Paul Graham should make a statement. The things he has publicly claimed are at odds with what the article says he has privately claimed. I have no opinion on whether one or the other is true, or whether they can be reconciled, but there seem to be contradictions that need to be addressed.
While I do not have sources to hand (so I will not assert this as true but just claim it is my memory) I recall Sam Altman himself saying that he himself did not think he should have control over our future, and the board was supposed to protect against that, but since the 'blip' it was evident that another mechanism is required. I also recall hearing an interview where Helen Toner suggested that they effectively ambushed Altman because if he had time to respond to allegations he could have provided a reasonable explanation. It did not reflect well on her.
I am a little put off by some of the language used in the article. Things like "Altman conveyed to Mira Murati" followed by "Altman does not recall the exchange." Why use a term such as 'conveyed', which might imply no exchange to recall? If a third party explained what they thought Altman thought, Mira Murati could reasonably feel that the information had been conveyed, while at the same time Altman has no experience of it to recall. Nevertheless, it results in an impression of Altman being evasive. If the text contained "Altman told Mira Murati," then no such ambiguity would exist.
"Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board." Is this still talking about Brockman and Sutskever? I just can't see this as anything other than a claim that he took advice from people he trusted. I assume those board members who were alarmed were not the ones he was trusting, because presumably the others didn't need to find out. The people he disagreed with still had votes, so any claim of a 'shadow board' with power is nonsense, and if it is a condemnable offence, is the same not true of the alignment of board members who removed him?
Josh Kushner apparently made a veiled threat to Murati. The claim "Altman claims he was unaware of the call" casts him as evasive by stacking denial upon denial, but absent any other indication in the article, it would have been more surprising if he did know of the call. I also didn't know of the call, because I am not either of those two people.
The claim of sexual abuse says via Karen Hao "Annie suggested that memories of abuse were recovered during flashbacks in adulthood." To leave it at that without some discussion about the scientific opinion on previously unremembered events being recalled during a flashback seems to be journalistically irresponsible.
I'm not pleased with the headline and the general framing that AI works. The plagiarism and IP theft aspects are entirely omitted. The widespread disillusionment with AI is omitted.
On the positive side, the Kushner and Abu Dhabi involvements (and threats from Kushner) deserve a wider audience.
My personal opinion is that "who should control AI" is the wrong question. In its current state, it is an IP laundering device, and I wonder why publications fall silent on this. For example, the NYT has abandoned their star witness Suchir Balaji, who literally perished for his convictions (murder or not).
I would love to read your piece and pay you and the New Yorker for it, but I am not interested in paying a subscription. If I could press a button and pay a reasonable one-time license such as $3 or $5 for just this article, or better yet a few cents per paragraph as they load in, I wouldn't hesitate.
However I'm not going to pay for yet another subscription to access one article I'm interested in.
I'm sure you can't do anything about this, but I just wanted you to know.
You deserve to be compensated for great journalism. In this case, unfortunately, I won't read it and you won't earn income from me.
> “Investors are, like, I need to know you’re gonna stick with this when times get hard,”
Should be:
> “Investors are like, I need to know you’re gonna stick with this when times get hard,”
Or, Mr. Farrow, can you post some evidence somewhere we can see?
I saw that before I read the article and it made me read the article in a very different way than I normally do. As I was reading, I found myself thinking, "Why is it worded that way? What else is the writer trying to say, or not say?"
It made reading this a lot more interactive than I normally associate with passive reading. Great job, Ronan!
Amodei and his sister saw through the behavior and called it out.
> “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals.
FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.
It wouldn't particularly surprise me if Sam Altman were racist, but I'm curious what the specific incident you observed was.
Can you share more about how this manifested?
What?
You can subtly see residue of this frustration in Dalton and Michael’s videos when Sam Altman comes up. It’s only thinly veiled that Sam was a snake while at YC.
At one point you mentioned an interaction with OpenAI staff where you were looking to interview AI Safety researchers. You were rebuffed b/c "existential safety isn't a thing". Does this mean that you could find no evidence of an AI Safety team at OAI after Jan Leike left? If you look at job postings, it does seem like they have significant safety staff...
Given that the initiative started circa 2017, much of the good remains. It's a hijacking of creative geniuses who got together, now turning into cash-cow tech.
Fantastic reporting.
Or maybe they were not so much "worried" as "hopeful" that they'd amass literally all the wealth in the world.
And not intending to defend the motives of anyone involved, but I'm hoping we can not worry about literally all jobs being destroyed, and AI companies amassing all the wealth in the world.
Don't we need at least some humans working and earning to buy these AI services? Am I not being imaginative enough? Is it possible for the whole economy to consist just of AI selling services to each other?
I realise that even if AI destroys most jobs, or even just a lot of jobs, and amasses most wealth, or a lot of wealth, it would still be a terrible thing for humans. The word "all" could have just been hyperbole, and it is still a valid point. I just want to know people's thoughts on whether entire replacement is possible.
https://www.cbsnews.com/news/sam-altman-universal-basic-inco...
https://finance.yahoo.com/news/sam-altman-wants-universal-ex...
https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...
However, it's a shame that the only way to subscribe to the print version is to pay $260 upfront for the yearly subscription. Meanwhile the digital version is $1/week ($52 upfront) for one year, or even just $10 for one month.
Noted on the critiques of the subscription options. I'll try to relay.
I just checked and I see: "This introductory offer is charged as $78 for the first year. Automatically renews at $195/year."
Ronan, interesting writing as always. I’m curious whether the role of the media as a pawn of the rich and powerful, used to sway perception and build narratives, concerns you, especially given your personal experiences with this and the reporting you’ve done. Are there reforms you think reporters and/or news organizations should adopt to make sure access doesn’t become direct or indirect manipulation, and how do you fight against that in your own reporting?
> When Sam Altman was ousted as CEO of OpenAI on November 17, 2023, Kara Swisher started tweeting up a storm of “scoopage,” as she referred to her calls with high-ranking tech figures. Over the days Altman was on the outside, Swisher helped to craft a narrative that a board stacked with his internal rivals had pulled off a coup without a legitimate reason. The face of the AI boom had been betrayed and deserved to retake his position at the helm.
> She’s chosen a few CEOs to more regularly criticize now that they don’t give her the access she craves, but there are many more whose narratives she will happily help turn into the official record whenever it will help them. Altman is in that group, and six months after his removal and return as CEO of OpenAI, it’s become very apparent that Swisher was echoing the Altman line and defending his interests.
I liked that mental image a lot! (I try to remain uncertain about whether Deckard was a replicant.)
> “Well, I like racing cars. I have five, including two McLarens and an old Tesla. I like flying rented planes all over California. Oh, and one odd one—I prep for survival. My problem is that when my friends get drunk they talk about the ways the world will end. After a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources. I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
> "If you believe that all human lives are equally valuable, and you also believe that 99.5 per cent of lives will take place in the future, we should spend all our time thinking about the future. But I do care much more about my family and friends.”
> "The thing most people get wrong is that if labor costs go to zero... The cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food. People pay a lot for a great education now, but you can become expert level on most things by looking at your phone. So, if an American family of four now requires seventy thousand dollars to be happy, which is the number you most often hear, then in ten to twenty years it could be an order of magnitude cheaper, with an error factor of 2x. Excluding the cost of housing, thirty-five hundred to fourteen thousand dollars could be all a family needs to enjoy a really good life.”
> "...we’re going to have unlimited wealth and a huge amount of job displacement, so basic income really makes sense. Plus, the stipend will free up that one person in a million who can create the next Apple.”
(* This was predictable from the title, because the question in it was inevitably going to trigger an avalanche of crap replies. Normally we'd change the title to something less baity, and indeed the article is so substantive that it deserves a considerably better one. But I'm not going to change it in this case, since the story has connections to YC - about that see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....)
> "Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence."
This is a very small detail, but an instinctive grimace crosses my face at the thought of this sort of Marvel reference, and I'm not entirely sure why.
The people shaping the future have no taste.
> Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied.
Of course, (despite the fact that Altman previously publicly stated that it was very important that the board can fire him) he got himself unfired very quickly.
https://en.wikipedia.org/wiki/Ronan_Farrow
It's got to be one of the most unusual biographies of a living person that I've ever come across. Nearly every sentence is a head-turner. If you made it up, no one would believe you.
>Altman continued touting OpenAI’s commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals.
(plus it finally resolves the mystery of "what Ilya saw" that day)
Also since it wasn't stated clearly
>“the breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India
That was Sydney if I understand correctly.
Articles critical of Airbnb, one of yc's biggest wins, also get flagged and taken down.
He doesn't have anything comparable to, say, the operating system platform dominance of Microsoft Windows, or service platform dominance of YouTube.
The entire value proposition of OpenAI is that billions of people don't know that anything other than ChatGPT exists, which is rather tenuous and volatile.
This statement rings true.
JL, PG has often mentioned, is his weapon for testing the “people” integrity aspect of YC / startups. It’s not lost on me that Altman and Thiel, both associated with YC, were useful only in the short term, which highlights how regular “character” evaluations are required at higher levels of responsibility.
I can't imagine having such uninspired thoughts and actually writing them down while in a role of such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.
The difference isn't that the average techie doesn't dream of making a billion by any means necessary; it's that most of us don't think we have a shot, so we stick to enabling lesser evils to retire with mere millions in the bank.
Or if the person lying is in a position of power?
This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.
If they discover the cure to cancer, I don't care how they did it. "I don't trust anyone who claims they're superhumanly intelligent" doesn't follow from "all they do is <how they work>".
The overall response, and particularly the body language, speaks volumes.
I only saw this thread by chance and almost didn't look, because the title made the piece sound like a flamebait blog post. Fortunately I saw newyorker.com beside the title and looked more closely.
Thank you for looking. Please do spread this kind of reporting in your communities, and subscribe to investigative outlets when you can.
Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.
I would deny that AI poses any such threat. There are actors who would use the tool in ways that threaten as you described, but that is a threat from said actor, not AI - unless you're claiming that an AGI would be capable of such independent actions.
AI is similar in transformative power to the internet, and it might even be greater, if it becomes more commonly available for use throughout the world. Whether that transformative power does good or bad really depends on the people using it, not on the tech. I would rather bet that the future is going to be better because of AI than imagine a worse future and act to stunt the tech.
The threat of AI is, after all, driven by the people who use it.
Without Sam Altman, the compute and the improvements that make LLMs a threat wouldn't have readily existed at all. He was the one who got the ball rolling, because of his desperation (SVB collapsed right before the hype bubble started), ego, and quasi-religious desires.
> In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.?
The main reason is that he gets all the downsides without the upsides. I know $5B is a lot but, for a 700B company, it isn't. If OpenAI was a regular for-profit, he would have been worth >$100B already.
This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.
The fact that some (usually toxic) individuals get there shows that the system is flawed.
The fact that those individuals feel like they can do anything other than shut up, stay low and silently enjoy the fact that they got waaaay too much money shows that the system is very flawed.
We shouldn't follow billionaires, we should redistribute their money.
Few people have left OpenAI over the years despite the safety abandonments, the non-profit status change, the deception, etc., because there is too much money involved. Here lies the actual rub. A lot of the people involved and named in the article are reprehensible: Kushners, Saudis, Emiratis, the PayPal mafia, VC folks with god complexes. But as long as they have the money, we have to dance to their tune.
We really really need a way for our society to be more equitable and hold these people responsible.
If for no other reason, given what happened when the board fired him... no. I'd say not.
I’ve always been a huge fan of Ronan Farrow’s journalism and willingness to speak truth to power. I think he’s pulling at exactly the right thread here, and it’s very important to counteract Altman’s reputation laundering given that we run a very real risk of him weaseling his way into the taxpayer’s wallet under the current administration.
Altman SAYS he does not recall the exchange. Not the same thing.
P.S. Really enjoyed Catch and Kill.
I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could even come to any other conclusion than the guy is deeply weird and off-putting.
Some concepts from the book:
> Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.
> Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.
> The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is a pattern of a liar.
> Trust your instincts over a person's social role (e.g., doctor, leader, parent)
Check and check.
OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.
> OpenAI is too important to trust sama with.
...wat? They made a chat bot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in the realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chat bot which can do some neat tricks. Remains to be seen whether they'll improve it enough to stay solvent.
The whole "super serious what-ifs" game is just marketing.
We only say a lot of CEOs are sociopaths because they're in that third category we haven't named, where they're very good at manipulating people, but also can feel conscience, guilt, remorse, etc, perhaps just muted or easier to justify against.
E.g. if you think you're doing something for the betterment of mankind, it doesn't really matter if you lie to some board members some year during the multi-decade pursuit.
>"He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram."
Before "don't be evil" was a cliche, I think it was a real guiding principle at Google and they built a world class business that way.
Facebook's rival ad platform didn't have search queries to target ads at. Aggressive utilization of user data was the only way they could build an Adwords-scale business. As they pushed this norm, Google followed.
Doomscroll addiction gets a lot of attention because engineers and journalists have children and parents. There are other risks though. Political stability, for example.
By the early 2010s, smartphones were reaching places that had almost no modern media previously, often powered by FB-exclusive data plans. The Arab Spring happened, then ISIS. FB-centric propaganda seemingly played a major role in a major conflict/atrocity in Burma. Coups in Africa powered by social-media-based propaganda. Worrying political implications in the West. Unhinged uncle syndrome. Etc. Social media risks/implications were more than just "inconvenience."
At no point did we really see tech companies go into mitigation mode. Even CYA was relatively limited. There was no moment of truth. It was business as usual.
So... I think OpenAI's initial charter was naive. Science fiction almost. It was never going to withstand commercial reality, politics, competition and suchlike. I think these are greater than the individuals involved.
That doesn't mean we should ignore, excuse or otherwise tolerate lack of integrity. But, I don't think it is a way of reducing risk.
Whether the risk is skynet, economic turmoil, politics, psych epidemics or whatever... I don't think the personal integrity of executives is a major factor.
Literally, the only hope for humanity is that large language models prove to be a dead-end in ASI research.
---
1: “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” — I guess now I know of two people with these traits.
I just don’t feel like OpenAI has a legitimate shot at winning any of the AI battles.
Therefore, I feel like “Sam Altman may control our future” is quite a stretch.
You might be. Or at least I feel like Gemini is actually dumber than a house of bricks. I have multiple examples, just from last week, where following its advice would have led to damage to equipment and could have hurt someone. That's just from trying to work on an electronics project and asking Gemini for advice based on pictures and schematics; it just confidently states stuff that is 100000% bullshit, and I'm so glad that I have at least a basic understanding of how this stuff works, or I would have easily hurt myself.
It's somewhat decent at putting together meal plans for me every week, but it just doesn't follow instructions and keeps repeating itself. It hardly feels worth any money right now; it's like some kind of giant joke that all these companies are playing on us, spending billions on these talking boxes that don't seem that intelligent.
I also use claude at work, and for C++ programming it behaves like someone who read a C++ book once and knows all the keywords, but has never actually written anything in C++ - the code it produces is barely usable, and only in very very small portions.
Edit: I just remembered another one that made me incredibly angry. I've been reading Neuromancer on and off, and I got back into it, but to remind myself of the plot I asked Gemini to summarise it only up to chapter 14, and I specifically included the instruction that it should double check it's not spoiling anything from the rest of the book. Lo and behold, it just printed out a summary of the ending and how the characters' actions up to chapter 14 relate to it. And that was in the "Pro" setting too. Absolute travesty. If a real life person did that, I'd stop being friends with them, but somehow I'm paying money for this. Maybe I'm the clown here.
Even after the countless shady things big tech has done, people just fall for that convenience left and right. Paying $200/mo to outsource my thinking skills to some rent-seeking cartel that steals and is territorial is wild.
We're just in this shitty pit of despair where people are desperate. It's difficult to campaign for good when you're struggling and capital can jerk people around.
People pursue good for the sake of good at cost to themselves when times are very good or times are very, very bad.
Right now times are only merely very bad.
Isn't this really what everything is about? A pure research non-profit transitioned to a revenue generating enterprise because it had to, and a lot of people don't like that. Does that make it evil?
It's romantic to think that the magic of science and research can stand on its own, but even Ilya has admitted more recently that SSI needs to ship something consumer facing.
Anthropic, the lab that put all of its social capital in the safetyism basket, is having the exact same realization, with Claude Code being a mess of technically reckless vibe coded slop that nevertheless is the cash cow for the company.
Maybe it's time for everyone to realize that for an innovation this big to bear fruit, it either needs to be state funded or privately funded, the latter requiring revenue and a plausible vision of generating ROI.
lol do you think these guys have ever been hit? Let alone in the face. They’d probably be less eager to mouth off as much as they do if so.
shamefully have to admit that my monkey-brain smirked because of an accidental 67-meme in a serious article.
The whole AI safety thing has always seemed extreme to me and has turned out to be a storm in a teacup. All those prominent people who used to tell us how AI will end humanity seem to have stopped talking about it.
I get the sense that Altman is not a particularly likeable person, but Bill Gates and Steve Jobs both seem to have scored a 10/10 on the "is this guy a jerk" rating - it's common for tech CEOs.
So, the article and headline are dramatic but not much really there.
I think all the AI safety obsessed people turn out to have been the ones off course.
The only high profile person in AI I’d consider perhaps worthy of trust is Demis Hassabis.
for me, the answer >>> we need to create our own systems. decentralized agent networks, etc.
if you don't want to depend on one person or one company controlling your AI, build your own infrastructure.
the concentration of power in one/two persons is the problem.
Still, there's something oddly reassuring here: if you believe "AI safety" is essentially a buzzword (as I do), then this whole affair comes down to people squabbling over money and power. There really is nothing new under the sun.
We have drastically escalated the claims necessary to motivate startup employees. It used to be that you could merely dangle an interesting problem in front of a researcher. Then you could earn millions, then billions. TAMs in the trillions. AGI will destroy humanity unless you, personally, step in. Elon is talking about Kardashev III civilizations. The universe cannot bear the hype being loaded upon it.
Anyone who deliberately seeks power should not be given it.
The real question is: can anyone be trusted if the fever dreams of super-intelligence come true? Go ahead and replace Sam Altman with someone else - will it make a difference? Any other CEO is going to be under the same overwhelming pressure to make a profit somehow. I think the OpenAI story is messier because it was founded for supposedly altruistic reasons, and then changed.
Methinks many of Altman's detractors protesteth too much. He's doing his job as it is defined (make OpenAI profitable.) Nothing of substance in this article seemed to make him exceptionally "sociopathic" compared to any other tech CEO. It goes with the territory.
What depressed me most is that trillions of dollars are being raised for building what will undoubtedly be used as a weapon. My guess is the ROI on that money is going to be extremely bad for the most part (AI will make some people insanely rich, but it is hard to see how the big investors will get a return.) Could you imagine if the world shared the same vision for energy infrastructure (so we could also stop fighting wars over control of fossil fuels and spewing CO2?) A man can dream...
What? OpenAI was a non-profit until Sam made it for-profit.
The last person this happened to was Sam Bankman-Fried, as investors and regular folk finally realized he was full of complete shit and could only talk the game for so long until the truth emerged.
How is this even a question?
These sociopaths are so good at giving away nothing. He managed to engender sympathy instead of saying "I'm not gonna talk about anything that happened then".
Also very weird how many of these people are so deeply-linked that they'll drop everything they're doing just to get this guy back in power? Terrifying cabal.
No, he cannot.
https://www.reuters.com/technology/openai-signs-deal-with-co...
This is a damage control piece, and you see that the most stinging comments here get downvoted.
TLDR, but just the heading is already ugly. No single person, no matter how nice they are, should be able to control our future. Power corrupts - what trust? We are supposed to be a democratic society (well, looking at what's going on around us, this is becoming laughable).
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
2. You cannot "control" superintelligent AI.
I could more or less infer the rest from that.
The biggest flaw is that it spends way too much time on high-school level drama and "he-said-she-said" gossip about Sam Altman’s personal life instead of focusing on the actual technical and corporate capture of OpenAI.
The author treats the "nonprofit mission" like some holy quest that was "betrayed," when anyone with a brain in tech saw the Microsoft deal as the moment the original vision died. Instead of a hard-hitting look at how compute-monopolies are actually forming (MSFT AMZN NVDA and circular debt dealing inflating the AI bubble that could crash the economy), we get 5,000 words of hand-wringing over whether Sam is a "nice guy" or a "liar."
Who cares???????
The board failed because they had no real leverage against billions of dollars, not because they didn't write enough Slack messages. It's a long-winded way of saying "Silicon Valley has internal politics," which isn't news to anyone here.
If you're talking about the school in Iran, that wasn't OpenAI. That was a Palantir system that predates OAI by a few years, and it was due to a bad entry in a spreadsheet that showed the building as military housing. Which it was, a few years ago.
180 people lost their lives because of bad data in a spreadsheet, not AI.
Looks like that's the only thing that ever happens in a war, according to useful westerners.
I’d be more concerned about Anthropic both being in the good graces of the public and having access to all of our computers indirectly with Claude Code.
"the local drug-dealing pimp is so passe, we need to investigate the most upstanding members of the community just to be sure" is a frankly insane strategy