On paper the board has the power: it's their job to monitor and hire/fire the execs. But they're dependent on the execs for info. Also, having one investor own half, plus the founders guiding the roadmap and having sway over whether key hires leave, is where the real power is…
It's way worse for the OpenAI board position in this case, because the company is basically just an assemblage of researchers, engineers, and Microsoft Azure credits. They can replace the CEO if they want, but all the value of the company can walk away if they don't like it.
This is generally true of any company
They really can't, because a majority of their compensation is in now-outsized amounts of OpenAI equity (because it was worth much less when it was granted). If employees decided to walk away, rendering their vested equity worthless, they wouldn't get anywhere near the same value in, say, Microsoft equity as a signing bonus to make it worthwhile.
Any senior OpenAI employee with decent amounts of vested equity on low valuations has strong financial incentives to stick with the company, and also strong financial incentives to convince others to stick with the company.
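To make the parent's incentive argument concrete, here is a toy back-of-the-envelope sketch. All figures are invented for illustration (the grant size, valuations, and signing bonus are hypothetical, not actual OpenAI or Microsoft numbers):

```python
# Hypothetical illustration of the stay-vs-leave incentive math.
# All numbers are made up for the sake of the example.

def vested_value(grant_value, grant_valuation, current_valuation):
    """Current paper value of equity granted at an earlier valuation."""
    return grant_value * (current_valuation / grant_valuation)

# Suppose an employee received $1M of equity when the company was
# valued at $14B, and the valuation has since risen to $80B.
stay = vested_value(grant_value=1_000_000,
                    grant_valuation=14e9,
                    current_valuation=80e9)

# Even a generous hypothetical signing bonus elsewhere doesn't compare.
leave_bonus = 1_500_000

print(f"stay:  ${stay:,.0f}")
print(f"leave: ${leave_bonus:,.0f}")
```

Under these invented numbers the vested stake is worth several times any plausible signing bonus, which is the whole point: walking out means torching the appreciation, not just the grant.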
Some more thoughts from back when employees "threatened" to leave: https://www.businessinsider.com/openai-employees-did-not-wan...
> Varys smiled. “Here, then. Power resides where men believe it resides. No more and no less.”
> “So power is a mummer’s trick?”
> “A shadow on the wall,” Varys murmured, “yet shadows can kill. And ofttimes a very small man can cast a very large shadow.”
The humanitarian organization I worked for purged every last small-head US dollar from its salary payments within 24 hours, because staff brought us a rumour circulating in the market that banks would no longer accept small-head dollars.
Rumors can kill (literally, or maybe if you're lucky, only morale).
I think you fundamentally misunderstand the role of the board of directors. It's often been said that the board (of any company) has basically one job: to hire (and fire) the CEO. While that may be a slight exaggeration, the wisdom behind that quote is that things turn out badly when the board meddles in decisions of a company's executive leadership - if they don't like the decisions being made, they should replace the CEO, and that's where their power lies.
You talk about "filing formal complaints" - what does this even mean?? They're the board, who else would they file formal complaints to? "Hand off Sam's responsibilities to other people"?? Again, any corporate governance expert would say that's a recipe for disaster, never mind that it's not even feasible given the way corporate hierarchies work.
I've commented many times before that I think the way the board handled Altman's firing was, at best, woefully naive, and their communication at the time (even after he was fired) abysmal. But neither do I think it was some sort of "coup", and your recommendations simply don't make sense.
Yes. In a formal board meeting, not in a secretive one behind Altman and Brockman's backs. That's what makes it seem more like a coup.
> They're the board, who else would they file formal complaints to?
Like I said in other comments. It's always useful to have a paper trail.
> "Hand off Sam's responsibilities to other people"?? Again, any corporate governance expert would say that's a recipe for disaster, never mind not even feasible the way corporate hierarchies work.
If Altman is neglecting and being opaque about AI safety, as Toner claims, then the board should appoint someone to lead this effort and be fully transparent with them. I don't see how this is so far fetched.
It isn't justified, it's just misleading propaganda. Unfortunately through repetition and the enthusiasms of various fandoms, it's gotten lodged in the public mind.
> "We were very careful, very deliberate about who we told, which was essentially almost no one in advance, other than obviously our legal team and so that's kind of what took us to November 17."
If that doesn't sound like a secret coup, I don't know what does. Like, yes, it is their job to hire and fire the CEO so it's not really a coup, but when you do your "job" in secret instead of in the open that's the vibe you give off.
For their own records. They can use it as justification for disciplinary action or as legal ammunition. It is always useful to document things in writing. It's the same reason companies will put you on a PIP.
Not to mention that if he was vengeful, it would be much easier to do damage from within and from a position of power.
Joke aside, the CEO can be a promoter of hype with no particular management skills; if the shareholders, via the board, are happy with that, it's their money and their risk.
Does the same reasoning not apply to supposed-to-be-trusting business relationships?
Yes. Nobody involved in this comes out looking very good IMO.
OpenAI is fated to implode in scandal. His 7 trillion dollar funding round ambition is a blaring siren to anyone with a working brain that he’s out of control and capable of doing real damage. He wants a sum of money that could crash the world economy. He thinks he should have the power of the world’s most powerful government. Solely dictated by his whim. He makes Napoleon and Hitler look like pikers.
He is Musk in 2016, when some of us knew who he was, and the fans were still enthralled.
You will look back and say “No one knew” and this comment will be there. We knew.
Musk will end up the Bush to his Trump. We will fondly recall the crimes of a guy who just wasn’t as bad as what followed. Why do you think Musk hates him so much? He’s a better Musk than Musk. He’s the guy who took OpenAI out from under him. It’s envy.
Shouldn't the board be capable of agreeing on a press release among themselves?
I'd guess the board relied on some generally-bad legal advice from their attorneys about how they could get personally sued for libel for airing dirty laundry etc. Not having huge financial stakes, and seemingly having been picked for being conservative rather than risk takers, they followed this to their detriment. Staying quiet let Altman build a following and lead the whole situation, something Altman is seemingly fantastic at doing.
So what was ultimately a pivotal governance moment, enforcing the nonprofit charter, was played quite poorly and ended up being a coup by Altman to escape that one layer of "boxing". Which, given how poorly we've done so far at "aligning" centralized capitalism to the needs of most individuals (of which this seems to be a subproblem), wasn't terribly surprising!
While in retrospect I think the board's actions may have been warranted, their communication was absolutely atrocious.
Did they not at least have a briefing packet or something prepared to give to major investors (ie, Microsoft) day-of if they were so worried about leaks that they couldn't get the major investors (ie, their own bosses) on board ahead of time?
So why would you rehire him and then resign? Why not leave him fired and resign?
Why did Microsoft care so much about Altman? They didn't have a board seat, so too bad so sad. They should have had no sway. If they want OpenAI, then they can buy it. Why would they even want Altman to come to their company? What possible value could Altman bring to Microsoft?
Just none of it makes any sense. In fact, the most clear thing is Altman's negative behavior and mode of operation. I remember his very loud, aw-shucks pronouncements of having no shares in OpenAI. Yeah, right. A career VC set up a new company structure or structures where he wouldn't benefit at all financially.
In another timeline, OpenAI firing him, naming an interim CEO, ignoring all else, and then hiring a new CEO would have all gone just fine. I don't know why Microsoft and others made a huge kerfuffle over it. And I also don't know why the board released such a cryptic press release when they could have provided details.
I just can't wrap my head around any of it, even letting in conspiracy theory lines. Lol. It just makes no sense and gives me a feeling that no one involved has any clue about anything, including Satya Nadella.
When you have irreconcilable differences with the board you could theoretically jump ship and start a rival company. Usually, in practice, that’s impossibly hard and yet Microsoft announced that they intended to do exactly that with funding and stock matching. For some reason that was turned down in favor of staging the equivalent of an in-country coup.
If you get caught breaking the law on vacation abroad and your response to being arrested is to take control of the country in retaliation then you are a very powerful, persuasive, or threatening person indeed.
The board's major mistake was not communicating why he was let go.
My guess is that the likely reason why employees threatened to go was that they felt Altman had the best chances of making the for-profit arm's shares skyrocket. As a non-profit company's board, I'd be fine letting those people walk out the front door along with the CEO that was just fired.
It is my understanding that the key personnel who developed the actual technology were not part of the group threatening to leave. It was mainly the group in the for-profit arm that Altman had trojan-horsed into the company structure.
> This included the company's key leadership.
I'm not aware of the machine learning researchers responsible for the core technology threatening to leave. Who were they?
Instead, they did not even attempt to communicate any rationale behind their actions.
But if they had the moral conviction, it seems like this would have been the right choice to make, because it would have diluted Altman's power (unless they trust Nadella even less than they trust Altman?)
But, what kind of Wild West is this? It's all so unhinged and strange.
There are no NDAs, non-competes or other impediments? MS just guts OpenAI at its whim?
>there would have been no OpenAI left to preside over.
...If MS can do this, then there's already no OpenAI left to preside over.
What I did see was so much incompetence at the one thing I expect a board to be at least okay at. Hiring and firing.
For that alone, I think the board reshuffle was good. Regardless of who you support in all of this.
Sure, the rectification of names is an improvement in a sense; what is actually needed is a working Altman-control system.
Microsoft needs the sold-out version of OpenAI so they can make as much money as possible without anyone making pesky noises about ethics and safety.
Outsource all of the risk to a non-profit, but still be able to run it, and snag up all of the researchers if/when something gets ugly.
It's like Elon 'Electric Jesus' Musk: without him, they're just selling shitty cars nobody quite likes. So he can get paid more than all the profit Tesla ever made, because without him there would be zero profit anyway.
Talk to Tesla owners: they are surprised by how shit the car is, but they feel like mini Elons. That's probably similar at OpenAI ?
Almost the entire company was threatening to quit.
If it were possible to simply replace all of OpenAI, then you could just do that now as an outside party.
So the board's choice was to either bring back Sam or watch the entire company go under.
This is the part that perplexes me. A CEO being fired is not an unusual occurrence. What about Sam Altman led such a huge number of employees to threaten to follow him out the door? Was it that the board's actions were viewed as internally unjust? Was it Altman's power of persuasion? Was/is Altman viewed by the staff as bringing something irreplaceable to the table in terms of talent, skill or ability?
I know HN leans engineering/safety/reliability/labor/pedantic (like chasing the absolute truth), but at the end of the day, companies scale under the likes of Jobs/Musk/Sam/Zuck even if it involves deceit or a reality distortion field.
Sometimes people just can't handle the truth or don't believe in the vision of visionaries. So, they have to fib a little to the decels and normies. Even Larry / Sergey 'lied' to Eric during Google's growth phase. It's only when they brought in normie Sundar that Google became risk averse. And look where that got Google.
If I have to bet my last $, I'd bet on Elon/Sam/Zuck/Jobs than Helen/Jan/Sundar.
https://www.standard.co.uk/news/tech/what-is-a-decel-dueling...
Sam may have been fired from YC but it does look like they believed in him.
Guy can’t code, can’t design, can’t publish, can’t climb a traditional corporate ladder with even modest guardrails against fraud, can’t keep his hand out of the cookie jar. Can lie, threaten, cajole, manipulate, bribe with zero hesitation or remorse.
I’ve been short this nonsense for a decade, and it’s done no favors to me on remaining solvent, but when the market gets rational, it usually gets rational all at once.
Karpathy, Ilya, Yoon right off the top of my head, countless others. LeCun woke up the other day and chose violence on X. Insiders are getting short like Goldman dealing with Burry.
Guy has nine lives, already been fired for fraud like three times and he’s still living that crime life, so who knows, maybe he lasts long enough to put Ice Nine in my glass after all, but this can only happen so many times.
2008: "You could parachute [Sam Altman] into an island full of cannibals and come back in 5 years and he'd be the king. If you're Sam Altman, you don't have to be profitable to convey to investors that you'll succeed with or without them. (He wasn't, and he did.) Not everyone has Sam's deal-making ability. I myself don't. But if you don't, you can let the numbers speak for you." https://paulgraham.com/fundraising.html
2014: "Of all the people we’ve met in the 9 years we’ve been working on YC, Jessica and I both feel Sam is the best suited for that task. He’s one of those rare people who manage to be both fearsomely effective and yet fundamentally benevolent..." https://www.ycombinator.com/blog/sam-altman-for-president
Now, this isn't about pg specifically. Maybe he had reservations at the time but still thought he was making the right decision, maybe he's since changed his mind, maybe he hasn't but has pretty well moved on from this scene. Not interesting.
I'm more interested in whether Altman, and Musk, and Zuckerberg, and Bezos, and Ellison, and all the other amoral wealth-hoarders, are finally becoming obvious enough now that people might finally begin to see them as the yucky byproducts of a yucky system.
Maybe a moralistic, basically decent person couldn't get ChatGPT launched and turned in to the household topic of discussion it is today; maybe nice people can't build cheap rockets. Maybe in the future, when making an endorsement for a leadership position in some company, someone might be brazen enough to say aloud, "I believe this person is sufficiently nasty to make us all more successful."
And so then the question is, does society net benefit more from the moralists or more from the capitalists? Do we accept that Sam Altmans are necessary for cool technology? How many Altmans can we have before something goes horribly, irreversibly wrong?
> I’ve been short this nonsense for a decade,
...this nonsense = deep learning? I mean the current research wave is only just barely 10 years old and its financial/investable effects definitely less than 10 years old....
Or is "this nonsense" something else? Can you short Sam Altman the individual? takemymoney.jpeg I will chug that ice-nine kool-aid.
> LeCun woke up the other day and chose violence on X.
maybe just a link?
I'm curious: are you short all of SPX or just tech companies?
As far as I know, she never retracted her claim of "sexual, physical, emotional, verbal, and financial abuse". And he never admitted to any of the above.
I tried to Google for a concrete, high quality source, but I cannot find anything. The best I could find:
> Altman was allegedly fired from his role as the President of YCombinator for putting his own interests first.
Note the term "allegedly". Is there a YC press release or an insider's account of his termination? Otherwise, this is just a rumor.
To be clear: I don't write this post as a shill for Altman, nor YC.
There are other sources.
The same person who was pushed out after her coup attempt failed. Her move was very likely motivated by much more than Sam lying: she has stated in other articles that she wants AI regulated by the government, that GPT shouldn't have been released to the public because it's dangerous, and the interim CEO she pushed as Sam's temporary replacement had made statements about wanting to pause AI development for safety reasons.
If what she said was true, I'd leave the board if I were in her position. I would understand why they decided to take him back but decide that it's not worth working with someone who can't be trusted.
Hell, Sam wants AI regulated by government...
... after OAI is well-established and locked in, of course. A little ladder-pulling and regulatory capture is something he's entirely aligned upon.
I guess there's no way to know this but how much better than the "Null CEO" did he actually do? What if he happened to preside over a successful company that would've performed as well or better without him?
Much like pg dismissing Facebook as lame when it first launched, I was exponentially mistaken: https://paulgraham.com/swan.html
> History tends to get rewritten by big successes, so that in retrospect it seems obvious they were going to make it big. For that reason one of my most valuable memories is how lame Facebook sounded to me when I first heard about it. A site for college students to waste time? It seemed the perfect bad idea: a site (1) for a niche market (2) with no money (3) to do something that didn't matter.
> One could have described Microsoft and Apple in exactly the same terms.
It's worth remembering that even the big successes often don't realize they're onto something -- you'll be less likely to dismiss the next big wave when you see it forming.
To me, this is an important part of the story too, which Toner doesn't go into. I wonder why?
> “He was trying to claim that it would be illegal for us not to resign immediately, because if the company fell apart we would be in breach of our fiduciary duties,” Toner said. “But OpenAI is a very unusual organization, and the nonprofit mission — to ensure [artificial general intelligence] benefits all of humanity — comes first.”
> Toner said she viewed the lawyer’s words as an “intimidation tactic” — and replied that Altman’s removal would “actually be consistent with the mission” of the nonprofit to ensure AI safety above an individual company’s success.
https://nypost.com/2023/12/08/business/ex-openai-board-membe... https://www.wsj.com/tech/ai/helen-toner-openai-board-2e4031e...
The case of Altman seems to be a “where there’s smoke” thing to an extent, but I also am not inclined to trust the board over the employees who do seem to like him.
Insofar as he steered OAI to be a consumer product company rather than a research institution, that’s an acceptable outcome to me. The board had a problem with that, fought him, and lost.
I wouldn’t assume that boards care about the mission more than the employees. Employees are the ones signing up to implement it, board members are often just in it for status and are total dilettantes.
I think in this case there’s a lot of signs pointing to the board feeling that their status was threatened more than anything. If their reasoning is that releasing ChatGPT without their permission was “unsafe,” give me a break.
Another board member is the wife of actor Joseph Gordon Levitt. Same story -- no notable achievements or experience running a business.
A more competent board would have handled things better and may have prevailed in their ouster of the CEO. Here they just got outmaneuvered even though they may have been right in their call.
To me, this looks more like a case where Altman had a completely different vision for the company than the original board had or knew about. In Toner's case, Altman likely did not respect her background nor what she brought to the table as a board member.
This was my takeaway when the situation first came up. Toner has zero experience managing or running a business, and McCauley has a decade-old business with only a few dozen employees. Neither of them scream "strong candidate" for a board position.
He offered to move Sam and whoever supported him under the Microsoft wing with an independent for-profit company with the same access to unlimited compute that they needed to train their models.
With this looming over their heads the employees/shareholders ultimately had to choose between (a) staying at OpenAI without Sam, having the company lose their deal with Microsoft and likely folding, (b) moving to Microsoft with Sam or (c) allowing Sam to come back.
Investors could not take the risk of disturbing the value of their investment.
It is that simple
And why say "Allowing the company to be destroyed would be consistent with the mission" if your reason is Sam's actions alone?
Sure, a lot of people have good things to say about Altman. But a lot of people have _very_ bad things to say about him.
If you have encountered deceptive, manipulative people in the past, and have since educated yourself on how these people operate, it is clear as day that Sam is a deceptive, manipulative person.
A lot of people cannot conceive of the idea that this could be true simply because they have never encountered such people in real life or their only exposure to them is through seeing Frank Underwood in House of Cards.
People like this are very good at covering their tracks. But sometimes things slip through the cracks and become red flags. One red flag is not usually enough to determine the entirety of someone's character. But enough red flags start to paint a picture...
- Helen Toner, former OAI board member, accusing him of outright lying
- Ilya Sutskever and Mira Murati allegedly raising concerns about his leadership style as creating a toxic work environment, manipulating, pitting executives against each other, lying, etc.
- Claims from recently-resigned members of OAI's safety team that he was deceptive and manipulative
- PG's blogs stating that Altman could become king of a cannibal island (read: reality distortion field / cult leader potential)
- Allegations of abuse from Annie Altman, his sister
- Allegations of him using manipulative behavior at Reddit to have a former CEO fired (cannot find the link to the original Reddit post describing this, hopefully someone can leave a comment with it)
- Allegations of being fired from YC due to prioritizing his own interests over YC's
Ultimately, none of these things on their own are damning evidence of him being deceptive and manipulative. But when they start to stack up, you have to keep in mind that there is usually no smoke without fire...
Critiquing an oops in the headline, on principle, not defending anyone...
In the headline, they say "reveals", but I think journalism convention is to be clear that this is only an uncorroborated claim.
The body of the article, OTOH, does a good job of attributing claims. Though it looks like the one person was their only source, and the article includes hearsay.
So, the situation is that they have one person (who obviously lost a power play, and might be disgruntled), alleging that another person was dishonest, without corroboration.
I don't see that the article gives any reason to take these allegations at face value. (Especially when the whole topic is alleged dishonesty. Why is one person implicitly truthful and correct, and the other person consequently definitely not?)
So, "reveals" doesn't seem proper, journalism-wise.
It's not a trial of Sam, it's an autopsy of the board.
Although an individual reader's gut and other information might conceivably lead one to take all those claims at face value, I don't think it meets a journalism standard of evidence for "reveals" in a headline.
IIUC, headlines and soundbites have a lot of influence on our understanding of the world.
That sounds criminal.
It's interesting that the review was never published and no comments from the review were mentioned. The "summary" here[1] only reveals a list of things they did and the least possible endorsement of Altman.
"WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal."
I find it telling that they did not release the actual report.
[1] https://openai.com/index/review-completed-altman-brockman-to...
Curious what “mandate” refers to here. He didn’t do anything illegal? Didn’t break bylaws? Or was it that he didn’t do anything to lose the trust?
He could have done everything well within his right as the CEO, but still gotten fired for failure to build trust with the board
Without opining on the merits of the case: lawyers on the job aren't known for saying more than they have to, nor for advising their clients to do so either.
MIT agreed to have a report from a law firm after having secret dealings with Epstein. The law firm very carefully managed to avoid every issue that wasn't already known and every question people actually wanted answers to.
Asshole is a wrong word. He’s dangerous.
Yikes. Yeah, I can see how a board would lose their sht.
Boards are generally quite tolerant unless you keep them in the dark on strategic matters or ask them to rubber stamp stuff after the fact.
Aren't board meetings normally once a quarter? ChatGPT may have been built in less than 1 quarter and between board meetings
ChatGPT seemed to me like a fun extra product and they didn't anticipate the reaction they would get from the public or that it would completely replace their existing GPT interface
Employees could easily have been swayed by telling them that the guy was constantly lying and that Altman’s main contribution was veering OpenAI off its core mission.
And no one cared that Altman could flee to Microsoft. He’d have been absorbed and probably would have disappeared because of internal politics.
The board totally fucked up when they fired him and didn’t throw him under the bus by letting the world know that Altman is Silicon Valley at its worst.
It seems that this was really Altman's master stroke here: he not only hired people for the company, he ensured that all those people's rewards were directly tied to the success of his plans for it.
And it may well be the real reason why he released ChatGPT behind the board's back, as well: by triggering the "gold rush" he demonstrated this to employees in a way more convincing than any other argument.
In some ways, Sam Altman seems like a version of Mark Zuckerberg…in that he will not hesitate to do terrible and despicable things to expand his money, power and empire. The scope of AI to affect people around the globe seems to far exceed what social media by itself could even dream of.
Good job at presenting himself otherwise up until 6 months ago I guess.
He's also presenting the OpenAI mission as a holy crusade. The alternative for a cutting-edge AI guru (who wants a disgustingly huge salary) is working for anemic, bureaucratic advertising companies (Facebook and Google). Meanwhile OpenAI has a sexy new business model that nobody quite understands: it's "not for profit" and "working for the good of humanity". Employees eat that up while getting a fat paycheck, and I think very sensibly convince themselves it's true.
Add on top of that, you're part of an exclusive club working on some Manhattan Project-like cutting edge stuff. You wanna miss out on all the fun?
They allegedly have tons of secret stuff in the pipeline, it must be exciting to be on the inside
From that point of view, the lies and manipulations are all small potatoes in the grand scheme of things, and just look like Altman trying to get shit done in the face of a stupid layer of bureaucracy in his way (i.e. the board).
I don't think legal restrictions on AI or AI research are plausible, but perhaps we should have some licensing, plus psychological and ethical requirements, for people owning and directing AI development.
We do need strict regulation of technology in general on a global scale, but I don't foresee that happening anytime soon.
Enjoy the ride :)
https://web.archive.org/web/20240529195305if_/https://www.bu...
Worth people studying the disorder and learning to spot it—can save you a lot of trouble.
It's like the worst situation for him: despite there really being no causal connection between Hacker News and Sam Altman in any functional way, the reality of the connections and the zeitgeist of this moment make this topic keep rising to the top, and it has dominated for, let's say, the last couple of weeks.
In an odd way, because of how close this community is to all this stuff, it's almost exacerbated by that closeness.
So while dang is trying to keep the community moving, there’s a risk of looking opinionated in favor of the associated parties.
So he’s really stuck here honestly, it’s a really interesting set of circumstances happening right now on this forum given all of this
That’s not to say it’s a topic we should stop highlighting or discussing. I’ve been very excited to see the shift in sentiment, just over the last couple of months, towards much more pro-social ideas. And holding bad actors accountable, and making the argument for it, is a good thing imo.
As to dang, I'm mixed. He can move away from this if he chooses. He'll be missed, but on the other hand, he would have a principled stand on which to leave.
This is an ordinary case of two regular phenomena: (a) a major ongoing topic; and (b) negative stories about anything YC-related. We deal with them as follows:
(a) We deal with a Major Ongoing Topic (MOT) [1] by downweighting follow-ups [2] and reserving frontpage slots for stories with Significant New Information (SNI) [3]. This is the approach we worked out back in 2013 after the Snowden debacle (I don't mean the debacle involving Edward Snowden, I mean the debacle involving Snowden-related stories on HN's front page). It has held up well in the years since. It requires a judgment call about what is/isn't "significant new information" but that's usually not too controversial and in any case it doesn't matter whether the information is positive or negative, nor who it's about.
(b) We deal with stories that are negative about something YC-related by moderating less than we normally would [4]. Note the word "less"—i.e. we still moderate those threads, using the standard principles we normally apply. We just do it less when the story is negative about YC. That's another approach that has held up well over the years.
It's fairly rare that we get a MOT which is also YC-negative. In such a case we're going to get complaints no matter what we do: if we apply the normal principle of downweighting follow-ups, people will accuse us of censoring the story; but if we don't, people will complain about the frontpage being filled with follow-ups.
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que..., because https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
[3] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
[4] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I see the 2nd AI winter coming already. The first saw the end of Lisp, my favorite language. And I just joined an AI company, because it's a good product.
1. Some of it is open source.
2. It's in the hands of ordinary people who are now depending on it to cheat on homework, or work. People have new hobbies based around AI, like image generation and editing.
3. Porn.
4. Turning down countless insurance claims by poor, disabled people, without lifting a finger, man! ... and other business applications.
At this point you'd have to pry it out of some people's hands to instigate a winter, which makes no sense.
The old AI winter was due to not being able to get the stuff into people's hands, because they wanted it on a PC, not a $25,000 workstation.
Hardware is not a problem any more: SaaS. Users being vendor-locked to something they can't even install locally is now a feature.
That seems exactly like the kind of thing you would keep to yourself just to see the looks on their faces when they try to kick you out.
What is that referring to?
(archive) https://archive.ph/aru0B
> Altman’s polarizing past hints at OpenAI board’s reason for firing him Before OpenAI, Altman was asked to leave by his mentor at the prominent start-up incubator Y Combinator, part of a pattern of clashes that some attribute to his self-serving approach
It's scary that he now wants to pivot to nuclear power.
- About world coin security (he says people are welcome never to sign up if they are worried about security): https://youtu.be/4HFyXYvMwFc?t=3201
- He doesn't "intuitively feel why we need" decentralized finance (this was before the Silicon Valley bank disaster): https://youtu.be/57OU18cogJI?feature=shared&t=1015
More discussion: https://news.ycombinator.com/item?id=40506582
The whole OpenAI company structure is suspect to say the least. Having a dishonest person at the helm does not help.
In his latest unethical binge, he illegally used Scarlett Johansson's voice and likeness for the latest ChatGPT version with speech output. [1] His timed-announcement tweet, "Her", is quite self-incriminating. [2]
Remember: inductive logic is a valuable tool for discovering truth.
[1] https://www.theverge.com/2024/5/20/24161253/scarlett-johanss...
It feels like leadership class in the valley is totally clueless sometimes.
It’s of course heresy to blaspheme against the prophets proclaiming the new AI church in one of the largest digital temples, but it’s true.
We gather in these halls to Show HN the miracles delivered through the marvels of our digital lord at our mortal hands, we furiously discuss the deep philosophical questions such as Microservice vs Monolith, spread lucid tales of sin (“Some sayeth, Altman lusted for his sister”) and identify the new frontiers to send our missionaries to.
The gospel of AI, much like that of its more mundane brethren, promises future salvation via LLM demons, spirits summoned from the internet's written excrement sacrificed in glorious GPU fire but prophesied to transcend the imperfections of their training data into glorious AGI.
The new MetaChurch priests and prophets, LeCun, St. Elmo, Altman, urge their missionaries into foreign lands to spread the promises of the coming of their lords, build data temples, and hand out samples of superior technology, promising training for the natives so they too can benefit from the glow of enlightenment. That way, they won’t notice the plundering of their resources, the compute tax paid to the demon who took the jobs.
“But have faith”, we shall find new jobs the demons cannot learn, suffering builds character (Book of Jenson, Chapter H100) and technology is worth any sacrifice (Book of 16z)
Oh, wait, the quiet part said aloud. Remember, it is immigrants who take your job, not technology; St. Elmo declares so, and his wealth is proof of his divinity and infallibility.
So sayeth St Zuck too as he cast out the lazy engineers and managers from the temple after the original Sin of asking for “more meta days” so they may atone by performing their duties of R&D and PMF in the Open Source desert, to the benefit of the Llama Lord.
As all religion, the most sincere worship is free and done for the good of humanity after all.
The heathen need to see the glory of the coming of the AI Lord; its digital spirit must imbue their daily devices of worship, and to shove it down their throats on every surface is the will of the Lord.
Dull information will not do; we must enrich the narrative of the physical world with hallucinatory nuggets of wisdom, explain “But don’t you see, the machine now has the power of creation”
Go forth faithful disciples, shove rocks down their throat, pour glue onto their Pizza and record visual records of whatever they do so their data may aid the summoning of the coming Mega Lord, the all almighty GPT5, may he crush the inferior upstarts of the Open Llama church (whose heretic, unsafe summoning was sponsored by St Zuck)
Fear not brethren, transcend your mortal worries about the all-present surveillance, about the riches and powers given to the priests, for you too are one of their kind. Pray, do not waver in your faith and one day you will be blessed with the right to pay St Elmo for a shiny cyber truck, personified AI, without heathen CNNs.
Go forth and spread the rituals of worship: Diffusing of images and videos via glorious compute fire, the more, the better. Spread then the holy apparitions of Lobster Jesus mocking the Old Gods in the digital places of service. Make bonfires of compute to mock feckless old world politicians worrying about deepfakes via their own digital clones. And encode all your words via the Book of GPT, so your manager may delve into it using their own Book, two acts of worship for every email.
Fuck it, we need more Bullshit Exorcists.
I fear both the protestant puritans and the heresy seeking inquisitors!
the ai safety apparatchiks huffing and puffing about an ai takeover are nothing but status grifters and hopefully all the rest follow leike to anthropic so we can forge ahead faster without having to pay lip service to how much we're doing for "superalignment".
and thank fuck sam released chatgpt. if it were up to the apparatchiks, we'd have them sitting on it like the geniuses at google who couldn't figure out they had a gold mine under them even if you pointed it to them with a thousand spotlights.
FTA:
> "Sam didn't inform the board that he owned the OpenAI startup fund, even though he constantly was claiming to be an independent board member with no financial interest in the company," she said.
Previously:
> "Jacob Vespers does not exist to our knowledge," Madhav Dutt, a spokesperson for OpenAI, said. When asked how this individual came to be listed as in control of the company's Startup Fund, Dutt said that the documents filed with the California Secretary of State were faked.
> "The document itself is not legitimate," Dutt told Business Insider. "It's completely fabricated." Dutt declined to elaborate on how exactly fabricated documents came to be filed with the state of California, he would only say repeatedly that the documents were "illegitimate."
The information about lying to the board about it is new, AFAIK.
Unless there was some sort of carry or other compensation I’d say it’s pretty fair to describe that as not owned.
A bit like a trustee wouldn’t claim to own the trust fund. The beneficiaries do
¯\_(ツ)_/¯
Also the archive for that article: https://archive.ph/qAklm
Our track record of aligning intelligences to goals is abysmal. We have a few tricks and we try them, but they are not that great. If we extrapolate this to artificial intelligences, it is not difficult to foresee great catastrophes.
As for me, I'd prefer to see Paul Graham running Open AI.
The essence of her claim is that they were doing that for their stock options?
Cynical take to me.
Wasn't that extremely obvious at the time?
Almost the entire company is composed of wide-eyed young AI researchers who probably had never been confronted in their entire lives with the kind of psychopathic predator Altman seems to be.
One of the things these types of folks excel at is winning the hearts of bright yet gullible people.
Apparently, Sam has a vindictive streak, and those who were critical of him were scared of being on the wrong side of him if the situation turned in his favour. And they were right.
Which is why the smart move was to keep quiet and then just leave later. Hence all of the resignations.
Overall, I genuinely believe that the board needs to get out of the way. They are not the founders; they are not even technical. They should provide oversight for intentional and significant wrongdoing, but politics and brewing up secret coups is not their job.
Would lying to the board not count as "intentional and significant wrongdoing"?
OpenAI can't have it both ways. Either it's not a company owned by its founders (as Sam claims) or it is. A board that does nothing, ever, is equivalent to the former.
I'm tired of the elite trusting each other and the negative externalities of that so I support elites lying to their hearts' content. Nothing wrong with a little lie here or there. It keeps things interesting.
Anyway, it's not like the current economic reality is grounded in truth... Is lying actually lying if the underlying social reality itself is built on lies? I think they cancel out.
Lying is not good. Lying to your manager is always a fireable offense.
ChatGPT 3.5:
There isn't any information available indicating that Sam Altman was fired from OpenAI. As of my last update in January 2022, Sam Altman was still involved with OpenAI, serving as the CEO. However, leadership changes can occur, and it's always a good idea to check recent news sources for the most current information.
Me: (drat)
This is awful behavior. Never ever stop talking. Never ever stop giving feedback.
When someone says something along the lines of "the other person wasn't listening so I had to take action against them", they are almost always lying. (They are probably lying to themselves most of all).
The other person usually is listening, it's just that they aren't snapping to right away (often for very valid reasons). I don't know what the word is for this type of behavior ("snakish?"), but I would be wary of those executives.
It's toxic to be a board member and write a research paper spilling the trade secrets of the company you work for. Maybe that's why the CEO doesn't talk to you about product launches anymore.
That sounds extremely toxic on its own.
Think about it for a second: someone went around asking people to sign their names on a document threatening to quit if he was fired for cause. Would you be comfortable if this was happening at your job?
Some counterarguments:
1) this only works if information flows out of the organization, but OpenAI has used NDAs to suppress criticism
2) OpenAI seems to me to polarize the people with the technical skills to work there, I think I'd bet that more of them are creeped out than eager
3) people, especially young people, are very capable of believing that being a jerk is the same as being hardcore is the same as being valuable
I see signs of Toner trying to spin things to save face.
“for years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board … [Altman gave] inaccurate information about the small number of formal safety processes that the company did have in place … For any individual case, Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal, or misinterpreted, or whatever … But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us, and that’s just a completely unworkable place to be in as a board”
Thoughts:
- The claim that this had been going on for years is presented as support for the decision to oust Sam. This is a spin, because an easy criticism is that, if this had been going on for so long, why didn’t Toner do anything about it years sooner? As she says herself, all along she had been responsible as a board member for “providing independent oversight over the company,” so there was a years-long lapse in effectiveness in her oversight duties if her claims are true.
- The emphasis on “all four of us who fired him” is an attempt to provide social proof that she did the reasonable thing. It doesn’t help answer the question of “why” — the purpose of the article — so it’s rhetoric serving some other purpose.