My understanding: OpenAI follows the same model Mozilla does. Since 2019, the nonprofit has owned a for-profit corporation called OpenAI Global, LLC, which pays taxes on any revenue that isn't directly in service of their mission (in a very narrow sense based on judicial precedent) [1]. In Mozilla's case that's the revenue they make from making Google the default search engine, and in OpenAI's case that's all their ChatGPT and API revenue. The vast majority (all?) of engineers work for the for-profit and always have. The vast majority (all?) of revenue goes through the for-profit, which pays taxes on that revenue minus the usual business deductions. The only money that goes to the nonprofit tax-free is donations. Everything else is taxed at least once at the for-profit corporation. Almost every nonprofit that raises revenue outside of donations has to be structured more or less this way to pay taxes. They don't get to just take any taxable revenue stream and declare it tax free.
All OpenAI is doing here is decoupling ownership of the for-profit entity from the nonprofit. They're allowing the for-profit to create more shares and distribute them to entities other than the nonprofit. Or am I completely misinformed?
[1] https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...
As you've realized, this should have been (and was) obvious for a long time. But that doesn't make it any less hypocritical or headline worthy.
The board of the non-profit fired Altman and then Altman (& MS) rebelled, retook control, & gutted the non-profit board. Then, they stacked the new non-profit board with Altman/MS loyalists and now they're discharging the non-profit.
It's entirely about control. The board has a legally enforceable duty to its charter. That charter is the problem Altman is solving.
In this case, Mozilla as a non-profit owning a for-profit manages to more or less fulfill the non-profit's mission (maintaining an open, alternative browser). OpenAI has been in a hurry to abandon its non-profit mission for a while, and the complex details of its structure don't change this.
Yes, but going from being controlled by a nonprofit to being controlled by a typical board of shareholders seems like a pretty big change to me.
All? As far as I know this is unprecedented.
[1] "The Mozilla Foundation has no members" https://hacktivis.me/articles/mozilla-foundation-has-no-memb...
Right now, OpenAI, Inc. (California non-profit, let's say the charity) is the sole controlling shareholder of OpenAI Global LLC (Delaware for-profit, let's say the company). So, just to start off with the big picture: the whole enterprise was ultimately under the sole control of the non-profit board, who in turn was obligated to operate in furtherance of "charitable public benefit". This is what the linked article means by "significant governance changes happening behind the scenes," which should hopefully convince you that I'm not making this part up.
To get really specific, this change would mean that they'd no longer be obligated to comply with these CA laws:
https://leginfo.legislature.ca.gov/faces/codes_displayText.x...
https://oag.ca.gov/system/files/media/registration-reporting...
And, a little less importantly, comply with the guidelines for "Public Charities" covered by federal code 501(c)(3) (https://www.law.cornell.edu/uscode/text/26/501) covered by this set of articles: https://www.irs.gov/charities-non-profits/charitable-organiz... . The important bits are:
The term charitable is used in its generally accepted legal sense and includes relief of the poor, the distressed, or the underprivileged; advancement of religion; advancement of education or science; erecting or maintaining public buildings, monuments, or works; lessening the burdens of government; lessening neighborhood tensions; eliminating prejudice and discrimination; defending human and civil rights secured by law; and combating community deterioration and juvenile delinquency.
... The organization must not be organized or operated for the benefit of private interests, and no part of a section 501(c)(3) organization's net earnings may inure to the benefit of any private shareholder or individual.
I'm personally dubious about the specific claims you made about revenue, but that's hard to find info on, and not the core issue. The core issue was that they were obligated (not just, like, promising) to direct all of their actions towards the public good, and they're abandoning that to instead profit a few shareholders, taking the fruit of their financial and social status with them. They've been making some money for some investors (or losses...), but the non-profit was, legally speaking, only allowed to permit that as a means to an end. Naturally, this makes it very hard to explain how the nonprofit could give up basically all of its control without breaking its obligations.
All the above covers "why does it feel unfair for a non-profit entity to gift its assets to a for-profit", but I'll briefly cover the more specific issue of "why does it feel unfair for OpenAI in particular to abandon their founding mission". The answer is simple: they explicitly warned us that for-profit pursuit of AGI is dangerous, potentially leading to catastrophic tragedies involving unrelated members of the global public. We're talking "mass casualty event"-level stuff here, and it's really troubling to see the exact same organization change their mind now that they're in a dominant position. Here are the relevant quotes from their founding documents:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact...
It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly. Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.
From their 2015 founding post: https://openai.com/index/introducing-openai/ We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity...
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
From their 2018 charter: https://web.archive.org/web/20230714043611/https://openai.co...

Sorry for the long reply, and I appreciate the polite + well-researched question! As you can probably guess, this move makes me a little offended and very anxious. For more, look at the posts from the leaders who quit in protest yesterday, namely their CTO.
I don't think that's true? A non-profit can sell products or services, it just can't pay out dividends.
The non-profit could maybe sell its assets to investors, but then what would it do with the money?
I'm sure OpenAI has an explanation, but I really want to hear more details. In the most simple analysis of "non-profit becomes for-profit", there's really no way to square it other than non-profit assets (generated through donations) just being handed to somebody for private ownership.
The biggest problem with this is that there's basically no chance that the sale price of the non-profit assets is going to be $150 billion, which means that whatever the gap is between the valuation of the assets and the valuation of the company is pure profit derived from the gutting of the non-profit.
If this is allowed, every startup founded from now on should rationally do the same thing. No taxes while growing, then convert to for profit right before you exit.
If that's how it works, why wouldn't you start every startup as a non-profit?
Investment is tax deductible, no tax on profits...
Then turn it into a for-profit if/when it becomes successful!
It's been many years, but the plan was essentially this:
* The original, non-profit would still exist
* A new, for-profit venture would be created, with the hospital having a board seat and 5% ownership. Can't remember the exact reason behind 5%. I think it was a threshold for certain things becoming a liability for the hospital as they'd be considered "active" owners above 5%. I think this was a healthcare specific issue and unlikely to affect non-profits in other fields.
* The for-profit venture would seek traditional VC funding, though the target investors were primarily in the healthcare space.
* As part of funding, the non-profit would grant exclusive, irrevocable rights to its IP to that for-profit venture.
* Everyone working for the "startup" would need to sign a new employment contract with the for-profit.
* Voilà! You've converted a non-profit into a for-profit business.
I'm fuzzy on a lot of details, but that was the high level architecture of the setup. It's one of those things where the lawyers earn a BOAT LOAD of money to make sure every technicality is accounted for, but everything is just a technicality. The practical outcome is you've converted a non-profit to a for-profit business.
Obviously, this can't happen without the non-profit's approval. From the outside, it seems that Sam has been working internally to align leadership and the board with this outcome.
-----
What will be interesting is how the employees are treated. These types of maneuvers are often an opportunity for companies to drop employees, renegotiate more favorable terms, and reset vesting schedules.
Every answer moving forward now will contain embedded ads for Sephora, or something completely unrelated to your prompt...
That money will go into the pockets of a small group of people that claim they own shares in the company... Then the company will pull more people in who invest in it, and they'll all get profits based on continually rising monthly membership fees, for an app that stole content from social media posts and historical documents others have written without issuing credit nor compensating them.
As long as the money doesn't go into someone's pocket, it's all good (except that Sam Altman is also getting equity but I assume they found a way to justify that.)
OpenAI will eventually be forced to convert from a public charity to a private foundation and will be forced to give away a certain percentage of their assets every year so this solves that problem also.
https://www.businessinsider.com/sam-altman-openai-note-more-...
OpenAI has been one of the most insane business stories in years. I can't wait to read a full book about it that isn't written by either Walter Isaacson or Michael Lewis.
List of crawlers for those who now want to block: https://platform.openai.com/docs/bots
https://github.com/ai-robots-txt/ai.robots.txt/blob/main/rob...
Cloudflare has a button for this:
https://blog.cloudflare.com/declaring-your-aindependence-blo...
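For anyone doing it by hand rather than through Cloudflare, here's a minimal robots.txt sketch. The user-agent tokens are taken from OpenAI's published bot list linked above; double-check the current tokens there before relying on this:

```
# Block OpenAI's training crawler site-wide
User-agent: GPTBot
Disallow: /

# ChatGPT-User handles user-initiated fetches; block it too if desired
User-agent: ChatGPT-User
Disallow: /
```

Note this is purely advisory: robots.txt only works against crawlers that choose to honor it.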
"If I read your book and I have a photographic memory and can recall any paragraph do I need to pay you a licensing fee?"
"If I go through your library and count all the times that 'the' is adjacent to 'end' do I need to get your permission to then tell that number to other people?"
But the board's lack of communication apparently allowed Altman to demonstrate he was more important to the organization than the formal/legal structure: 90% signed intents to quit, and the board backed down. It seemed that Altman simply represented the attitude of many Silicon Valley tech people: once you have a chance at money, don't hold back; do everything you can to make it.
Ironically the one person with resources fighting it in a tangible way, even if for spite, is Elon Musk.
I can see large copyright holders lining up with takedowns demanding they revise their originating datasets since there will now be a clear-cut commercial use without license.
I thought so for a moment, but then again Meta, Anthropic (I just checked and they have a "for profit and public benefit" status, whatever that means), Google, or that Musk's thing aren't non-profits, are they? There are lawsuits in motion for sure, but as it stands today I think AI gets off the hook.
Early hires, who were lured there by the mission?
Donors?
People who were supposed to be served by the non-profit (everyone)?
Some government regulator?
In a normal situation, the primary people with standing to prevent such a move would be the board members of the non-profit, which makes sense. Luckily for Sam, the employees helped kick out all the dissenters a long time ago.
Whether the case is any good is another matter.
Foundations and charitable organizations that publicly get their funding are a different story, but I'm talking about non-profit companies.
I even had one fellow say that the Green Bay Packers were less corrupt than the other for-profit NFL teams, which sounds ridiculous.
The NFL’s non-profit status is a farce though. Similarly, their misuse of copyright (“you cannot discuss this broadcast”) and the trademark “Super Bowl” (“cannot be used in factual statements regarding the actual Super Bowl”) should have their ownership of that ip revoked, if only because it causes massive confusion about the underlying law with a big chunk of the US population.
This claim he made was likely helpful in ensuring the OpenAI team’s willingness to bring him back after he was temporarily ousted by the board last year for alleged governance issues. (Basically: “don’t worry about me guys, I’m in this for the mission, not personal enrichment”)
Since his claim likely helped him get re-hired, he can’t claim it was immaterial.
I really hope someone from the SEC scrutinizes him someday. The Singularity is too important to let it be run by someone with questionable ethics.
Is it well played if you simply decide to lie brazenly? Anyone can win at monopoly if they decide to steal from the bank.
In practice it's doable, though. You can just create a new legal entity and move stuff and/or do future value-creating activity in the new co. If everyone is on board with the plan on both sides of the move, then it's totally doable with enough lawyers and accountants.
There's a lot of jurisprudence around preventing this sort of abuse of the non-profit concept.
The reason why the people involved are not on trial right now is a bit of a mystery to me, but could be a combination of:
* Still too soon, all of this really took shape in the past year or two.
* Only Musk has sued them, so far, and that happened last month.
* There's some favoritism from the government to the leading AI company in the world.
* There's some favoritism from the government to a big company from YC and Sam Altman.
I do believe Musk's lawsuit will go through. The last two points are worth less and less with time as AI is being commoditized. Dismantling OpenAI is actually a business strategy for many other players now. This is not good for OpenAI.
Interesting timing of the news, since Murati left today, gdb is 'inactive', and Sutskever has left to start his own company. Also seeing a few OpenAI folks announcing their future plans today on X/Twitter.
Although I guess it doesn't really matter. What if we all understood climate change earlier? It wouldn't really have made a difference anyway.
Altman was fucking with OpenAI for long before the board left in protest, since about the time Elon Musk had to leave due to Tesla's AI posing a conflict of interest. He got more and more brazen with the whole fake-altruism shit, up to and including contradicting every point in their mission statement and promise to investors in the "charity."
>Beethoven's reaction to Napoleon Bonaparte's declaration of himself as Emperor of France in May 1804 was to violently tear Napoleon's name out of the title page of his symphony, Bonaparte, and rename it Sinfonia Eroica
>Beethoven was furious and exclaimed that Napoleon was "a common mortal" who would "become a tyrant"
Sketchy.
This whole silicon valley attitude of fake effective altruism, "I do it for the good of humanity, not for the money (but I actually want a lot of money)" fake bullshit is so transparent and off-putting.
@sama, for the record - I am not saying making money is a bad thing. Labor and talent markets should be efficient. But when you pretend to be altruistic when you are obviously not, then you come off hypocritical instead of altruistic. Sell out.
I guess technically it's supposed to play some role in making sure OpenAI "benefits humanity". But as we've seen multiple times, whenever that goal clashes with the interests of investors, the latter wins out.
That entity will scrape the internet and train the models and claim that "it's just research" to be able to claim that all is fair-use.
At this point it's not even funny anymore.
As a moral fig leaf. They can always point to it when the press calls -- "see it is a non-profit".
A tale as old as time. Some of us could see it, from afar <says while scratching gray, dusty beard>. Lack of upvotes and excitement does not mean support, but how to account for that in these times? <goes away>
the well known scammer successfully scammed everyone twice. obviously he's keeping it around for the third (and fourth...) time
Going for-profit, and several top execs leaving at the same time? Before getting the money?
"""Question: why would key people leave an organization right before it was just about to develop AGI?" asked xAI developer Benjamin De Kraker in a post on X just after Murati's announcement. "This is kind of like quitting NASA months before the moon landing," he wrote in a reply. "Wouldn't you wanna stick around and be part of it?"""
https://arstechnica.com/information-technology/2024/09/opena...
Is this the beginning of the end for OpenAI?
Also because you know… the non-profit tried to strangle the for profit to take over control when they tried to oust Sam, so there’s that.
You can name your company "ThisProductWillCureYouFromCancer" and the FDA cannot do a thing about it if you put it on a bottle of herbal pills.
I wonder though whether Microsoft is still interested. The free Bing Copilot barely gets any resources and gives very bad answers now.
If the above theory is correct (big if!), perhaps Microsoft wants to pivot to the military space. That would be in line with idealist employees leaving or being fired.
Yes, I too can see how sama could end up as Microsoft’s CEO as a result of this
Seems more likely that OpenAI's biggest secret is that they have no secrets, and they are desperately trying to come up with a second act as tech companies with more robust product portfolios begin to catch up.
Hint: it won’t.
There are two main interpretations of what he's saying:
1) He sincerely believes that AGI is around the corner.
2) He sees that his research team is hitting a plateau of what is possible and is prepping for a very successful exit before the rest of the world notices the plateau.
Given his track record of honesty and the financial incentives involved, I know which interpretation I lean towards.
Personally I still believe he thinks that way (in contrast to what ~99% of HN believes) and that he does care deeply about potential existential (and other) risks of ASI. I would bet money/Manifoldbux that if he thought powerful AGI/ASI were anywhere near, he'd hit the brakes and initiate a massive safety overhaul.
I don't know why the promises to the safety team weren't kept (thus triggering their mass resignations), but I don't think it's something as silly as him becoming extremely power hungry or no longer believing there were risks or thinking the risks are acceptable. Perhaps he thought it wasn't the most rational and efficient use of capital at that time given current capabilities.
That seems like fraud to me.
I can't think of a single product or company that used the "open" word for something that was actually open in any meaningful way.
So I'm assuming the game plan here is to adjust the charter of the non-profit to basically say: we are still going to keep doing "Open AI" (we all know what that means), but funded by the proceeds from selling chunks of this for-profit entity. So in essence the non-profit parent fulfills its mission not by controlling what OpenAI does, but by how it puts to use the money it gets from OpenAI.
And in this process, Sam gets a chunk (as a payment for growing the assets of the non-profit, like a salary/bonus) and the rest as well....?
Basically, the plan was to create a new for-profit entity then have the not-for-profit license the existing IP to the for-profit. There were a lot of technicalities to it, but most of that was handled by lawyers drawing up the chartering paperwork.
Safe AI, altruistic AI, human-centric AI, are all dead. There is only money-generating AI. Fuck.
so much for sam "i have no equity" altman
I'm not surprised in the least.
Who is going to give billions to a non-profit with a bizarre structure where you don't actually own a part of it but have some "claim" with a capped profit? Can you imagine bringing that to Delaware courts if there was disagreement over the terms? Investors can risk it if it's a few million, but good luck convincing institutional investors to commit billions with that structure.
At that point you might as well just go with a standard for-profit model where ownership is clear, terms are standard and enforceable in court and people don't have to keep saying "explain how it works again?".
It's hard to say if there is much brand value left with "OpenAI" - lots of history, but lots of toxicity too.
At the end of the day they'll do as well as they are able to differentiate and sell their increasingly commoditized products, in a competitive landscape where they've got Meta able to give it away for free.
Aaron Burr raised capital for a fake water company and applied for a banking charter for what is now JP Morgan Chase
some people are playing by a more effective set of rules and others are being lied to from a young age
OpenAI is Microsoft's AI R&D spin-off and Microsoft means business.
The stinking peasants will never realize what's happening until it's too late to stop!
On March 1st, 2023, a warning was already sounding: OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (https://news.ycombinator.com/item?id=34979981)
https://www.msn.com/en-us/money/other/openai-to-become-for-p...
Text-only:
https://assets.msn.com/content/view/v2/Detail/en-in/AA1rcDWH
Which is why we need to reopen more asylums and bring back involuntary commitment.
but still, you'd think some of them would have finally had enough and have enough opportunities elsewhere that they can leave.
"openai is nothing without its people." well, the key people left. soon, it will just be sam and his sycophants.
I get strong "next Mark Zuckerberg" vibes from Sam. Build a zombie product that approaches worthlessness after a few years, but made himself hugely rich in the process, and buys off tech and people as needed to maintain some kind of relevance.
What happened to all the people making fun of Helen Toner for attempting to fire Sama? She and Ilya were right.
Sam Altman is a poison pill.
Why the h are they called "OpenAI" anyway? Nothing is open for them but your own wallet.
"Do I shock you? This is capitalism."
Yishan Wong describes a series of actions by Yishan and Sam Altman as a "con", and Sam jumps in to brag that it was "child's play for me" with a smiley face. :)
Reputationally... the net winner is Zuck. Way to go Meta (never thought I'd think this).
"OpenAI to remove non-profit control and give Sam Altman equity"
https://www.reuters.com/technology/artificial-intelligence/o...
> 57. OpenAI to Become For-Profit Company (wsj.com) 204 points by jspann 4 hours ago | flag | hide | 110 comments
/s
I'm shocked. Shocked!
I better stock up on ways of disrupting computational machinery and communications from a distance. They'll build SkyNet if it means more value for shareholders.