> Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
One remarkable advantage of being a "Public Benefit Corporation" is that it:
> prevent[s] shareholders from using a drop in stock value as evidence for dismissal or a lawsuit against the corporation[1]
In my view, it is their own shareholders that the directors of OpenAI are insulating themselves against.
https://pdfernhout.net/on-funding-digital-public-works.html#...
"Consider this way of looking at the situation. A 501(c)3 non-profit creates a digital work which is potentially of great value to the public and of great value to others who would build on that product. They could put it on the internet at basically zero cost and let everyone have it effectively for free. Or instead, they could restrict access to that work to create an artificial scarcity by requiring people to pay for licenses before accessing the content or making derived works. If they do the latter and require money for access, the non-profit can perhaps create revenue to pay the employees of the non-profit. But since the staff probably participate in the decision making about such licensing (granted, under a board who may be all volunteer), isn't that latter choice still in a way really a form of "self-dealing" -- taking public property (the content) and using it for private gain? From that point of view, perhaps restricting access is not even legal?"
"Self-dealing might be clearer if the non-profit just got a grant, made the product, and then directly sold the work for a million dollars to Microsoft and put the money directly in the staff's pockets (who are also sometimes board members). Certainly if it was a piece of land being sold such a transaction might put people in jail. But because the content or software sales are small and generally to their mission's audience they are somehow deemed OK. The trademark-infringing non-profit-sheltered project I mention above is as I see it in large part just a way to convert some government supported PhD thesis work and ongoing R&D grants into ready cash for the developers. Such "spin-offs" are actually encouraged by most funders. And frankly if that group eventually sells their software to a movie company, say, for a million dollars, who will really bat an eyebrow or complain? (They already probably get most of their revenue from similar sales anyway -- but just one copy at a time.) But how is this really different from the self-dealing of just selling charitably-funded software directly to Microsoft and distributing a lump sum? Just because "art" is somehow involved, does this make everything all right? To be clear, I am not concerned that the developers get paid well for their work and based on technical accomplishments they probably deserve that (even if we do compete for funds in a way). What I am concerned about is the way that the proprietary process happens such that the public (including me) never gets full access to the results of the publicly-funded work (other than a few publications without substantial source)."
That said, charging to provide a service that costs money to supply (e.g. GPU compute) is not necessarily self-dealing. It is restricting the source code or using patents to create artificial scarcity around those services that could be seen that way.
This is originally from The Art of War.
"Open AI for-profit LLC will become a Public Benefit Corporation (PBC)"
followed by: "Profit cap is hereby removed" and finally "The Open AI non-profit will continue to control the PBC. We intend it to be a significant shareholder of the PBC."
Not only is there infinite incentive to compete, but there are decreasing costs to do so. The only world in which AGI is winner-take-all is a world in which it is so tightly controlled that the public can't query it.
The first-mover advantages of an AGI that can improve itself are theoretically insurmountable.
But OpenAI doesn't have a path to AGI any more than anyone else. (It's increasingly clear LLMs alone don't make the cut.) And the market for LLMs, non-general AI, is very much not winner takes all. In this announcement, OpenAI is basically acknowledging that it's not getting to self-improving AGI.
This has some baked assumptions about cycle time and improvement per cycle and whether there's a ceiling.
The most advanced tools are (and will continue to be) at a higher level of the stack, combining the leading models for different purposes to achieve results that no single provider can match using only their own models.
I see no reason to think this won't hold post-AGI (if that happens). AGI doesn't mean capabilities are uniform.
I wonder, do you have a hypothesis as to what would be a measurement that would differentiate AGI vs Not-AGI?
EDIT: There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".
Here is a mainstream opinion about why AGI is already here, written by one of the authors of the most widely read AI textbook, Artificial Intelligence: A Modern Approach: https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...
It does have some weasel words around "value-aligned" and "safety-conscious", which they can always argue, but this could get interesting because they've basically agreed not to compete. A fairly insane thing to do in retrospect.
That's always been pretty overtly the winner-take-all AGI scenario.
OpenAI is capturing most of the value in the space (generic LLM models), even though they have competitors who are beating them on price or capabilities.
I think OpenAI may be able to maintain this position at least for the medium term because of their name recognition/prominence and they are still a fast mover.
I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."
Well, Trump is interested in tariffing movies and South Korea took DeepSeek off mobile app stores, so they certainly may try. But for high-end tasks, DeepSeek R1 671B is available for download, so any company with a VPN to download it and the necessary GPUs or cloud credits can run it. And for consumers, DeepSeek R1's distilled models are available for download, so anyone with a (~4 year old or newer) Mac or gaming PC can run them.
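What "can run them" looks like in practice: a minimal local-inference sketch, assuming llama-cpp-python is installed and a quantized GGUF build of an R1 distill has already been downloaded (the filename below is illustrative):

```python
# Minimal sketch: run a distilled model locally on a Mac or gaming PC.
# Assumes `pip install llama-cpp-python` and a downloaded GGUF file (name illustrative).
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf",  # assumed local path
    n_ctx=4096,  # context window; raise it if you have RAM to spare
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain a capped-profit structure in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```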
If the only thing keeping these companies valuations so high is banning the competition, that's not a good sign for their long-term value. If you have to ban the competition, you can't be feeling good about what you're making.
For what it's worth, I think GPT o3 and o1, Gemini 2.5 Pro and Claude 3.7 Sonnet are good enough to compete. DeepSeek R1 is often the best option (due to cost) for tasks that it can handle, but there are times where one of the other models can achieve a task that it can't.
But if the US is looking to ban Chinese models, then that could suggest that maybe these models aren't good enough to raise the funding required for newer, significantly better (and more expensive) models. That, or they just want to stop as much money as possible from going to China. Banning the competition actually makes the problem worse though, as now these domestic companies have fewer competitors. But I somewhat doubt there's any coherent strategy as to what they ban, tariff, etc.
OpenAI loses billions and is at the mercy of getting new investors to fund the losses. It has many plausible competitors.
What do you consider an "LLM provider"? Is it a website where you interact with a language model by uploading text or images? That definition might become too broad too quickly. Hard to ban.
Their relationship with MS breaking down is a bad omen. I'm already seeing non-tech users who use "Copilot" because their spouse uses it at work, barely knowing it's rebadged GPT. You think they'll switch when MS replaces the backend with e.g. Anthropic? No chance.
MS, Google, Apple and Meta have gigantic levers to pull to get the whole world to abandon OpenAI. They've barely been pulling them, but it's a matter of time. People didn't use Siri and Bixby because they were crap. Once everyone's Android has a Gemini button that's just as good as GPT (which it already is; better, even, for anything besides image generation), people are going to start pressing it. And good luck to OpenAI fighting that.
Switching from ChatGPT to the many competitors is neither expensive nor painful.
Yeah; and:
We want to open source very capable models.
Seems like the lack of any daylight between DeepSeek R1, Sonnet 3.5, Gemini 2.5, & Grok 3 really put things in perspective for them!
We need to get closer to the norm and give shares of a for-profit to employees in order to create retention.
Please promise to come back to this comment in 2030 and playfully mock me for ever being worried and I will buy you a coffee. If AGI is invented before 2030 please buy me one and let me mock you playfully.
and that makes complete sense if you don't have a lay person's understanding of the tech. Language models were never going to bring about "AGI."
This is another nail in the coffin
Which sounds pretty in-line with the SV culture of putting profit above all else.
It will likely require research breakthroughs, significant hardware advancement, and anything from a few years to a few decades. But it's coming.
ChatGPT was released 2.5 years ago, and look at all the crazy progress that has been made in that time. That doesn't mean that the progress has to continue, we'll probably see a stall.
But AIs that are on a level with humans for many common tasks are not that far off.
There's a lot of literature on this, and if you've been in the industry for any amount of time since the 1950s, you have seen at least one AI winter.
Probably true, but this statement would be true if "when" is 2308, which would defeat the purpose of the statement. When the first cars started rolling around, some mates around the campfire were saying "not if but when" we'll have flying cars everywhere, and 100 years later (with amazing progress in car manufacturing) we are nowhere near. I think "when, not if" is one of those statements that, while probably indisputable in theory, is easily disputable in practice. Give me a "when" here and I'll put up $1,000 to a charity of your choice if you are right, and you agree to do the same if wrong.
It has taken tens to hundreds of billions of dollars, without equivalent economic justification (yet), to get here. I am not saying economic justification doesn't exist or won't come in the future, just that the upfront investment and risk are already on the order of magnitude of what the largest tech companies can expend.
If the next generation requires hundreds of billions or trillions [2] upfront and a very long time to make returns, no single company (or even country) could allocate that kind of resources.
There are many cases of such economically limited innovations [1]; nuclear fusion is the classic "always 20 years away" example. Another close one is anything space-related: we cannot replicate in the next 5 years what we already achieved 50 years ago, say landing on the moon.
From just an economic perspective it is definitely an "if", without even going into the technology challenges.
[1] Innovations in the cost of key components can reshape the economics equation; it does happen (as with SpaceX), but it is not guaranteed, as fusion shows.
[2] The next gen may not be close enough to AGI. AGI could require 2-3 more generations (and equivalent orders of magnitude of resources), which is something the world is unlikely to expend resources on even if it had them.
LLMs destroying any sort of capacity (and incentive) for the population to think pushes this further and further out each day
Most HN people are probably too young to remember that the nanotech post-scarcity singularity was right around the corner - just some research and engineering away - which was the widespread opinion in 1986 (yes, 1986). It was _just as dramatic_ as today's AGI.
That took 4-5 years to fall apart, and maybe a bit longer for the broader "nanotech is going to change everything" to fade. Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more, but it's not happening any time soon.
There are a ton of similarities between the nanotech singularity and the modern LLM-AGI situation. People point(ed) to "all the stuff happening": surely the singularity is on the horizon! Similarly, there was the apocalyptic scenario that got a ton of attention, with people latching onto "nanotech safety": instead of runaway AI or paperclip maximizers, it was Grey Goo (a term also coined in 1986).
The dynamics of the situation, the prognostications, and aggressive (delusional) timelines, etc. are all almost identical in a 1:1 way with the nanotech era.
I think we will have both AGI and general purpose universal constructors, but they are both no less than 50 years away, and probably more.
So many of the themes are identical that I'm wondering if it's a recurring kind of mass hysteria. Before nanotech, we were on the verge of genetic engineering (not _quite_ the same level of hype, but close, and pretty much the same failure to deliver on the hype as nanotech) and before that the crazy atomic age of nuclear everything.
Yes, yes, I know that this time is different and that AI is different, and that it won't be another round of "oops, this turned out to be very hard to make progress on and we're going to be in a very slow, multi-decade slow-improvement regime." But that has been the outcome of every example of this that I can think of.
We have zero evidence for this. (Folks said the same shit in the 80s.)
I want to believe, man.
― Sun Tzu
Quite the arc from the original organization.
The intersection of the two seems to be quite hard to find.
At the state we're in, the AIs we're building are just really useful input/output devices that respond to a stimulus (e.g., a "prompt"). No stimulus, no output.
This isn't a nuclear weapon. We're not going to accidentally create Skynet. The only thing it's going to go nuclear on is the market for jobs that are going to get automated in an economy that may not be ready for it.
If anything, the "danger" here is that AGI is going to be a printing press. A cotton gin. A horseless carriage -- all at the same time and then some, into a world that may not be ready for it economically.
Progress of technology should not be arbitrarily held back to protect automatable jobs, though. We need to adapt.
- Superintelligence poses an existential threat to humanity
- Predicting the future is famously difficult
- Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
- Even a 1-in-1000 existential threat would be extremely serious. If an asteroid had a 1-in-1000 chance of hitting Earth and obliterating humanity we should make serious contingency plans.
Second question: how confident are you that you're correct? Are you 99.9% sure? Confident enough to gamble billions of lives on your beliefs? There are almost no statements about the future which I'd assign this level of confidence to.
Any of the signatories here match your criteria? https://safe.ai/work/statement-on-ai-risk#signatories
Or if you’re talking more about everyday engineers working in the field, I suspect the people soldering vacuum tubes to the ENIAC would not necessarily have been the same people with the clearest vision for the future of the computer.
Does the current AI give productivity benefits to writing code? Probably. Do OpenAI engineers have exclusive access to more capable models that give them a greater productivity boost than others? Also probably.
If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...
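As a toy illustration of why that compounding matters (the percentages are the hypothetical ones above, not measurements):

```python
# Toy model: an exclusive productivity edge compounding across model generations.
edge = 1.0
for version, boost in [("1.0", 0.20), ("2.0", 0.25), ("3.0", 0.30)]:
    edge *= 1 + boost
    print(f"after v{version}: {edge:.2f}x everyone else's productivity")
# After v3.0 the insiders are at ~1.95x: modest per-version gaps compound into a large lead.
```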
The question eventually becomes, "is AGI technically possible"; is there anything special about meat that cannot be reproduced on silicon? We will find AGI someday, and more than likely that discovery will be aided by the current technologies. It's the path here that matters, not the specific iteration of generative LLM tech we happen to be sitting on in May 2025.
It was true before we allowed them to access external systems, disregarding a certain rule whose origin I forget.
The more general problem is a mix between the tragedy of the commons (we have better understanding every passing day, yet still don't understand exactly why LLMs perform that well, emergently rather than by being engineered that way) and future progress.
Do you think you can find a way around access boundaries to masquerade your Create/Update requests as Read in the log system monitoring it, when you have super intelligence?
LLMs are huge pretrained models. The economic benefit here is that you don't have to train your own text classification model anymore. (The LLM was likely already trained on whatever training set you could think of.)
That's a big time and effort saver, but no different from "AI" that we had decades prior. It's just more accessible to the normal person now.
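For instance, a task that once meant collecting labels and training a bespoke classifier is now a few lines against an off-the-shelf pretrained model. A sketch assuming the Hugging Face transformers library (model choice illustrative):

```python
# Zero-shot text classification with a pretrained model: no labeled dataset,
# no task-specific training. Assumes `pip install transformers` plus a backend like torch.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "OpenAI is restructuring its for-profit arm into a public benefit corporation.",
    candidate_labels=["business", "sports", "weather"],
)
print(result["labels"][0], round(result["scores"][0], 3))  # top label and its score
```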
So you don't mind if your economic value drops to zero, with all human labour replaced by machines?
Dependent on UBI, existing in a basic pod, eating rations of slop.
Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.
Now, is this mindset myopic to everything that most people have in their lived experience? Is it ethically bankrupt and held by people who'd sell their own mothers for a penny if they otherwise couldn't get that penny? Would those people be banished to a place beyond human contact for the rest of their existence by functioning organs of an even somewhat-sane society?
I don't know. I'm just asking questions.
It might not be a direct US-govt project like the Manhattan Project was, but it doesn't have to be. The government has the ties it needs with the heads of all these AI companies, and if it comes to it, the US-govt has the muscle and legal authority to seize control of it.
A good deal for everyone involved really. These companies get to make bank and technology that furthers their market dominance, the US-govt gets potentially "Manhattan project"-level pivotal technology— it's elites helping elites.
We can selectively ban uses without banning the technology wholesale; e.g., nuclear power generation is permitted, while nuclear weapons are strictly controlled.
The primary difference is the observability - with satellites we had some confidence that other nations respected treaties, or that they had enough reaction time for mutual destruction, but with this AI development we lack all that.
Mostly OpenAI and DeepMind and it stunk of 'pulling up the drawbridge behind them' and pivoting from actual harm to theoretical harm.
For a crowd supposedly entrenched in startups, it's amazing everyone here is so slow to recognise it's all funding pitches and contract bidding.
The "digital god" angle might explain why. For many, this has become a religious movement, a savior for an otherwise doomed economic system.
I'd love to believe there is more to life than the AI future, or that we as humans are destined to be perpetually happy and live meaningful lives. However, I currently don't see how our current levels of extreme prosperity are anything more than an evolutionary blip, even if we could make them last several millennia more.
But OpenAI isn't limited to creating LLMs. OpenAI's objective is not to create LLMs but to create artificial general intelligence that is better than humans at all intellectual tasks. Examples of such tasks include:
1. Designing nuclear weapons.
2. Designing and troubleshooting mining, materials processing, and energy production equipment.
3. Making money by investing in the stock market.
4. Discovering new physics and chemistry.
5. Designing and troubleshooting electronics such as GPUs.
6. Building better AI.
7. Cracking encryption.
8. Finding security flaws in computer software.
9. Understanding the published scientific literature.
10. Inferring unpublished discoveries of military significance from the published scientific literature.
11. Formulating military strategy.
Presumably you can see that a system capable of doing all these things can easily be used to produce an unlimited quantity of nuclear weapons, thus making it more powerful than any nuclear weapon.
If LLMs turn out not to be able to do those things better than humans, OpenAI will try other approaches, sooner or later. Maybe it'll turn out to be impossible, or much further off than expected, but that's not what OpenAI is claiming.
Sounds like payola for the enterprising and experienced mercenary.
Look forward to re-living that shift from life-changing community resource to scammy and user-hostile
Then the thought came, when will they start showing ads here.
I like to think that if we learn to pay for it directly, or the open source models get good enough, we could still enjoy that simplicity and focus for quite a while. Here’s hoping!
The $20 monthly payment is not enough though and companies like Google can keep giving away their AI for free till OpenAI is bankrupt.
Even if you take him at his word, incentives are hard to ignore (and advertising is a very powerful business model when your goal is to create something that reaches everyone)
1) The Pareto frontier of open LLMs will keep expanding. The breakneck pace of open research/development, combined with techniques like distillation will keep the best open LLMs pretty good, if not the best.
2) The cost of inference will keep going down as software and hardware are optimized. At the extreme, we're looking toward bit-quantized LLMs that run in RAM itself.
These two factors should mean a good open LLM alternative should always exist, one without ulterior motives. Now, will people be able to have the hardware to run it? Or will users just put up with ads to use the best LLM? The latter is likely, but you do have a choice.
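The back-of-envelope arithmetic behind point 2 (weights only; real deployments add KV-cache and activation overhead, and the 1.58-bit row is the ternary "BitNet"-style idea):

```python
# Rough weights-only memory footprint of a 70B-parameter model at several quantizations.
PARAMS = 70e9
for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4), ("ternary", 1.58)]:
    gib = PARAMS * bits / 8 / 2**30
    print(f"{name:>8}: ~{gib:.0f} GiB")
# fp16 needs ~130 GiB (datacenter territory); int4 fits in ~33 GiB of ordinary RAM.
```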
That step, along with getting politicians to pass it, is the only thing that will stop that outcome.
Decades ago I worked for a classical music company, fresh out of school. "So.. how do you anticipate where the music trend is going", I once naively asked one of the senior people on the product side. "Oh, we don't. We tell people really quietly, and they listen". They and the marketing team spent a lot of time doing very subtle work, easily as much as anything big like actual advertisements. Things like small little conversations with music journalists, just a dropped sentence or two that might be repeated in an article, or marginally influence an article; that another journalist might see and have an opinion on, or spark some other curiosity. It only takes a small push and it tends to spread across the industry. It's not a fast process, but when the product team is capable of road-mapping for a year or so in advance, a marketing team can do a lot to prepare things so the audience is ready.
LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle.
That doesn't mean it has to always be this way, though. Back when I had more trust in the present government and USPS, I mused on how much of a game changer it might be for the USPS to provide free hosting and e-mail to citizens, repurposing the glut of unused real estate into smaller edge compute providers. Everyone gets a web server and 5GB of storage, with 1A Protections letting them say and host whatever they like from their little Post Office Box. Everyone has an e-mail address tied to their real identity, with encryption and security for digital mail just like the law provides for physical mail. I still think the answer is about enabling more people to engage with the internet on their selective terms (including the option of disengagement), rather than the present psychological manipulation everyone engages in to keep us glued to our screens, tethered to our phones, and constantly uploading new data to advertisers and surveillance firms alike.
But the nostalgic view that the internet used to be different is just that: rose-tinted memories of a past that never really existed. The first step to fixing this mess is acknowledging its harm.
The Internet has changed a lot over the decades, and it genuinely used to be different, with the differences depending on how many years you go back.
When we already have efficient food production that drove down costs and increased profits (a good thing), what else is there for companies to optimize for, if not loading it with sugar, putting it in cheap plastic, bamboozling us with ads?
This same dynamic plays out in every industry. Markets are a great thing when the low hanging fruit hasn't been picked, because the low hanging fruit is usually "cut the waste, develop basic tech, be efficient". But eventually the low hanging fruit becomes "game human's primitive reward circuits".
It absolutely did. Steve Wozniak was real. Silicon Valley wasn't always a hive of liars and sycophants.
It was sparked by going to a video conference "Hyperlocal Heroes: Building Community Knowledge in the Digital Age" hosted by New_ Public: https://newpublic.org/ "Reimagine social media: We are researchers, engineers, designers, and community leaders working together to explore creating digital public spaces where people can thrive and connect."
A not-insignificant amount of time in that one-hour teleconference was spent related to funding models for local social media and local reporting.
Afterwards, I got to thinking. The USA spent literally trillions of dollars on the (so-many-problematical-things-about-it-I-better-stop-now) Iraq war. https://en.wikipedia.org/wiki/Financial_cost_of_the_Iraq_War "According to a Congressional Budget Office (CBO) report published in October 2007, the US wars in Iraq and Afghanistan could cost taxpayers a total of $2.4 trillion by 2017 including interest."
Or, from a different direction, the USA spends about US$200 billion per year on mostly-billboard-free roads: https://www.urban.org/policy-centers/cross-center-initiative... "In 2021, state and local governments provided three-quarters of highway and road funding ($154 billion) and federal transfers accounted for $52 billion (25 percent)."
That's about US$700 per person per year on US roads.
So, clearly huge amounts of money are available in the USA if enough people think something is important. Imagine if a similar amount of money went to funding exactly what you outlined -- a free web presence for distributed social media -- with an infrastructure funded by tax dollars instead of advertisements. Isn't a healthy social media system essential to 21st century online democracy with public town squares?
And frankly such a distributed social media ecosystem in the USA might be possible for at most a tenth of what roads cost, like perhaps US$70 per person per year (or US$20 billion per year)?
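(Checking that arithmetic against a US population of roughly 330 million:)

```python
# Sanity-checking the per-person figures above, assuming ~330 million people in the USA.
population = 330e6
road_spending = 154e9 + 52e9  # state/local + federal, per the Urban Institute numbers
print(road_spending / population)  # ~$624 per person per year, roughly the $700 cited
print(70 * population / 1e9)       # $70/person/year -> ~$23B/year, near the $20B estimate
```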
Yes, there are all sorts of privacy and free speech issues to work through -- but it is not like we don't have those all now with the advertiser-funded social media systems we have. So, it is not clear to me that such a system would be immensely worse than what we have.
But what do I know? :-) Here was a previous big government suggestion by me from 2010 -- also mostly ignored (until now, 15 years later, when the USA is in a political crisis over supply chain dependency and still isn't doing anything much related to it): "Build 21000 flexible fabrication facilities across the USA" https://web.archive.org/web/20100708160738/http://pcast.idea... "Being able to make things is an important part of prosperity, but that capability (and related confidence) has been slipping away in the USA. The USA needs more large neighborhood shops with a lot of flexible machine tools. The US government should fund the construction of 21,000 flexible fabrication facilities across the USA at a cost of US$50 billion, places where any American can go to learn about and use CNC equipment like mills and lathes and a variety of other advanced tools and processes including biotech ones. That is one for every town and county in the USA. These shops might be seen as public extensions of local schools, essentially turning the shops of public schools into more like a public library of tools. This project is essential to US national security, to provide a technologically literate populace who has learned about post-scarcity technology in a hands-on way. The greatest challenge our society faces right now is post-scarcity technology (like robots, AI, nanotech, biotech, etc.) in the hands of people still obsessed with fighting over scarcity (whether in big organizations or in small groups). This project would help educate our entire society about the potential of these technologies to produce abundance for all."
Google screamed against service revenue and advertising while building the world's largest advertising empire. Facebook screamed against misinformation and surveillance while enabling it on a global scale. Netflix screamed against the overpriced cable TV industry while turning streaming into modern overpriced cable television. Uber screamed against the entrenched taxi industry harming workers and passengers while creating an unregulated monster that harmed workers and passengers.
Altman and OpenAI are no different in this regard, loudly screaming against AI harming humanity while doing everything in their capacity to create AI tools that will knowingly harm humanity while enriching themselves.
If people trust the performance instead of the actions and their outcomes, then we can't convince them otherwise.
Condoning "honest liars" enables a whole other level of open and unrestricted criminality.
But once you control a significant enough chunk of money, it becomes clear the pie doesn't get any bigger the more shiny coins you have, you only have more relative purchasing power, automatically making everyone else poorer.
I have not seen anything from sama or pmarca that I would classify as “authoritarian”.
Tim Apple did it too, and we don't assume he's an authoritarian now, do we? I imagine they would probably have done similarly regardless of who won the election.
It sure seems like an endorsement, but I think it’s simply modern corporate strategy in the American regulatory environment, same as when foreign dignitaries stay in overpriced suites in the Trump hotel in DC.
Those who don’t kiss the ring are clearly and obviously punished. It’s not in the interest of your shareholders (or your launch partners) to be the tall poppy.
>Liberalism is a political and moral philosophy based on the rights of the individual, liberty, consent of the governed, political equality, the right to private property, and equality before the law. Liberals espouse various and often mutually conflicting views depending on their understanding of these principles but generally support private property, market economies, individual rights (including civil rights and human rights), liberal democracy, secularism, rule of law, economic and political freedom, freedom of speech, freedom of the press, freedom of assembly, and freedom of religion. Liberalism is frequently cited as the dominant ideology of modern history.
You mean, AGI will benefit all of humanity like War on Terror spread democracy?
Regardless of intent, it was most definitely sold to the American public on that premise.
Altman keeps on talking about AGI as if we're already there.
But reasonable people could argue that we've achieved AGI (not artificial super intelligence)
https://marginalrevolution.com/marginalrevolution/2025/04/o3...
Fwiw, Sam Altman will have already seen the next models they're planning to release
What it really says is that if a user wants to control the interaction and get the useful responses, direct programmatic calls to the API that control the system prompt are going to be needed. And who knows how much longer even that will be allowed? As ChatGPT reports,
> "OpenAI has updated the ChatGPT UI (especially in GPT-4-turbo and ChatGPT Plus environments) to no longer expose the full system prompt or baseline prompt directly."
Can some business person give us a summary on PBCs vs. alternative registrations?
(IANAL but run a PBC that uses this charter[1] and have written about it here[2] as part of our biennial reporting process.)
[1] https://github.com/OpenCoreVentures/ocv-public-benefit-compa...
[2] https://goauthentik.io/blog/2024-09-25-our-biennial-pbc-repo...
Theory: It allows the CEO to make decisions motivated not just by maximizing shareholder value but by some other social good. Of course, very few PBC CEOs choose to do that.
Personally, though, I think the conversation, including obviously in the post itself, has swung too far toward how AGI can or will potentially affect the ethical landscape around AI. I think we really ought to concern ourselves with addressing and mitigating the effects it has already brought - both good and bad - rather than engaging in excessive speculation.
That's just me, though.
If the entrenched giants (Google, Microsoft and Apple) catch up - and Google 100% has, if not surpassed - they have a thousand levers to pull and OpenAI is done for. Microsoft has realized this, hence why they're breaking up with them - Google and Anthropic have shown they don't need OpenAI. Galaxy phones will get a Gemini button, Chrome will get it built into the browser. MS can either develop their own thing, use open-source models, or just ask every frontier model provider (there are already 3-4 as we speak) how cheaply they're willing to deliver. Then chuck it right into the OS and Office first-class, which half the white-collar world spends their entire day staring at. Apple devices too will get an AI button (or gesture, given it's Apple), and just like MS they'll do it in-house or have the providers bid against each other.
The only way OpenAI's David was ever going to beat the Goliaths GMA in the long run was if it were near-impossible to catch up to them, à la TSMC/ASML. But they did catch up.
The wisest move in the chatbot business might be to wait and see if anyone discovers anything profitable before spending more effort and wasting more money on chat R&D, which includes most agentic stuff. Reliable assistants or something along those lines might be the next big breakthrough (if you ask certain futurologists), but the technology we have seems unsuitable for any provable reliability.
ML can be applied in a thousand ways other than LLMs, and many will positively impact our lives and create their own markets. But OpenAI is not in that business. I think the writing is on the wall, and Sama's vocal fry, "AGI is close," and humanity verification crypto coins are smoke and mirrors.
Personally, deep research and o3 have been transformative, taking LLMs from something I have never used to something that I am using daily.
Even if the progress ends up plateauing (which I do not believe will happen in the near term), behaviors are changing; OpenAI is capturing users, and taking them from companies like Google. Google may be able to fight back and win - Gemini 2.5 Pro is great - but any company sitting this out risks being unable to capture users back from Open AI at a later date.
Why? I paid for Claude for a while, but with Deepseek, Gemini and the free hits on Mistral, ChatGPT, Claude and Perplexity I'm not sure why I would now. This is anecdotal of course, but I'm very rarely unique in my behaviour. I think the best the subscription companies can hope for is that their subscribers don't realize that Deepseek and Gemini can basically do all you need for free.
If every major player has an AI option, I'm just not understanding how, because OpenAI moved first or got big first, the hugely successful companies that did the same thing for multiple decades don't have the same advantage.
Why are you still pretending anything is going to come out of this?
> taken it from a toy to genuinely insanely useful.
Really?
Most people in society connect AI directly to ChatGPT and hence OpenAI. And there has been a lot of progress in image generation, video generation, ...
So I think your timeline and views are slightly off.
GPT-2 was released in 2019, GPT-3 in 2020. I'd say 2020 is significant because that's when people seriously considered the Turing test passed reliably for the first time. But for the sake of this argument, it hardly matters what date years back we choose. There's been enough time since then to see the plateau.
> Most people in society connect AI directly to ChatGPT and hence OpenAI.
I'd double-check that assumption. Many people I've spoken to take a moment to remember that "AI" stands for artificial intelligence. Outside of tongue-in-cheek jokes, OpenAI has about 50% market share in LLMs, but you can't forget that Samsung makes AI washing machines, let alone all the purely fraudulent uses of the "AI" label.
> And there has been a lot of progress in image generation, video generation, ...
These are entirely different architectures from LLM/chat though. But you're right that OpenAI does that, too. When I said that they don't stray much from chat, I was thinking more about AlexNet and the broad applications of ML in general. But you're right, OpenAI also did/does diffusion, GANs, transformer vision.
This doesn't change my views much on chat being "not seeing the forest for the trees" though. In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
I'd say Chain-of-Thought has massively improved LLM output. Is that "incremental"? Why is that more incremental than the move from GPT-2 to GPT-3? Sure, you can say that this is when LLMs first passed some sort of Turing test, but fundamentally there was no technological difference from GPT-3 to GPT-4. In fact, I would say the quality of GPT-4 unlocked thousands (millions?) more use-cases that were not viable with the quality delivered by GPT-3. I don't see any reason why more use-cases won't keep being unlocked by LLM improvements.
There is little to no money to be made in GAI, it will never turn into AGI, and people like Altman know this, so now they’re looking for a greater fool before it is too late.
Why is the forum of an incubator that now has a portfolio that is like 80% AI so routinely bearish on AI? Is it a fear of irrelevance?
I don't think there is serious argument that LLMs won't generate tremendous value. The question is who will capture it. PCs generated massive value. But other than a handful of manufacturers and designers (namely, Apple, HP, Lenovo, Dell and ASUS), most PC builders went bankrupt. And out of the value generated by PCs in the world, the vast majority was captured by other businesses and consumers.
When the Internet was developed, they didn't imagine the World Wide Web.
When cars started to get popular, people still thought there would be those who would stick with horses.
I think you're right on the AI: we're just on the cusp of it, and it'll be a hundred times bigger than we can imagine.
Back when oil was discovered and started to be used, it was about equal to 500 laborers, now automated. One AI computer with some video cards is now worth X knowledge workers that never stop working as long as the electricity keeps flowing.
This makes me want to invest in malpractice lawyers, not OpenAI
Oh we know: https://pmc.ncbi.nlm.nih.gov/articles/PMC11006786/
AI isn't going to be the world-changing AGI that was sold to the public. Instead, it will simply be another B2B SaaS product. Useful, for sure. Even profitable for startups.
But "take over the world" good? Unlikely.
The world is changing and that is scary.
OpenAI models are already among the most expensive; they don't have a lot of levers to pull.
1: https://www.techpolicy.press/transcript-senate-judiciary-sub...
OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet.
Everybody else like you describe is trying to add some AI crap behind a button on a congested UI.
B2B market will stay open but OpenAI has certainly not peaked yet.
What network effect does OpenAI have? Far as I can tell, moving from OpenAI to Gemini or something else is easy. It’s not sticky at all. There’s no “my friends are primarily using OpenAI so I am too” or anything like that.
So again, I ask, what makes it sticky?
They have the brand recognition and consumer goodwill no other brand in AI has, incredibly so with school students, who will soon go into the professional world and bring that goodwill with them.
I think better models are enough to dethrone OpenAI in API, B2C and internal enterprise use cases, but OpenAI has consumer mindshare, and they're going to be the king of chatbots forever. Unless somebody else figures out something which is better by orders of magnitude and that Open AI can't copy quickly, it's going to stay that way.
Apple had the opportunity to do something really great here. With Siri's deep device integration on one hand and Apple's willingness to force 3rd-party devs to do the right thing for users on the other, they could have had a compelling product that nobody else could copy, but it seems like they're not willing to go that route, mostly for privacy, antitrust and internal competency reasons, in that order. Google is on the right track and might get something similar (although not as polished as typical Apple) done, but Android's mindshare among tech-savvy consumers isn't great enough for it to get traction.
Facebook wasn't some startup when Google+ entered the scene; they were already cash flow positive, and had roughly 30% ads market share.
OpenAI is still operating at a loss despite having 50+% of the chatbot "market". There is no easy path to victory for them here.
If you look at Gemini, I know people using it daily.
Consumer brand companies such as Coca Cola and Pepsi spend millions on brand awareness advertising just to be the “default” in everyone’s heads. When there’s not much consequence choosing one option over another, the one you’ve heard of is all that matters
My impression is that Claude is a lot more popular – and it’s the one I use myself, though as someone else said the vast majority of people, even in software engineering, don’t use AI often at all.
OpenAI has like 10 to 20% market share [1][2]. They're also an American company whose CEO got on stage with an increasingly-hated world leader. There is no universe in which they keep equal access to the world's largest economies.
[1] https://iot-analytics.com/leading-generative-ai-companies/
[2] https://www.enterpriseappstoday.com/stats/openai-statistics....
LLMs themselves aren't the moat, product integration is. Google, Apple and Microsoft already have the huge user bases and platforms with a big surface area covering a good chunk of our daily life, that's why I think they're better positioned if models become a commodity. OpenAI has the lead now, but distribution is way more powerful in the long run.
This moat is non-existent when it comes to Open AI.
All dissidents went into Little Wadiya.
When the Dictator himself visited it, he faked his name by copying the signs and names he saw on the walls. Everyone knew who he was.
Internet social networks are like that.
Now, this moat thing. That's hilarious.
And nobody's saying OpenAI will go bankrupt, they'll certainly continue to be a huge player in this space. But their astronomical valuation was based on the initial impression that they were the only game in town, and it will come down now that that's no longer true. Hence why Altman wants to cash out ASAP.
For example, I'd never suggest that e.g. MS could take on TikTok, despite all the levers they can pull, and being worth magnitudes more. No chance.
The names don't even matter when everything is baked in.
Slack? Zoom? Teams?
I'm sure you'd get a somewhat uniform distribution.
Ask the same today, and I'd bet most will say Teams. Why Teams? Because it comes with office / windows, so that's what most people will use.
Same logic goes for the AI / language models...which one are people going to use? The ones that are provided as "batteries included" in whatever software or platform they use the most. And for the vast majority of regular people / workers, it is going to be something by microsoft / google / whatever.
The fact that people know Coca-Cola doesn't mean they drink it.
That name recognition made Coca Cola into a very successful global corporation.
OpenAI trained GPT-4.1 and 4.5—both originally intended to be GPT-5 but they were considered disappointments, which is why they were named differently. Did they really believe that scaling the number of parameters would continue indefinitely without diminishing returns? Not only is there no moat, but there's also no reasonable path forward with this architecture for an actual breakthrough.
Market share of OpenAI is like 90%+.
Source? I've seen 10 to 20% [1][2].
[1] https://iot-analytics.com/leading-generative-ai-companies/
[2] https://www.enterpriseappstoday.com/stats/openai-statistics....
The only thing OpenAI has right now is the ChatGPT name, which has become THE word for modern LLMs among lay people.
Anecdotally, I've switched to Gemini as my daily driver for complex coding tasks. I prefer Claude's cleaner code, but it is less capable at difficult problems, and Anthropic's servers are unreliable.
No more caps on profit, a simpler structure to sell to investors, and Altman can finally get that 7% equity stake he's been eyeing. Not a bad outcome for him given the constraints apparently imposed on them by "the Attorney General of Delaware and the Attorney General of California".
Let's see how this plays out. PBC effectively means nothing - just take a look at xAI and its purchase of Twitter. I would love to hear the reasoning explaining how this ~33 billion USD move benefits the public.
There was never a coherent explanation of its firing the CEO.
But they could have stuck with that decision if they believed in it.
Then things went unexpectedly well, people were valuing them at billions of dollars, and they suddenly decided they weren't open any more. Suddenly they were all about Altman's Interests Safety (AI Safety for short).
The board tried to fulfil its obligation to get the nonprofit to do the things in its charter, and they were unsuccessful.
But they found themselves alone in that it turns out the employees (who were employed by the for-profit company) and investors (MSFT in particular) didn't care about the mission and wanted to follow the money instead.
So the board had no choice but to capitulate and leave.
Right; so, "Worker Unions" work.
This is already impossibly hard. Approximately zero people commenting would be able to win this battle in Sam’s shoes. What would they need to do to begin to have a chance? Rather than make all the obvious comments “bad evil man wants to get rich”, think what it would take to achieve the mission. What would you need to do in his shoes, aside from just give up and close up shop? Probably this, at the very least.
Edit: I don’t know the guy and many near YC do. So I accept there may be a lens I don’t have. But I’d rather discuss the problem, not the person.
Being rich results in a kind of limitation of scope for ambition. To the sufferer, a person who has everything they could want, there is no other objective worth having. They become eccentric and they pursue more money.
We should have enrichment facilities for these people where they play incremental games and don’t ruin the world like the paperclip maximizers they are.
The dude announces new initiatives from the White House, regularly briefs Senators and senior DoD leaders, and is the top get for interviews around the world for AI topics.
There’s a lot more to be ambitious about than just money.
Google/Anthropic are catching up, or already surpassed.
--Gordon Gekko
St. Altman plans to create a corporate god for us dumb schmucks, and he will be its prophet.
> Sam’s Letter to Employees.
> OpenAI is not a normal company and never will be.
Where did I hear something like that before...
> Founders' IPO Letter
> Google is not a conventional company. We do not intend to become one.
I wonder if it's intentional or perhaps some AI-assisted regurgitation prompted by "write me a successful letter to introduce a new corporate structure of a tech company".
OpenAI admitting that they're not going to win?
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
There is a lot to criticize about OpenAI and Sama, but this isn't it.
Whether they are a net positive or a net negative is arguable. If it's a net negative, then unleashing them to the masses was maybe the danger itself.
* The nonprofit is staying the same, and will continue to control the for-profit entity OpenAI created to raise capital
* The for-profit is changing from a capped-profit LLC to a PBC like Anthropic and Xai
* These changes have been at least tacitly agreed to by the attorneys general of California and Delaware
* The non-profit won’t be the largest shareholder in the PBC (likely Microsoft) but will retain control (super voting shares?)
* OpenAI thinks there will be multiple labs that achieve AGI, although possibly on different timelines
It's just that this bait has a shelf life and it looks like it's going to expire soon.
They already fight transparency in this space to prevent harmful bias. Why should I believe anything else they have to say if they refuse to take even small steps toward transparency and open auditing?
And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...
We know it's a sword. And there's war, yadda yadda. However, let's do the cultivating thing instead.
What other AI players we need to convince?
sam altman: "OpenAI is not a normal company and never will be."
Hmmm
More crucially, since OpenAI's founding and especially over the past 18 months, it's grown increasingly clear that AI leadership probably won't be dominated by one company, progress of "frontier models" is stalling while costs are spiraling, and 'Foom' AGI scenarios are highly unlikely anytime soon. It looks like this is going to be a much longer, slower slog than some hoped and others feared.
Then why is it paywalled? Why are you making/have made people across the world sift through the worst material on offer by the wide uncensored Internet to train your LLMs? Why do you have a for-profit LLC operating under a non-profit, or for that matter, a "Public Benefit Corporation" that has to answer to shareholders at all?
Related to that:
> or the needs for hundreds of billions of dollars of compute to train models and serve users.
How does that serve humanity? Redirecting billions of dollars to fancy autocomplete whose power demands strain already-struggling electrical grids and offset the gains of green energy worldwide?
> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
No, we thought your plagiarism machine was a disgusting abuse of the public square, and to be clear, this criticism would've been easily handled by simply requesting people opt-in to have their material used for AI training. But we all know why you didn't do that, don't we Sam.
> It will of course not be all used for good, but we trust humanity and think the good will outweigh the bad by orders of magnitude.
Well so far, we've got vulnerable, lonely people being scammed on Facebook, we've got companies charging subscriptions for people to sext their chatbots, we've got various states using it to target their opposition for military intervention, and the White House may have used it to draft the dumbest basis for a trade war in human history. Oh and fake therapists too.
When's the good kick in?
> We believe this is the best path forward—AGI should enable all of humanity^1 to benefit each other.
^1 who subscribe to our services
Because they're concerned about AI use the same way Google is concerned about your private data.
---
## *What Has Changed*
### 1. *OpenAI’s For-Profit Arm is Becoming a Public Benefit Corporation (PBC)*
* *Before:* OpenAI LP (limited partnership with a “capped-profit” model).
* *After:* OpenAI LP becomes a *Public Benefit Corporation* (PBC).
*Implications:*
* A PBC is still a *for-profit* entity, but legally required to balance shareholder value with a declared public mission.
* OpenAI’s mission (“AGI that benefits all humanity”) becomes part of the legal charter of the new PBC.
---
### 2. *The Nonprofit Remains in Control and Gains Equity*
* The *original OpenAI nonprofit* will *continue to control* the new PBC and will now also *hold equity* in it.
* The nonprofit will use this equity stake to fund “mission-aligned” initiatives in areas like health, education, etc.
*Implications:*
* This strengthens the nonprofit’s influence and potentially its resources.
* But the balance between nonprofit oversight and for-profit ambition becomes more delicate as stakes rise.
---
### 3. *Elimination of the “Capped-Profit” Structure*
* The old “capped-return” model (investors could only make ~100x on investments) is being dropped.
* Instead, OpenAI will now have a *“normal capital structure”* where everyone holds unrestricted equity.
*Implications:*
* This likely makes OpenAI more attractive to investors.
* However, it also increases the *incentive to prioritize commercial growth*, which could conflict with mission-first priorities.
---
## *Potential Negative Implications*
### 1. *Increased Commercial Pressure*
* Moving from a capped-profit model to unrestricted equity introduces *stronger financial incentives*.
* This could push the company toward *more aggressive monetization*, potentially compromising safety, openness, or alignment goals.
### 2. *Accountability Trade-offs*
* While the nonprofit “controls” the PBC, actual accountability and oversight may be limited if the nonprofit and PBC leadership overlap (as has been a concern before).
* Past board turmoil in late 2023 (Altman's temporary ousting) highlighted how difficult it is to hold leadership accountable under complex structures.
### 3. *Risk of “Mission Drift”*
* Over time, with more funding and commercial scale, *stakeholder interests* (e.g., major investors or partners like Microsoft) might influence product and policy decisions.
* Even with the mission enshrined in a PBC charter, *profit-driven pressures could subtly shape choices*, especially around safety disclosures, model releases, or regulatory lobbying.
---
## *What Remains the Same (According to the Letter)*
* OpenAI’s *mission* stays unchanged.
* The *nonprofit retains formal control*.
* There’s a stated commitment to safety, open access, and democratic use of AI.
Is OpenAI making a profit?
I've been feeling for some time now that we're sort of in the Vietnam War era of the tech industry.
I feel a strong urge to have more "ok, so where do we go from here?" and "what does a tech industry that promotes net good actually look like?" internal discourse in the community of practice, and some sort of ethical social contract for software engineering.
The open source movement has been fabulous and sometimes adjacent to or one aspect of these concerns, but really we need a movement for socially conscious and responsible software.
We need a tech counter-culture. We had one once, but now we need one.
But there are still plenty of mission-focused technology non-profits out there. Many of which have lasted decades. For example: Linux Foundation, Internet Archive, Mozilla, Wikimedia, Free Software Foundation, and Python Software Foundation.
Don't get me wrong, I'm also disappointed in the direction and actions of big tech, but I don't think it's fair to dismiss the non-profit foundations. They aren't worth a trillion dollars, however they are still doing good and important work.
This indicates that they didn't actually want the nonprofit to retain control and they're only doing it because they were forced to by threats of legal action.
Threats of legal action are among the only behavioral signals it can act on while staying in its mandate. Others include regulation and the market.
This is all operating as it was designed, by humans, multiple economic cycles ago.
So where do I vote? How do I become a candidate to be a representative or a delegate of voters? I assume every single human is eligible for both, as OpenAI serves humanity?
Edit: also apparently known as a contronym.
It generally means broadening access to something. Finance loves democratising access to stupid things, for example.
> word is a homonym of its antonym?
Inflammable in common use.
free (foss) -> non-profit -> capped-profit -> public benefits corporation -> (you guessed it)

1) You're successful.
2) You mess up checks-and-balances at the beginning.
OpenAI did both.
Personally, I think at some point, the AGs ought to take over and push it back into a non-profit format. OAI undermines the concept of a non-profit.
(1) be transparent about exactly which data was collected for the model
(2) release all the source code
If you want to benefit humanity, then put it under a strong copyleft license with no CLA. Simple.
Musk claimed fraud, but never asked for his money back in the brief. Could it be his intention was to limit OpenAI to donations, thereby sucking the oxygen out of the venture-capital space to fund xAI's Grok?
Musk claimed he donated $100 mil; later, in a CNBC interview, he said $50 mil. TechCrunch suggests it was way less.
Speaking of humanitarian, how about this 600-lb oxymoron in the room: a Boston University mathematician has now tracked an estimated 10,000 deaths linked to Musk's destruction of USAID programs, many of which provided basic health services to vulnerable populations. He may have a death count on his résumé in the coming year.
Non-profits have less regulation than publicly traded companies. Each quarterly filing is like a colonoscopy with Sarbanes-Oxley rules, etc. Non-profits just file a tax statement. Did you know the Church of Scientology is a non-profit?
He's a symptom of a problem. He's not actually the problem.
But to speak plainly, Musk is a complex figure, frequently problematic, and he often exacts a toll on the people around him. Part of this is attributable to his wealth, part to his particulars. When he goes into "demon mode", to use Walter Isaacson's phrase, you don't want to be in his way.
The newer version included sponsored products in its response. I thought that was quite effed up.
Key Structure Changes:
- Abandoning the "capped profit" model (which limited investor returns) in favor of traditional equity structure - Converting for-profit LLC to Public Benefit Corporation (PBC) - Nonprofit remains in control but also becomes a major shareholder
Reading Between the Lines:
1. Power Play: The "nonprofit control" messaging appears to be damage control following previous governance crises. Heavy emphasis on regulator involvement (CA/DE AGs) suggests this was likely not entirely voluntary.
2. Capital Structure Reality: They need "hundreds of billions to trillions" for compute. The capped-profit structure was clearly limiting their ability to raise capital at scale. This move enables unlimited upside for investors while maintaining the PR benefit of nonprofit oversight.
3. Governance Complexity: The "nonprofit controls PBC but is also major shareholder" structure creates interesting conflicts. Who controls the nonprofit? Who appoints its board? These details are conspicuously absent.
4. Competition Positioning: Multiple references to "democratic AI" vs "authoritarian AI" and "many great AGI companies" signal they're positioning against perceived centralized control (likely aimed at competitors).
Red Flags:
- Vague details about actual control mechanisms
- No specifics on nonprofit board composition or appointment process
- Heavy reliance on buzzwords ("democratic AI") without concrete governance details
- Unclear what specific powers the nonprofit retains besides shareholding
This reads like a classic Silicon Valley power consolidation dressed up in altruistic language - enabling massive capital raising while maintaining insider control through a nonprofit structure whose own governance remains opaque.
I was trying to put all the text into gpt4 to see what it thought, but the select all function is gone.
Some websites do that to protect their text IP, which would be crazy to me if that's what they did, considering how their AI is built. Ha