In practice, though, it's doable. You can create a new legal entity and move assets and/or conduct future value-creating activity in the new company. If everyone is on board with the plan on both sides of the move, it's achievable with enough lawyers and accountants.
The IRS isn’t stupid. The rules on what counts as taxable income and what the nonprofit can take tax-free have been around for decades.
https://www.propublica.org/article/how-the-irs-was-gutted (2018)
> An eight-year campaign to slash the agency’s budget has left it understaffed, hamstrung and operating with archaic equipment. The result: billions less to fund the government. That’s good news for corporations and the wealthy.
You sign an exclusive, irrevocable licensing agreement. Ownership of the original IP remains 100% with the original startup.
Now, this only works if the non-profit's board is on board.
I'm wondering if OpenAI's charter might provide a useful legal angle. The charter states:
>OpenAI’s mission is to ensure that [AGI ...] benefits all of humanity.
>...
>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
>...
>We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. [...]
>...
I'm no expert here, but to me this charter doesn't describe OpenAI's behavior as of 2024. Safety people have left, Sam has inexplicably stopped discussing risks, and OpenAI seems focused on racing its competitors. My question: is the charter legally enforceable? And if so, could it make sense for someone to file an additional lawsuit? Or should we just wait and see how the Musk lawsuit plays out?
I'm curious about the "fiduciary duty" part. As a member of humanity, I would appear to be owed a fiduciary duty by OpenAI. Does that give me standing? Suppose I claim that OpenAI compromises my safety (and thus my finances) by failing to discuss risks, having a poor safety culture (as illustrated by employee exits), and racing ahead. Would that fly?
I think the real issue Musk is complaining about is that sama is quickly becoming very wealthy and powerful, and Musk doesn't want any competition in this space.
Hopefully some people watching all this realize that many of the people running these big AI projects don't care about AI. Sam Altman is selling a dream about AGI to make himself wealthier and more powerful; Elon Musk is doing the same, whether with electric cars or with better AI.
People on HN are sincerely invested in the ideas behind these things, but it's important to recognize that the people pulling the strings largely don't care beyond how it benefits them. That's just one of the many reasons why, at least in AI, truly open-source efforts are essential for any real long-term progress.