
Technology

Aug. 20, 2024

OpenAI's structure raises questions about protecting its charitable mission

Elon Musk, an original co-founder of OpenAI, alleges that the company has abandoned its mission and asserts claims for breach of contract, RICO violations, fraud, false advertising, and breach of fiduciary duty.

Rose Chan Loui

Rose Chan Loui is the founding executive director of the Lowell Milken Center on Philanthropy and Nonprofits at UCLA School of Law.


In the latest development in the OpenAI saga, Elon Musk has, for the second time, sued the group of entities known as OpenAI, this time in federal court in Northern California. Musk complains that OpenAI has abandoned its purpose of developing artificial intelligence that benefits humanity. Musk, an original co-founder of OpenAI, Inc., had filed a similar suit in California state court in February 2024, which he quietly dismissed in June 2024. The new complaint is more detailed, alleging RICO violations, fraud, breach of contract, false advertising, and breach of fiduciary duty.

The creator of ChatGPT, OpenAI became front-page news in the business and legal world when its board fired founder and CEO Sam Altman, reportedly for lack of candor in his communications with the board. After an outcry from investors, including Microsoft, and from OpenAI's own employees, Altman was returned to his CEO position, and the board was reconstituted with people perceived as friendlier to Altman. The ousting and return of Altman highlighted the fact that the parent company at the top of the OpenAI structure is a tax-exempt, nonprofit entity, with a board charged with protecting its charitable mission. While it is not uncommon for nonprofit organizations to own and profit from for-profit subsidiaries (e.g., Patagonia and Newman's Own), the value (reportedly $80-90 billion) of OpenAI's for-profit operations and the extent of external investment in those operations may be unparalleled, raising the question of whether OpenAI's carefully designed structure can adequately protect its charitable mission.

The parent entity, OpenAI, Inc. ("Nonprofit") is a tax-exempt, nonprofit company organized in Delaware in 2015. Per its certificate of incorporation, Nonprofit's purpose is to provide funding for research, development, and distribution of technology related to artificial intelligence. Nonprofit states in public filings that its goal is "to advance digital intelligence in a way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." Because it operates in California, Nonprofit is registered as a charity with, and subject to the oversight of, the California Attorney General.

In 2019, the board of Nonprofit determined that donations were insufficient to achieve the company's charitable purpose of developing AI. Having raised only $130 million of the $1 billion needed through philanthropy, the board sought a structure that would allow for private investment. According to the OpenAI website, the structure was expanded as follows: Nonprofit formed OpenAI, LP, to be owned by Nonprofit, employees, and other investors but governed by Nonprofit and operated in accordance with Nonprofit's charitable purposes. OpenAI, LP would have a subsidiary, OpenAI LLC ("For-Profit"), in which Microsoft would have a 40% profits interest. Under this structure, Microsoft has reportedly invested over $10 billion in OpenAI. According to Musk's complaint, the structure may now contain more entities than are described on the OpenAI website.

OpenAI's website describes various features of the investment structure that are intended to align investors' motives with Nonprofit's mission. First, Nonprofit wholly owns and controls the manager entity (OpenAI GP LLC) that controls and governs For-Profit. Second, Nonprofit's directors are required to perform their fiduciary duties in furtherance of Nonprofit's mission to produce safe AGI (artificial general intelligence). Third, the board remains majority independent, with "independent" defined as not holding equity in OpenAI. Fourth, profit allocated to investors and employees, including Microsoft, is capped, so "all residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity." Fifth, the board determines when OpenAI has attained AGI, which OpenAI defines as "a highly autonomous system that outperforms humans at most economically valuable work." Notably, OpenAI's commercial agreements with Microsoft apply only to pre-AGI technology.

From the nonprofit law perspective, one question is whether, with For-Profit valued at $80-90 billion and both employees and outside investors "invested" in that valuation, these features will succeed in protecting the charitable mission. First, while OpenAI boasts that Nonprofit's board of independent directors will protect the mission, OpenAI defines independence solely by the absence of equity ownership. Other economic interests, such as Microsoft's profits interest, do not count against independence. Neither do economic interests in For-Profit's business partners. Musk's complaint alleges that Altman has significant interests in various companies that have profitable business relationships with OpenAI. If that is true, Altman does not need a direct equity interest in For-Profit to have an interest in its profitability. Second, one of the board's key responsibilities is to determine the complex question of whether AGI has been achieved and to ensure that the path to AGI development is safe for humanity. There is no general agreement on how to define AGI, and the academic and public policy board members who arguably had the expertise to determine whether AGI was being developed safely, or had been achieved, have left. Further, key employees who have left (e.g., former chief scientist Ilya Sutskever) say OpenAI is prioritizing profit over safe development of AGI, indicating internal disagreement about OpenAI's commitment to its nonprofit mission. Moreover, no board is immune to pressure from its donors, or in this case, its investors. Because Microsoft's profits interest is limited to pre-AGI technology, the structure creates an incentive to delay public acknowledgment that AGI has been achieved. Third, although the "capped-profit" structure would seem to ensure that Nonprofit benefits from its ownership and control of For-Profit, some wonder whether a cap of 100 times one's investment is a cap at all. For context, Nvidia, a prominent AI stock, has risen roughly 30-fold over the last five years. Although some early-stage tech companies can do better than public companies like Nvidia, very few investments return 100 times their value, and in this case, Microsoft alone has invested over $10 billion. Further, we do not have insight into how much other outside investors have put into For-Profit, and as far as we know, there is no limit on how much additional investment OpenAI can accept.

There is certainly more to come in this saga. Altman has told some investors that OpenAI may become a for-profit benefit corporation (like rivals Anthropic and xAI), which would not be controlled by Nonprofit. That raises the issue of what Nonprofit would be entitled to in a conversion. Compared with the $80-90 billion valuation of For-Profit, Nonprofit showed assets of $19 million (consisting of cash, savings, and cash investments) in its 2022 filing with the California AG. As more transparency is gained, perhaps through investigations by the Federal Trade Commission (in collaboration with the Department of Justice) and the European Commission, as well as Musk's lawsuit, we may be better able to judge whether this nonprofit/for-profit structure will succeed in serving its lofty charitable purpose of developing artificial intelligence for the benefit of humanity.
