When OpenAI was founded in 2015, its mission was unambiguous: to ensure that artificial general intelligence benefits all of humanity — safely. That single word, tucked into the organization’s founding charter, served as a philosophical guardrail, a promise that the most powerful technology in human history would be developed with caution and public accountability at its core. Now, that word is gone.
In a move that has sent ripples through the artificial intelligence community, OpenAI has removed the word “safely” from its mission statement as part of a sweeping corporate restructuring that will transform the nonprofit research lab into a for-profit public benefit corporation. The change, while subtle in language, represents a seismic shift in the organization’s identity — one that raises urgent questions about whether the world’s most prominent AI company is prioritizing shareholder returns over societal safeguards.
From Nonprofit Idealism to For-Profit Pragmatism
As reported by The Conversation, OpenAI’s restructuring plan will see the company convert from its unusual hybrid structure — a nonprofit parent overseeing a capped-profit subsidiary — into a Delaware public benefit corporation (PBC). Under this new arrangement, the nonprofit arm will retain a minority stake in the for-profit entity, but the balance of power will shift decisively toward commercial interests. The company, which was recently valued at $300 billion following a massive funding round, is now firmly positioned as a technology juggernaut with obligations to investors that dwarf its original charitable mandate.
The restructuring has been months in the making, accelerated by OpenAI’s explosive growth following the launch of ChatGPT in late 2022. CEO Sam Altman, who was briefly ousted by the board and then reinstated in a dramatic November 2023 power struggle, has championed the transition as necessary to attract the capital required to build artificial general intelligence (AGI). OpenAI has raised over $40 billion from investors including Microsoft, SoftBank, and Thrive Capital, and the company argues that the nonprofit structure was never designed to support operations at this scale.
The Deletion That Speaks Volumes
But it is the quiet editorial change to the mission statement that has drawn the sharpest criticism. According to The Conversation, the removal of “safely” from OpenAI’s stated purpose is not merely cosmetic. It reflects a broader pattern in which the company has systematically dismantled the safety infrastructure that once defined it. Over the past two years, OpenAI has seen the departure of several high-profile safety researchers, including co-founder Ilya Sutskever, who left to start his own safety-focused AI lab, Safe Superintelligence Inc. Jan Leike, who co-led OpenAI’s superalignment team — a group dedicated to ensuring future AI systems remain under human control — also departed, publicly criticizing the company for allowing safety culture to take “a backseat to shiny products.”
The superalignment team itself was effectively dissolved in 2024, with its resources reportedly redirected toward product development. These departures and organizational changes suggest that the deletion of “safely” from the mission statement is less an oversight and more a codification of priorities that have already shifted in practice. As The Conversation noted, the word’s removal is “a test for whether AI serves society or shareholders.”
Delaware’s Public Benefit Corporation: Shield or Fig Leaf?
OpenAI’s defenders point to the public benefit corporation structure as evidence that the company remains committed to its broader social mission. Under Delaware law, a PBC is required to balance the interests of shareholders with those of stakeholders affected by the company’s conduct, including the public at large. The company has stated that its board will continue to consider the societal implications of its work, and that the nonprofit entity will maintain meaningful oversight through its equity stake.
However, legal scholars and governance experts have expressed skepticism about the enforceability of PBC obligations. As The Conversation detailed, the PBC framework has never been tested at the scale or stakes that OpenAI represents. Unlike traditional nonprofits, which are legally bound to serve their charitable purpose, a PBC’s social obligations are broadly defined and difficult to enforce through litigation. Directors have wide discretion in determining how to balance profit and purpose, and there is little precedent for courts intervening when a PBC prioritizes financial returns over public benefit.
The Broader Industry Is Watching
OpenAI’s transformation is unfolding against a backdrop of intensifying debate about AI safety and regulation. In recent months, multiple AI companies have faced scrutiny over their safety practices. Anthropic, founded by former OpenAI researchers Dario and Daniela Amodei, has positioned itself as a safety-first alternative, though it too has faced questions about the tension between rapid commercialization and careful development. Google DeepMind, Meta’s AI division, and a growing constellation of startups are all racing to develop increasingly powerful models, creating competitive pressures that make safety investments feel like a luxury rather than a necessity.
The regulatory environment remains fragmented. The European Union’s AI Act, which began phased implementation in 2024, represents the most comprehensive attempt to govern AI development, but its impact on U.S.-based companies like OpenAI remains uncertain. In the United States, federal AI legislation has stalled repeatedly, leaving governance largely to executive orders and voluntary industry commitments. California’s proposed SB 1047, which would have imposed safety requirements on frontier AI models, was vetoed by Governor Gavin Newsom in 2024 after intense lobbying from the tech industry — lobbying in which OpenAI played a notable role.
The Attorney General’s Scrutiny and Legal Challenges
OpenAI’s restructuring has not gone unchallenged. California Attorney General Rob Bonta has been reviewing the conversion, given that the original nonprofit’s assets were built with tax-exempt donations intended to support a charitable mission. The central legal question is whether the nonprofit is receiving fair value for its stake in the for-profit entity — a question complicated by OpenAI’s astronomical valuation and the difficulty of pricing assets that include some of the most advanced AI systems ever created.
Elon Musk, who co-founded OpenAI and has become one of its most vocal critics, has filed legal challenges arguing that the conversion betrays the organization’s founding principles. While Musk’s motivations are complicated by his own competing AI venture, xAI, his legal arguments have resonated with a broader community of researchers and ethicists who believe that OpenAI’s original nonprofit structure was a crucial check on the unconstrained pursuit of AGI. Courts have so far declined to block the restructuring, but the litigation continues to cast a shadow over the company’s plans.
What the Safety Community Fears Most
For AI safety researchers, the concern is not merely symbolic. The development of increasingly capable AI systems — models that can write code, conduct scientific research, and engage in complex reasoning — raises the stakes of getting safety wrong. OpenAI’s own research has acknowledged the potential for catastrophic risks from advanced AI, including the possibility of systems that pursue goals misaligned with human values or that concentrate power in dangerous ways.
The company’s internal safety frameworks, including its Preparedness Framework for evaluating the risks of new models before deployment, remain in place. But critics argue that these frameworks are only as strong as the institutional culture that supports them. When the mission statement itself no longer mentions safety, and when the organizational structure creates fiduciary obligations to investors who expect returns on tens of billions of dollars in capital, the incentives to cut corners become enormous. As multiple former employees have noted, safety testing takes time, and time is the one resource that a company in a fierce competitive race can least afford to spend.
A Defining Moment for the AI Era
OpenAI’s restructuring represents more than a corporate governance story. It is a test case for how humanity will manage the development of what many experts believe could be the most transformative — and potentially dangerous — technology ever created. The company that did more than any other to popularize the idea that AI safety should be a central concern is now, by its own admission, redefining what that commitment means in practice.
The deletion of a single word from a mission statement might seem like a minor editorial choice. But in the context of a company that controls some of the world’s most powerful AI systems, employs thousands of researchers, and commands hundreds of billions in capital, words matter enormously. They signal priorities, shape culture, and define the boundaries of acceptable behavior. When “safely” disappears from the mission, the question becomes: what fills the void?
For now, the answer appears to be growth, capital, and competitive ambition. Whether OpenAI’s new structure can preserve meaningful safety commitments while satisfying the demands of investors and the pressures of a global AI race will be one of the defining questions of the coming decade. The world — and the technology industry — will be watching closely to see whether a public benefit corporation can truly serve the public, or whether the benefit flows primarily to those who hold the shares.
