The head of OpenAI’s robotics effort has departed the company, a move that comes at a particularly charged moment as the artificial intelligence giant accelerates its push into defense work and physical-world applications. The exit raises pointed questions about the internal tensions simmering beneath OpenAI’s rapid commercial expansion — and about whether the company’s evolving identity is driving away some of the researchers who built its technical foundations.
Peter Welinder, who led OpenAI’s robotics initiatives, has left the organization, The Information reported. His departure was not announced with any fanfare; it surfaced quietly as OpenAI was finalizing a deal with the U.S. Department of Defense, a partnership that marks a significant philosophical shift for a company that once positioned itself as a cautious, safety-first nonprofit research lab.
Welinder’s exit is significant for several reasons. He wasn’t a marginal figure. As the person overseeing robotics — a domain OpenAI had previously explored and then shelved before reviving it — he sat at the intersection of the company’s most ambitious long-term bets. Robotics represents the bridge between large language models that generate text and code, and systems that can act on the physical world. Losing the person steering that effort, right as OpenAI is courting military applications, sends a signal. What that signal means depends on whom you ask.
OpenAI’s relationship with defense has been complicated from the start. The company’s original charter emphasized broadly distributed benefits and safety. For years, its policies explicitly prohibited military and warfare applications of its technology. That changed in January 2024, when OpenAI quietly updated its usage policies to remove the blanket ban on military use. The revision didn’t attract enormous attention at the time, but it laid the groundwork for everything that followed.
And what followed has been a rapid courtship with Washington.
OpenAI has been building relationships across the defense and intelligence establishment. The company hired former National Security Agency director Paul Nakasone to its board of directors in 2024, a move clearly designed to signal credibility and seriousness to government buyers. It has engaged with defense contractors and explored how its models could support logistics, planning, and intelligence analysis for the Pentagon. The new Department of Defense deal, details of which remain closely held, represents the most concrete manifestation of that effort yet.
For some inside OpenAI, this trajectory feels like a betrayal of the company’s founding principles. For others, it’s a pragmatic recognition that if American AI companies don’t work with the U.S. military, Chinese competitors will fill the vacuum. Both arguments have merit. Neither fully accounts for the human cost of the tension — the researchers and leaders who joined OpenAI under one set of assumptions and now find themselves working for a very different kind of organization.
Welinder’s background makes his departure particularly notable. He joined OpenAI years ago, when the organization was still genuinely operating as a nonprofit research institution focused on ensuring artificial general intelligence would benefit humanity. He was part of the team that originally explored robotics at OpenAI, work that produced impressive results — including a robotic hand that could solve a Rubik’s Cube — before the company disbanded the robotics team in 2021 to focus resources on large language models. When OpenAI later decided to revive its robotics ambitions, Welinder was tapped to lead the renewed push.
That revival wasn’t just nostalgia. The robotics market has exploded with new entrants and fresh capital. Companies like Figure AI, which has raised billions in funding, and established players like Boston Dynamics are racing to build humanoid robots powered by foundation models. Tesla’s Optimus project has Elon Musk’s full attention. Chinese firms including Unitree Robotics are shipping increasingly capable machines at aggressive price points. OpenAI recognized that its expertise in large multimodal models gave it a potential advantage in the brains that would power the next generation of physical robots. Losing the person charged with capturing that advantage is not trivial.
The timing also matters because of what’s happening in the broader AI-defense space. Anduril Industries, the defense technology company founded by Palmer Luckey, has been aggressively positioning itself as the connective tissue between Silicon Valley AI and Pentagon procurement. Scale AI, led by Alexandr Wang, has built a substantial government business around data labeling and AI evaluation for defense and intelligence agencies. Palantir Technologies, once considered a controversial outlier for its close government ties, now looks prescient. Its stock has surged as investors bet that AI-powered defense applications will be a massive growth market.
OpenAI entering this arena changes the competitive dynamics considerably. The company’s models — GPT-4o, the forthcoming GPT-5, and its reasoning-focused o-series — are among the most capable in the world. Applying them to defense use cases could give the Pentagon access to AI capabilities that far exceed what’s currently deployed. But it also means OpenAI is now competing for defense dollars alongside companies that have spent years building the specialized infrastructure, security clearances, and institutional knowledge required to work with the military. It’s not a market you waltz into.
The internal dynamics at OpenAI have been turbulent for well over a year. The boardroom crisis of November 2023, when CEO Sam Altman was briefly ousted and then reinstated, exposed fault lines between those who prioritized safety and deliberation and those who favored aggressive commercialization. Since Altman’s return, the commercialization camp has clearly won. OpenAI has converted from a nonprofit to a for-profit structure, raised over $40 billion in new funding at a valuation exceeding $300 billion, and launched a dizzying array of products and partnerships.
Several prominent safety-focused researchers have left in the aftermath. Jan Leike, who co-led OpenAI’s superalignment team, departed for Anthropic. Ilya Sutskever, OpenAI’s co-founder and former chief scientist who was instrumental in the board’s decision to fire Altman, left to start his own company, Safe Superintelligence Inc. The pattern is unmistakable: people who joined OpenAI to work on safety and long-term beneficial AI are finding fewer reasons to stay.
Whether Welinder’s departure fits neatly into that pattern isn’t entirely clear. Robotics leadership and safety research are different domains, and people leave companies for all sorts of reasons — compensation, burnout, better opportunities, personal circumstances. But the context makes it hard to view his exit in isolation. When the head of robotics leaves just as the company is signing defense contracts, the optics are unavoidable.
There’s also a strategic dimension worth examining. OpenAI’s robotics ambitions and its defense ambitions could be deeply intertwined. Military applications of robotics are among the most lucrative and technically demanding use cases in the field. Autonomous logistics vehicles, inspection drones, bomb disposal robots, and eventually humanoid systems for hazardous environments — these are all areas where the Defense Department is actively investing. If OpenAI’s robotics play was always partly about building toward defense applications, then losing the robotics lead could slow that pipeline.
Or it might not matter at all. OpenAI has demonstrated a remarkable ability to replace departing talent and maintain momentum. The company employs thousands of researchers and engineers, and its brand remains the most powerful recruiting tool in AI. Someone else will lead robotics. The defense deal will proceed. The machine keeps moving.
But the accumulation of departures tells a story that no single exit can. OpenAI is undergoing a fundamental identity transformation — from research lab to technology conglomerate, from nonprofit idealism to hard-nosed commercial ambition, from AI safety pioneer to defense contractor. Each of those transitions is defensible on its own terms. Taken together, they describe a company that has become something its founders might not recognize.
Sam Altman has been characteristically direct about the company’s direction. He has argued that OpenAI needs massive revenue to fund the compute required to build artificial general intelligence, and that working with governments — including on defense — is both appropriate and necessary. In public statements, he has framed the defense work as supporting democratic values and national security rather than enabling warfare. The distinction matters legally and ethically, but it’s a line that many employees and outside observers find blurry.
The financial pressures are real. OpenAI is burning through cash at an extraordinary rate. Its annualized revenue reportedly exceeded $5 billion in early 2025, but its costs — driven by compute, talent, and infrastructure — are immense. The company’s recent $40 billion funding round, led by SoftBank, came with expectations of continued hypergrowth. Government contracts, particularly defense contracts, offer the kind of large, recurring revenue streams that can help justify a $300 billion valuation. The incentives are powerful.
And the geopolitical argument has genuine force. China’s AI capabilities are advancing rapidly. Companies like DeepSeek have demonstrated that Chinese firms can produce competitive models at lower cost. The Chinese military is investing heavily in AI-powered systems. If the United States wants to maintain technological superiority in defense, it needs its best AI companies working on the problem. OpenAI is, by many measures, the best AI company in the world. The logic follows naturally.
Still, logic doesn’t resolve the moral and organizational questions. Every company that moves into defense work faces the same fundamental challenge: how to retain talent that signed up for a different mission. Google learned this painfully with Project Maven in 2018, when employee protests forced the company to abandon an AI contract with the Pentagon. The tech industry’s relationship with defense has warmed considerably since then — partly because the geopolitical environment has grown more threatening, partly because the money has grown too large to ignore. But individual employees still have choices, and some of them are choosing to leave.
Welinder’s next move hasn’t been publicly disclosed. He could join one of the many well-funded robotics startups. He could go to a competitor like Anthropic or Google DeepMind. He could start something new. Whatever he does, his departure from OpenAI is one more data point in a pattern that industry observers are watching closely.
The question now isn’t whether OpenAI will continue its push into defense and robotics. It will. The question is whether the company can execute on its sprawling ambitions — consumer products, enterprise software, robotics, defense, AGI research — without the kind of focused technical leadership that people like Welinder represented. OpenAI has bet that scale, capital, and brand can compensate for the steady erosion of its original brain trust. That bet is looking increasingly expensive to test.
