In a move that underscores the intensifying competition among frontier artificial intelligence companies for distribution and computing power, Anthropic has restructured its commercial arrangements with major cloud providers, offering them significantly more favorable revenue-sharing terms in exchange for deeper integration and expanded access to computing infrastructure. The shift, first reported by The Information, signals a new phase in the rapidly evolving relationship between AI model makers and the hyperscale cloud platforms that serve as their primary conduits to enterprise customers.
The San Francisco-based AI company, founded by former OpenAI executives Dario and Daniela Amodei, has been renegotiating deals with Amazon Web Services and Google Cloud, the two cloud giants that have collectively poured billions of dollars into the startup. Under the revised terms, Anthropic is reportedly offering these cloud providers a larger cut of the revenue generated when enterprise customers access its Claude family of models through their respective cloud marketplaces. The sweetened economics are designed to incentivize the cloud platforms to more aggressively promote and distribute Anthropic’s technology to their vast customer bases.
A Strategic Recalibration as AI Distribution Wars Heat Up
The renegotiated terms come at a pivotal moment for Anthropic and the broader AI industry. OpenAI, Anthropic’s chief rival, has been deepening its exclusive partnership with Microsoft Azure, while Google has been investing heavily in its own Gemini models. For Anthropic, which has positioned itself as a multi-cloud AI provider available through both AWS and Google Cloud, the challenge is ensuring that neither partner treats its models as an afterthought relative to their own proprietary AI offerings or those of competitors.
According to The Information, the revised deals reflect Anthropic’s recognition that cloud providers need stronger financial incentives to prioritize selling third-party AI models alongside — or even instead of — their own. Amazon, for instance, has been developing its own Nova family of AI models while simultaneously serving as Anthropic’s largest investor and distribution partner through AWS Bedrock. Google, meanwhile, continues to push Gemini across its Cloud platform even as it offers Claude through its Model Garden and Vertex AI marketplace.
The Economics of AI Model Distribution
The financial mechanics of these cloud marketplace deals are critical to understanding why Anthropic is willing to sacrifice margin for market share. When an enterprise customer uses Claude through AWS Bedrock or Google Cloud’s Vertex AI, the cloud provider typically takes a percentage of the revenue — a commission that covers not just distribution but also the underlying compute infrastructure, sales support, and billing integration. By increasing the cloud providers’ revenue share, Anthropic is effectively paying more for distribution, but the calculus appears to be that broader adoption will more than compensate for the reduced per-transaction economics.
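The tradeoff described above can be made concrete with a simple break-even calculation. The sketch below uses purely illustrative percentages (no actual revenue-share figures have been disclosed) to show how much marketplace volume would need to grow for a larger distributor cut to leave net revenue unchanged:

```python
# Hypothetical break-even sketch. All figures are illustrative assumptions,
# not reported terms of any Anthropic deal.

def required_volume_growth(old_share: float, new_share: float) -> float:
    """Fractional increase in marketplace volume needed to keep the model
    maker's net revenue flat when the cloud provider's cut rises from
    old_share to new_share (both expressed as fractions of gross revenue)."""
    old_net = 1.0 - old_share  # model maker's take per dollar, before
    new_net = 1.0 - new_share  # model maker's take per dollar, after
    return old_net / new_net - 1.0

# Example: if the cloud provider's cut rose from 20% to 30% of gross
# marketplace revenue, volume would need to grow by about 14.3% just
# to break even -- any growth beyond that is the upside of the bet.
growth = required_volume_growth(0.20, 0.30)
print(f"Volume must grow by {growth:.1%} to break even")
```

The point of the exercise is that the required growth is modest relative to the adoption gains a motivated cloud sales force could plausibly deliver, which is presumably the calculus behind accepting thinner per-transaction economics.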
This approach mirrors strategies common in enterprise software, where vendors routinely offer channel partners generous margins to drive sales volume. But in the AI model business, the stakes are considerably higher. The cloud providers are not merely passive resellers; they control the compute infrastructure that AI models require to run, they manage the enterprise relationships that determine which models get deployed, and they increasingly offer competing products. Anthropic’s willingness to share more revenue is an acknowledgment of this asymmetric power dynamic.
Amazon and Google: Investors, Partners, and Competitors
The complexity of Anthropic’s relationships with AWS and Google Cloud cannot be overstated. Amazon has committed up to $8 billion to Anthropic, making it the startup’s largest financial backer. Google, through its cloud division and parent company Alphabet, has invested approximately $2 billion. These investments came with commitments from Anthropic to make its models available on the respective cloud platforms and to use their computing infrastructure — arrangements that blur the traditional boundaries between investor, customer, supplier, and competitor.
For Amazon in particular, the Anthropic relationship has been central to its AI strategy. AWS CEO Matt Garman has repeatedly highlighted Claude’s availability on Bedrock as a key differentiator for enterprise customers evaluating cloud AI platforms. At AWS re:Invent 2024, Garman emphasized the importance of offering customers choice among frontier models, with Anthropic’s Claude positioned as a flagship third-party option. Yet Amazon’s simultaneous development of its own Nova models creates an inherent tension: every dollar of revenue that flows to Anthropic through Bedrock is a dollar that could theoretically have gone to Amazon’s own AI products.
Anthropic’s Revenue Trajectory and the Push for Scale
The sweetened cloud deals come as Anthropic has been on a steep revenue growth curve. The company reportedly reached an annualized revenue run rate exceeding $2 billion in early 2025, up from roughly $900 million at the end of 2024, according to previous reporting by The Information and other outlets. Much of this growth has been driven by enterprise adoption through cloud marketplaces, making the AWS and Google Cloud channels indispensable to Anthropic’s commercial trajectory.
Anthropic is also preparing for what could be one of the largest private funding rounds in venture capital history. The company has been in discussions to raise additional capital at a valuation that could approach or exceed $60 billion, according to multiple reports. At such lofty valuations, demonstrating sustained revenue growth and expanding enterprise market share becomes essential — and that requires keeping the cloud distribution channels firing on all cylinders. The willingness to offer better terms to cloud partners is thus not just a tactical sales decision but a strategic imperative tied to the company’s fundraising and valuation narrative.
Competitive Implications for OpenAI and the Broader Market
Anthropic’s move to sweeten cloud provider economics has implications that extend well beyond its own business. OpenAI, which generates the majority of its enterprise AI revenue through its exclusive partnership with Microsoft Azure, faces a different set of channel dynamics. Microsoft takes a significant share of OpenAI-related revenue flowing through Azure, and the two companies have been renegotiating the terms of their complex partnership. Recent reports have indicated that OpenAI has been seeking more flexibility to distribute its models through other channels, a reflection of the tension inherent in exclusive distribution arrangements.
For smaller AI model providers — companies like Mistral, Cohere, and Meta’s open-source Llama ecosystem — Anthropic’s willingness to offer cloud providers richer economics could raise the bar for what hyperscalers expect from third-party model makers. If Anthropic is offering a larger revenue share, cloud providers may demand similar or better terms from less established competitors, potentially squeezing margins across the industry and making it harder for smaller players to compete for prominent placement on cloud marketplaces.
The Broader Implications for Enterprise AI Adoption
From the perspective of enterprise customers, the restructured deals between Anthropic and the cloud providers could have tangible benefits. When cloud sales teams have stronger financial incentives to promote Claude, enterprises are more likely to receive proactive recommendations, technical support, and integration assistance for Anthropic’s models. This could accelerate adoption of Claude in industries ranging from financial services and healthcare to legal and government, where cloud marketplace procurement is often the preferred — and sometimes the only approved — method for acquiring AI capabilities.
The deals also reinforce a broader trend in enterprise AI: the cloud marketplace is becoming the dominant distribution channel for frontier AI models, much as app stores became the primary distribution mechanism for mobile software. Companies that fail to secure favorable positioning on these platforms risk being marginalized regardless of the technical quality of their models. Anthropic’s aggressive moves to lock in better cloud partnerships suggest that the company’s leadership understands this dynamic acutely and is willing to trade short-term margin for long-term strategic positioning.
What Comes Next in the AI Partnership Arms Race
As the AI industry matures, the relationships between model makers and cloud infrastructure providers will continue to evolve in complexity and consequence. Anthropic’s decision to offer sweeter terms to AWS and Google Cloud is a calculated bet that distribution dominance matters more than near-term profitability — a bet that only makes sense if the company believes its models will continue to be competitive at the frontier and that enterprise demand for AI will continue its exponential growth trajectory.
The coming months will reveal whether this approach pays off. If Anthropic can translate better cloud economics into meaningfully faster enterprise adoption, it will validate a model that other AI companies may be forced to emulate. If the sweetened terms fail to move the needle — or if cloud providers pocket the better margins without materially increasing their promotional efforts — Anthropic may find itself with thinner economics and little to show for the concession. Either way, the deal restructuring marks a significant moment in the ongoing negotiation between the companies building the most powerful AI systems and the platforms that control access to the world’s computing infrastructure and enterprise customers.