Anirban Ghoshal
Senior Writer

Amazon’s $5B Anthropic bet is really about compute, not just cash

news
Apr 21, 2026
5 mins

The investment underscores a shift from pure funding to securing long-term compute, as Anthropic looks to ease capacity constraints and lock in infrastructure amid intensifying demand for AI workloads, analysts say.

Amazon and Anthropic
Credit: JRides / Shutterstock.com

Amazon on Monday said it was investing an additional $5 billion in Anthropic, a move that analysts say is aimed as much at easing the AI startup’s growing infrastructure bottlenecks as at deepening their strategic partnership.

As part of the deal, Anthropic will lock in up to 5 gigawatts of compute capacity across AWS’s Trainium chips, including the new Trainium 3 and upcoming Trainium 4, the companies said in a joint statement.

“Right now, users see limits like throttling and session caps because Anthropic is running out of capacity and must ration usage to avoid crashes. This deal helps fix that,” said Pareekh Jain, principal analyst at Pareekh Consulting.

“Over time, the expanded capacity will let Anthropic support more users at once, build bigger models, and reduce these limits, especially for paid and enterprise users,” Jain added.

The analyst was referring to Anthropic’s throttling of usage across its Claude subscriptions, especially during peak demand hours, a move that coincided with other concerns, including complaints that Claude’s reasoning performance had degraded on complex tasks.
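For context on the mechanism Jain describes, usage throttling of this kind is commonly implemented with a token-bucket rate limiter. The Python sketch below is purely illustrative: the class, parameters, and limits are invented for the example and assume nothing about Anthropic’s actual implementation.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter (illustrative only; not Anthropic's
    actual mechanism). Each session holds a bucket that refills at a
    fixed rate; once it is empty, further requests are throttled."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity              # maximum burst size
        self.refill_per_sec = refill_per_sec  # sustained request rate
        self.tokens = capacity                # start with a full bucket
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # out of tokens: the request is throttled

# Usage: a session allowed a burst of 10 requests, refilling 1 per second.
bucket = TokenBucket(capacity=10, refill_per_sec=1.0)
for i in range(12):
    if not bucket.allow():
        print(f"request {i} throttled")  # fires once the burst is spent
```

Session caps and peak-hour throttling are variations on the same idea: spend less scarce compute per user whenever demand exceeds supply.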

Scaling compute capacity

A significant portion of Trainium 3 capacity is expected to come online this year, the companies added. Anthropic already uses Trainium 2 via AWS’s Project Rainier, a cluster of nearly half a million chips, to train and run its models.
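To put the 5-gigawatt commitment and the Project Rainier figure side by side, a back-of-envelope calculation helps. The per-chip power figure below is an assumption chosen for round numbers, not something reported by either company:

```python
# Back-of-envelope: roughly how many accelerators could 5 GW power?
# ASSUMPTION: ~1 kW per accelerator all-in (chip, host, cooling overhead).
# This is an illustrative round number, not a figure from the article.
total_power_w = 5e9               # the 5 GW of capacity Anthropic is locking in
assumed_w_per_chip = 1_000        # assumed all-in draw per accelerator
chips = total_power_w / assumed_w_per_chip
print(f"~{chips:,.0f} accelerators")  # ~5,000,000

# For scale: Project Rainier is reported as a cluster of nearly 500,000
# Trainium 2 chips, so 5 GW would be on the order of ten Rainier-sized
# clusters under this assumption.
```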

The agreement between Amazon and Anthropic also includes an expansion of inference capacity in Asia and Europe, which Jain said should improve Claude’s speed and reliability globally. Anthropic will also have the option to buy future generations of Trainium as they become available.

Anthropic, however, is not the only model provider racing to add compute capacity to train and run its models.

In February, rival OpenAI signed a deal with Amazon, Nvidia, and SoftBank to raise around $110 billion for infrastructure to expand its compute capacity.

As part of the arrangement, OpenAI has committed to consuming at least 2 GW of AWS Trainium-based compute tied to Amazon’s $50 billion investment, along with 3 GW of dedicated inference capacity from Nvidia under its separate $30 billion commitment.

From funding to supply chain financing

Deals such as these, analysts say, reflect a broader shift in how AI infrastructure is financed.

“Rather than simple cash-for-equity, these deals bundle equity investment with massive cloud-spend or GPU-spend commitments by locking in customers, securing capex returns, and validating infrastructure buildouts in a single transaction. This isn’t venture capital anymore; it’s supply chain financing,” Jain said.

The pattern in these deals is consistent across the ecosystem, Jain noted, citing Microsoft, Oracle, and Nvidia as examples.

“Microsoft invested tens of billions into OpenAI while simultaneously committing Azure capacity for training and inference, with OpenAI’s Azure spend now running at a multi-billion dollar annual rate,” Jain said.

“Oracle, too, signed a $30 billion cloud deal with OpenAI, then followed it with a staggering $300 billion five-year compute commitment starting in 2027. Nvidia took it further still with its $100 billion investment in OpenAI, which was paid in GPUs, not dollars — a model it replicated with xAI,” Jain added.

That framing, however, may miss a deeper shift, according to Greyhound Research chief analyst Sanchit Vir Gogia.

Such deals, Gogia said, are more about securing scarce compute supply ahead of competitors. “What capital does is improve your position. It allows you to commit earlier and at greater scale,” the analyst pointed out, adding that the real advantage lies in locking in infrastructure before others can.

On the flip side, long-term capacity commitments tend to anchor companies to specific providers, Gogia cautioned.

While model providers may operate across platforms and hyperscalers, their largest infrastructure commitments ultimately shape where they optimize workloads, build features, and direct spending, the analyst pointed out.

For Anthropic, the Amazon deal comes with equally significant long-term obligations. The company has committed to spending more than $100 billion on AWS over the next decade.

For Amazon, the $5 billion investment builds on its earlier $8 billion bet on Anthropic and could be followed by up to an additional $20 billion tied to commercial milestones, which were not disclosed.

Anthropic is also looking beyond AWS: the company recently said it plans to add capacity using Google’s TPUs, which are expected to come online next year.

This article originally appeared in NetworkWorld.