Anthropic's Google Cloud Deal Includes 1 Million TPUs, 1 GW of Capacity In 2026 (cnbc.com)
(Thursday October 23, 2025 @11:30PM, BeauHD)
from the quiet-power-plays dept.)
- Reference: 0179859202
- News link: https://tech.slashdot.org/story/25/10/23/2054219/anthropics-google-cloud-deal-includes-1-million-tpus-1-gw-of-capacity-in-2026
- Source link: https://www.cnbc.com/2025/10/23/anthropic-google-cloud-deal-tpu.html
Google and Anthropic have [1]finalized a cloud partnership worth tens of billions of dollars, granting Anthropic access to up to one million of Google's Tensor Processing Units and more than a gigawatt of compute power by 2026. CNBC reports:
> Industry estimates peg the cost of a 1-gigawatt data center at around $50 billion, with roughly $35 billion of that typically allocated to chips. While competitors tout even loftier projections -- OpenAI's 33-gigawatt "Stargate" chief among them -- Anthropic's move is a quiet power play rooted in execution, not spectacle. Founded by former OpenAI researchers, the company has deliberately adopted a slower, steadier ethos, one that is efficient, diversified, and laser-focused on the enterprise market.
>
> A key to Anthropic's infrastructure strategy is its multi-cloud architecture. The company's Claude family of language models runs across Google's TPUs, Amazon's custom Trainium chips, and Nvidia's GPUs, with each platform assigned to specialized workloads like training, inference, and research. Google said the TPUs offer Anthropic "strong price-performance and efficiency." [...] Anthropic's ability to spread workloads across vendors lets it fine-tune for price, performance, and power constraints. According to a person familiar with the company's infrastructure strategy, every dollar of compute stretches further under this model than those locked into single-vendor architectures.
[1] https://www.cnbc.com/2025/10/23/anthropic-google-cloud-deal-tpu.html
Usage Limits (Score:2)
by crabboy.com ( 771982 )
Does this mean they can relax the usage limits next year?
just need to get fans to spin at 88MPH! (Score:3)