Nvidia scoffs at threat from Google TPUs after rumored Meta tie-up
(2025/11/26)
- Reference: 1764112597
- News link: https://www.theregister.co.uk/2025/11/25/nvidia_google_tpu_meta/
Growing demand for Google's homegrown AI accelerators appears to have gotten under Nvidia's skin amid reports that one of the GPU giant's most loyal customers may adopt the Chocolate Factory's tensor processing units (TPUs).
Nvidia's share price dipped on Tuesday following a [1]report by The Information that Meta was in talks to deploy Google's TPUs in its own datacenters beginning in 2027.
In response, Nvidia took to the social network formerly known as Twitter, where it offered Google a backhanded compliment on the successes of its TPUs.
"We're delighted by Google's success — they've made great advances in AI and we continue to supply to Google," Nvidia's Newsroom account [3]posted on X. "Nvidia is a generation ahead of the industry — it's the only platform that runs every AI model and does it everywhere computing is done. Nvidia offers greater performance, versatility, and fungibility than ASICs, which are designed for specific AI frameworks or functions."
As we've previously [6]reported , Google's seventh generation of TPUs, codenamed Ironwood, not only gives Nvidia Blackwell accelerators a run for their money, but can also scale far beyond the GPU giant's 72-GPU racks to pods containing anywhere from 256 to 9,216 chips. Nvidia's next-gen Vera Rubin accelerators are faster, but Google has scale on its side.
"We are experiencing accelerating demand for both our custom TPUs and Nvidia GPUs; we are committed to supporting both, as we have for years," Google told El Reg in a statement, conveniently avoiding the subject of Meta.
Wider adoption of Google's TPUs may, on paper, pose a threat to Nvidia's bottom line, but it's not clear whether Meta would – or even could – choose them over competing platforms.
For one, Google would need to break from convention and offer its TPUs for sale on the open market. Historically, the accelerators have only ever been available for lease through Google Cloud.
But even if Google did agree to sell its chips to Meta, Zuckercorp would still face significant integration challenges.
TPU deployments don't look remotely like the AMD- and Nvidia-based clusters Meta is used to. Rather than relying on packet switches to stitch together hundreds or thousands of GPUs into large scale-out compute fabrics, TPU pods use optical circuit switch (OCS) tech to link chips into large toroidal meshes.
We've discussed the [9]significance of OCS in the past, but the important bit is these appliances operate on completely different principles from packet switches, and often require a different programming model.
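To make the topology difference concrete, here's a toy sketch – not Google's actual interconnect code – of how chip neighbors wrap around in a 3D torus. Every chip gets the same fixed number of direct links regardless of pod size, which is what lets these meshes scale without the port-count limits of a packet-switched fabric:

```python
def torus_neighbors(coord, dims):
    """Return the direct neighbors of a chip at `coord` in a torus of
    shape `dims`. Each axis wraps around, so every chip has exactly
    2 * len(dims) neighbors no matter how large the pod grows."""
    neighbors = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % size  # wrap at the edges
            neighbors.append(tuple(n))
    return neighbors

# A 4x4x4 torus: 64 chips, each with 6 direct links.
print(torus_neighbors((0, 0, 0), (4, 4, 4)))
# [(3, 0, 0), (1, 0, 0), (0, 3, 0), (0, 1, 0), (0, 0, 3), (0, 0, 1)]
```

The wrap-around links are the reason traffic patterns (and hence the programming model) differ from a switched GPU cluster: nearest-neighbor communication is cheap, while arbitrary all-to-all traffic must hop across the mesh.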
The bigger challenge, however, is PyTorch, the deep learning library Meta developed to enable machine learning workloads to run seamlessly across CPU and GPU hardware. PyTorch can run on TPUs, but they don't support the framework natively, meaning Meta would need to employ a translation layer called PyTorch/XLA.
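The shape of that translation layer can be sketched as follows. This is a hedged illustration, not Meta's production code: PyTorch/XLA only resolves to a real TPU on a TPU host, so the snippet falls back to plain CPU execution when the `torch_xla` package isn't present:

```python
import torch

# PyTorch/XLA bridges PyTorch's eager API to the XLA compiler that
# TPUs speak natively. On a machine without torch_xla installed,
# fall back to CPU so the sketch still runs.
try:
    import torch_xla.core.xla_model as xm
    device = xm.xla_device()       # a TPU core on TPU hosts
except ImportError:
    device = torch.device("cpu")   # illustration-only fallback

model = torch.nn.Linear(4, 2).to(device)
out = model(torch.randn(8, 4, device=device))
print(tuple(out.shape))  # (8, 2)
```

The code looks like ordinary PyTorch, which is the point of the bridge – but under the hood XLA traces and compiles the graph lazily, and that mismatch with PyTorch's eager semantics is where the real porting effort tends to go.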
Given the army of software devs at big tech's disposal, Meta and Google could certainly overcome this challenge. But why would they care to? If the talks did take place as reported, the more likely scenario is that Meta was simply discussing inference optimizations targeting Google TPUs for its family of Llama models.
Running inference on a model requires an order of magnitude fewer compute resources than training one. Inference workloads also benefit from proximity to end users, which cuts down on latency and improves interactivity.
Historically, Meta has released its family of large language models (LLMs) to the public on repositories like Hugging Face, where customers can download and run them on any number of accelerators, including Google's TPUs. So Meta needs to ensure Llama runs well on TPUs so enterprises will adopt it. But if mere inference is the goal, Meta doesn't need to own the chips itself - enterprises can simply run Llama on a TPU leased directly from Google.
[10]Rent-a-GPU neoclouds need to adapt or die as the AI market evolves
[11]Google's Ironwood TPUs represent a bigger threat than Nvidia would have you believe
[12]Google imagines out of this world AI - running on orbital datacenters
[13]Nvidia pushes out hotfix after Windows 11 October update tanks gaming performance
Having said all of that, Google is indeed seeing greater interest in its TPU tech from rival model builders, including Anthropic. The Claude developer has been heavily reliant on custom Trainium AI accelerators from Amazon Web Services, but it's diversifying.
In October, Anthropic [14]announced plans to use up to a million TPUs to train and serve its next generation of Claude models. This is a lot less jarring than moving from GPUs - as we reported earlier this month, both Google's TPU and Amazon's Trainium use mesh topologies in their compute clusters, lowering the transition cost.
But Anthropic didn't stop there. Last week, Anthropic [15]announced a strategic partnership with Microsoft and Nvidia to purchase $30 billion worth of Azure compute capacity, and to contract additional compute capacity of up to one gigawatt. In exchange, Nvidia and Microsoft agreed to invest up to $10 billion and $5 billion respectively in the AI startup.
In other words, all the big AI players are hedging their bets and making alliances with everybody else.
The Register reached out to Meta for comment, but had not heard back at the time of publication. ®
[1] https://www.theinformation.com/articles/google-encroaches-nvidias-turf-new-ai-chip-push
[3] https://x.com/nvidianewsroom/status/1993364210948936055
[6] https://www.theregister.com/2025/11/06/googles_ironwood_tpus_ai/
[9] https://www.theregister.com/2024/03/25/coherent_optical_circuit_switch/
[10] https://www.theregister.com/2025/11/25/rentagpu_neoclouds_need_to_adapt/
[11] https://www.theregister.com/2025/11/06/googles_ironwood_tpus_ai/
[12] https://www.theregister.com/2025/11/04/google_takes_ai_aspirations_orbital/
[13] https://www.theregister.com/2025/11/20/nvidia_windows_11_hotfix/
[14] https://www.googlecloudpresscorner.com/2025-10-23-Anthropic-to-Expand-Use-of-Google-Cloud-TPUs-and-Services
[15] https://www.anthropic.com/news/microsoft-nvidia-anthropic-announce-strategic-partnerships