
Google details plans for 1 MW IT racks exploiting electric vehicle supply chain

(2025/05/01)


Google is planning for datacenter racks supporting 1 MW of IT hardware loads, plus the cooling infrastructure to cope, as AI processing continues to grow ever more energy intensive.

OK great, UK is building loads of AI datacenters. How are we going to power that? [1]READ MORE

At the [2]Open Compute Project (OCP) Summit in Dublin, Google discussed changes in server room tech that it touts as being critical to AI's continued ability to scale up, presumably to deliver ever larger and more complex models.

While the power consumption of a typical datacenter rack might fall somewhere between 5 kW and about 30 kW, the explosion in the use of servers stuffed with power-hungry GPU accelerators has seen this figure rise to 100 kW or more, with [3]Nvidia's DGX GB200 NVL72 system pushing 120 kW.

Now the cloud-and-search giant says that switching from the 48 volt direct current (VDC) power distribution previously championed by OCP to a +/-400 VDC system will allow its server rooms to support up to 1 MW per rack.
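
The arithmetic behind that jump is simple enough: current scales as I = P / V, so raising the distribution voltage slashes the current a 1 MW rack has to carry. The short Python sketch below is our own back-of-the-envelope illustration rather than anything from Google's post, and the voltage figures in it are assumptions for the sake of the example.

    # Back-of-the-envelope sketch (our illustration, not from Google's post):
    # busbar current needed to deliver a rack load at various distribution
    # voltages, using I = P / V and ignoring conversion losses.

    def busbar_current_amps(power_watts: float, voltage_volts: float) -> float:
        """Current (A) required to deliver power_watts at voltage_volts."""
        return power_watts / voltage_volts

    RACK_POWER_W = 1_000_000  # the 1 MW rack Google is planning for

    for label, volts in [
        ("48 VDC (legacy OCP rack distribution)", 48),
        ("+/-400 VDC (800 V between rails)", 800),
        ("11 kV AC feed (single-phase equivalent, purely illustrative)", 11_000),
    ]:
        print(f"{label:62s} ~{busbar_current_amps(RACK_POWER_W, volts):,.0f} A")

    # Prints roughly 20,833 A, 1,250 A and 91 A respectively - which is why
    # higher-voltage distribution keeps conductor sizes manageable at 1 MW.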

"This is about much more than simply increasing power delivery capacity - selecting 400 VDC as the nominal voltage allows us to leverage the supply chain established by electric vehicles (EVs), for greater economies of scale, more efficient manufacturing, and improved quality and scale," Google says in a [4]blog post authored by Principal Engineers Madhusudan Iyengar and Amber Huffman.


Also part of this vision is a disaggregation of the power components from the IT rack into a separate rack unit on the same row of a data hall. Google says this is a project known as [6]Mt Diablo, which it is working on with rival hyperscalers Meta and Microsoft, promising that a 0.5 draft release of the specifications will be available for industry perusal in May.



In practice, this will result in what the Chocolate Factory dubs a "sidecar" dedicated AC-to-DC power rack that feeds power to the other racks, the idea being to free up more space within each unit for servers stuffed with GPUs.

"Longer term, we are exploring directly distributing higher-voltage DC power within the datacenter and to the rack, for even greater power density and efficiency," the Google authors claim.


The Mountain View biz also says it is developing a fifth generation of its cooling tech, previously deployed as part of the cloud infrastructure running its Tensor Processing Units (TPUs) to accelerate machine learning workloads.

[10]Heat can make Li-Ion batteries explode. Or restore their capacity, say Chinese boffins

[11]Google datacenters in Nevada to go full steam ahead with geothermal energy

[12]Sustainability still not a high priority for datacenter industry

[13]US DoE wants developers to fast-track AI datacenters on its land

Google's cooling implementation is based on in-row coolant distribution units (CDUs), backed by uninterruptible power supplies (UPSes) for high availability.

The CDU supplies the server racks and is in turn connected to the data hall's wider distribution loop. Coolant is ultimately delivered via flexible hoses to cold plates attached directly to the high-power chips – a system familiar to many high-performance computing (HPC) shops.
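
To get a feel for what that plumbing involves, the required coolant flow follows from the heat equation Q = m_dot x c_p x delta_T. The sketch below is our own rough sense check, not something from Google's post, and assumes plain water with a 10 degree C temperature rise across the cold plates.

    # Illustrative sketch (our numbers, not Google's): coolant flow needed to
    # carry away a rack's heat load, from Q = m_dot * c_p * delta_T. Assumes
    # plain water and ignores pump heat, pressure drop and any air-cooled share.

    WATER_SPECIFIC_HEAT_J_PER_KG_K = 4186.0  # c_p of water
    WATER_DENSITY_KG_PER_LITRE = 1.0         # close enough for an estimate

    def coolant_flow_litres_per_min(heat_load_watts: float, delta_t_kelvin: float) -> float:
        """Water flow (litres/min) needed to absorb heat_load_watts with a delta_t_kelvin rise."""
        mass_flow_kg_per_s = heat_load_watts / (WATER_SPECIFIC_HEAT_J_PER_KG_K * delta_t_kelvin)
        return mass_flow_kg_per_s / WATER_DENSITY_KG_PER_LITRE * 60.0

    # A 1 MW rack with a 10 K rise across the cold plates needs on the order of
    # 1,400 litres of water a minute - comfortably water-main territory.
    print(f"{coolant_flow_litres_per_min(1_000_000, 10):,.0f} litres/min")  # ~1,433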

Google says its CDU architecture, named Project Deschutes, features redundant pump and heat exchanger units for greater reliability, and that this has allowed it to achieve a CDU availability of 99.999 percent since 2020.
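
For context, five nines is a budget of a little over five minutes of downtime a year - the quick check below is our arithmetic, not Google's.

    # Quick check (our arithmetic, not Google's) of what 99.999 percent
    # availability allows in downtime per year.

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    def allowed_downtime_minutes_per_year(availability: float) -> float:
        """Minutes of downtime per year permitted at a given availability fraction."""
        return (1.0 - availability) * MINUTES_PER_YEAR

    print(f"{allowed_downtime_minutes_per_year(0.99999):.2f} minutes/year")  # ~5.26 minutes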

The new kit is currently still in development, but the cloud-and-search firm says it will contribute the design to the OCP later this year, in a bid to help other companies adopt liquid cooling at scale. ®




[1] https://www.theregister.com/2025/04/10/uk_ai_energy_council_meets/

[2] https://www.opencompute.org/summit/emea-summit

[3] https://www.theregister.com/2024/03/21/nvidia_dgx_gb200_nvk72/

[4] https://cloud.google.com/blog/topics/systems/enabling-1-mw-it-racks-and-liquid-cooling-at-ocp-emea-summit/


[6] https://techcommunity.microsoft.com/blog/azureinfrastructureblog/mt-diablo---disaggregated-power-fueling-the-next-wave-of-ai-platforms/4268799




[10] https://www.theregister.com/2025/04/17/heat_repairs_li_ion_batteries/

[11] https://www.theregister.com/2024/06/13/google_geothermal_datacenters/

[12] https://www.theregister.com/2025/04/24/sustainability_still_not_a_high/

[13] https://www.theregister.com/2025/04/04/doe_ai_datacenters/




400VDC

Anonymous Coward

Apart from leveraging an existing mass-produced technology, I was wondering if there were particular engineering considerations behind 400 V and DC.

I always imagined higher voltages and kHz AC (ferrite transformer cores) would be more efficient.

Given the heat produced, I imagine the systems could be arranged in a circle venting into the centre and stacked vertically to form a chimney stack; the convective draw would be enough to keep the whole boiling cool. Keeping the rain and pigeons out wouldn't be a problem, as either would be vaporised or cooked if it managed to overcome the updraft to enter the space.

1 MW computing racks ....

Alan Mackenzie

Madness, sheer madness.

Re: 1 MW computing racks ....

ecofeco

It is of vital strategic national security and corporate duty that the details of which hand everyone uses to wipe their arse with be accurately recorded!

Three-phase

Peter Gathercole

I understand the thinking behind using the high voltage infrastructure designed for EV charging for delivering power to the racks, but it's not really new.

IBM Mainframes and the larger supercomputer variants of Power systems have had three-phase power to the rack for a long time. Generally they have what is called a bulk power unit installed at the top of the rack that converts the input to a lower voltage which is then distributed through the rack.

I no longer have the technical details to hand for the Power 7 775 systems that I used to look after, but these had hundreds of kW per rack, 10 years ago.

Re: Three-phase

cyberdemon

> hundreds of kW per rack, 10 years ago.

I must admit I was skeptical when I read that, but you're right! Wikipedia has a [1]blurry photo of a Power7 775 rack that claims to be 360 kW, using 400 VDC (350-520 VDC).

[1] https://upload.wikimedia.org/wikipedia/commons/b/b5/Blue_Waters_Rack_%285185721330%29.jpg

Fun stuff

rgjnk

"greater economies of scale, more efficient manufacturing, and improved quality and scale" is something you also get with all that 48V stuff that comes from telecoms and automotive already.

Going for 400V/800VDC as a next step just means you can continue to benefit from some of that scale.

Upside of going 400 VDC & up is you don't need quite such ridiculously oversized conductors & connectors, though they'll still be substantial at the power levels involved & might even need plenty of cooling too. Big downside is how solidly hazardous a DC supply at that voltage is in multiple ways, & that it will require some serious safety design & changes to working practices.

The other comical bit is going to be cooling the equipment at that power density; it's not exactly trivial. There are one or two fun examples from old compact supercomputer projects (taking an existing design and making it deskside sized for deployment into flying/floating use), and I've done testing myself with kit that needed ridiculously sized coolant feeds to extract the energy; it's amazingly easy to heat a full-flow, full-bore water main if you dump enough power into it. The cooling system will probably end up as a serious hazard in itself.

Maybe the better option is to accept density limits and not chase into the realm of silly engineering requirements and safety hazards?

Re: Fun stuff

cyberdemon

Yes er, 1.25 kA busbars instead of 21kA busbars .... madness indeed!

I wouldn't be surprised if they were using 11 kV AC input too. That would still be 90 amps per rack!
