SC25 gets heavy with mega power and cooling solutions
(2025/11/20)
- Reference: 1763674702
- News link: https://www.theregister.co.uk/2025/11/20/heavy_industry_invades_sc25/
- Source link:
SC25: Hydrogen-fueled gas turbines, backup generators, and air handlers probably aren't the kinds of equipment you'd expect on the show floor of a supercomputing conference. But your expectations would be wrong.
At SC25, datacenter physical infrastructure took center stage with sprawling dioramas of evaporative cooling towers and coolant distribution units (CDUs) filling massive booths rivaling those of major chip vendors and OEMs like Nvidia, AMD, and HPE Cray.
Among the largest of these displays were those from Mitsubishi Heavy Industries and Danfoss, which are best known for building power plants and industrial-scale air handling and facility cooling equipment.
[1]
Located directly across from Nvidia's booth was datacenter physical infrastructure vendor Vertiv, which had mocked up a datacenter aisle to show how its power and cooling tech could support the highest-density deployments
These displays weren't tucked away at the back of the show floor, either. Nestled just a few feet from Nvidia's booth was Vertiv's, which had built out a full-scale mockup of a data hall to demonstrate how its liquid cooling and power delivery tech could support the deployment of the GPU giant's latest-gen rack systems.
HPC is becoming big business
The reason for this invasion of datacenter physical infrastructure and heavy industrial equipment is simple. The construction of AI datacenters — and yes, AI is an HPC workload whether you like it or not — has become big business, with power and cooling infrastructure emerging as a key bottleneck in the buildout of new high-density datacenters.
HPC practitioners are no strangers to dense liquid-cooled systems. HPE's Cray EX4000 systems can be specced with up to 512 AMD MI300A APUs, totaling more than 293 kW per cabinet. Lawrence Livermore National Laboratory's [2]El Capitan, the top-ranked system on the Top500 list of publicly known supercomputers, features 87 of these cabinets.
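For a rough sense of scale, those quoted figures can be turned into a back-of-envelope estimate. The sketch below is illustrative only: it covers compute cabinets alone, ignores storage, networking, and cooling overheads, and uses the per-cabinet and cabinet-count figures quoted above rather than vendor spec sheets.

    # Back-of-envelope estimate of El Capitan's compute-cabinet power,
    # using the figures quoted above (not vendor specifications).
    apus_per_cabinet = 512      # AMD MI300A APUs per Cray EX4000 cabinet
    kw_per_cabinet = 293        # quoted cabinet power, kW
    cabinets = 87               # El Capitan's cabinet count

    watts_per_apu = kw_per_cabinet * 1000 / apus_per_cabinet
    total_mw = kw_per_cabinet * cabinets / 1000
    print(f"~{watts_per_apu:.0f} W per APU")                          # ~572 W
    print(f"~{total_mw:.1f} MW across {cabinets} compute cabinets")   # ~25.5 MW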
But while El Capitan's 44,544 APUs or Aurora's 63,744 GPUs are certainly among the largest scientific instruments ever built, they pale in comparison to the AI superclusters being built for OpenAI in Abilene, Texas, or for Meta in Richland Parish, Louisiana.
When complete, OpenAI's first [6]Stargate datacenter will exceed 400,000 Nvidia GPUs consuming 1.2 gigawatts of power. Meta's Hyperion datacenter will be even larger. Assuming, of course, that the multi-year project is actually completed, it's expected to swell to more than 5 gigawatts, roughly 150x the design capacity of El Capitan.
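That 150x figure only makes sense against a design capacity for El Capitan of a few tens of megawatts; the 33 MW used in the quick check below is an illustrative ballpark, not an official number, and the GPU count is the one quoted above.

    # Rough scale check for the comparison above. El Capitan's design
    # capacity here is an assumed ~33 MW ballpark, not an official figure.
    stargate_gw = 1.2           # OpenAI's first Stargate site
    hyperion_gw = 5.0           # Meta Hyperion's eventual target
    stargate_gpus = 400_000
    el_capitan_mw = 33          # assumption for illustration

    print(f"Stargate vs El Capitan: ~{stargate_gw * 1000 / el_capitan_mw:.0f}x")    # ~36x
    print(f"Hyperion vs El Capitan: ~{hyperion_gw * 1000 / el_capitan_mw:.0f}x")    # ~152x
    print(f"All-in power per Stargate GPU: ~{stargate_gw * 1e6 / stargate_gpus:.1f} kW")  # ~3 kW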
These deployments aren't just larger; they're designed to support some of the densest machines ever built. Today, Nvidia's [7]NVL72 racks are rated for anywhere between 120 kW and 140 kW each, putting them roughly on par with HPE Cray's EX cabinets, which, while higher in capacity, are twice the size of a typical rack.
Needless to say, at this density, liquid cooling is a foregone conclusion, and that means that for every eight NVL72 racks (what Nvidia calls a "Superpod"), datacenter operators need a coolant distribution unit (CDU) with at least a megawatt of cooling capacity. While some datacenters have the facility water systems to support these deployments, many don't.
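The megawatt figure falls straight out of the rack ratings. A minimal sizing sketch, assuming eight racks per Superpod and the 120-140 kW range quoted above:

    # Heat load of an eight-rack NVL72 "Superpod", using the rack power
    # range quoted above. Illustrative only, not vendor sizing guidance.
    racks_per_superpod = 8
    rack_kw_low, rack_kw_high = 120, 140

    low_mw = racks_per_superpod * rack_kw_low / 1000
    high_mw = racks_per_superpod * rack_kw_high / 1000
    print(f"Superpod heat load: {low_mw:.2f}-{high_mw:.2f} MW")   # 0.96-1.12 MW

Hence the roughly one-megawatt CDU per Superpod as the baseline.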
At SC25, numerous thermal management vendors, including Vertiv, Nidec, nVent, and others showed off CDUs capable of dissipating anywhere from a few hundred kilowatts to more than two megawatts of power.
[9]
Nidec was one of several thermal management vendors showing off high-density coolant distribution units (CDUs) with upwards of 2 MW of capacity, enough for about 16 GB200 NVL72 racks today or three of Nvidia's 600 kW Kyber racks
If you're not familiar, CDUs are responsible for pumping coolant to and from connected systems, and exchanging the captured heat by way of either a liquid-to-air or liquid-to-liquid heat exchanger. Once captured, that heat has to be rejected to the atmosphere, something that's usually accomplished using cooling towers from the likes of Danfoss or Fourier.
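To put "a megawatt of cooling capacity" in plumbing terms, the sketch below applies the standard heat-capacity relation (heat carried = flow x specific heat x temperature rise) for a plain water loop. The 10 degree rise is an assumption for illustration; real facility loops and coolant blends will differ.

    # Approximate coolant flow needed to move a given heat load, using
    # heat = m_dot * cp * dT for plain water and an assumed 10 degC rise.
    def required_flow_lpm(heat_kw: float, delta_t_c: float = 10.0) -> float:
        cp_water = 4186.0                                # J/(kg*K)
        m_dot = heat_kw * 1000 / (cp_water * delta_t_c)  # kg/s
        return m_dot * 60                                # ~1 kg per litre -> L/min

    for load_kw in (1000, 2000):                         # 1 MW and 2 MW CDUs
        print(f"{load_kw/1000:.0f} MW: ~{required_flow_lpm(load_kw):.0f} L/min at a 10 degC rise")

Roughly 1,400 litres per minute per megawatt, which is why CDUs are rack- or row-scale appliances rather than bolt-on kits.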
[10]
Datacenter cooling towers aren't exactly the kind of thing you can easily fit into an exhibition booth, so Danfoss has gone back to tried-and-true dioramas
These datacenters are only going to get denser as time goes on. Within the next two years, Nvidia plans to bring racks to market with as many as 576 GPU dies and 600 kW of system power with its Kyber reference designs. At this density, a CDU that could handle an entire Superpod's worth of NVL72s can barely manage one and a half 600 kW Kyber racks.
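Running the same arithmetic forward shows how little headroom is left. Again, these are purely illustrative figures taken from this article:

    # How far a ~1 MW, Superpod-class CDU stretches once racks hit 600 kW.
    cdu_capacity_kw = 1000      # the Superpod baseline above
    kyber_rack_kw = 600         # Nvidia's stated Kyber rack power

    print(f"Kyber racks per 1 MW CDU: {cdu_capacity_kw / kyber_rack_kw:.2f}")      # ~1.67
    print(f"Kyber racks per 2 MW CDU: {2 * cdu_capacity_kw / kyber_rack_kw:.2f}")  # ~3.33

That lines up with the "barely one and a half racks" point above, and with the roughly three Kyber racks a 2 MW unit can manage.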
[11]
By 2027, Nvidia CEO Jensen Huang expects racks to surge to 600 kW with the debut of the Rubin Ultra NVL576
Further complicating matters is that, unlike a traditional enterprise datacenter, these systems don't use AC power directly. Instead, these racks run entirely on DC delivered via bus bars at the back of the racks.
Today, systems like Nvidia's GB200 or GB300 NVL72 run on 54-volt DC power, but with the move to 600 kW racks, Nvidia is now adopting an 800-volt architecture.
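The driver for the voltage jump is bus bar current. A hedged comparison, ignoring conversion losses, using the rack powers quoted in this article: 130 kW is taken as the midpoint of the NVL72 range, and the 1,500-volt figure is the one Eaton is reportedly eyeing, mentioned further down.

    # Bus bar current at different DC distribution voltages (I = P / V),
    # ignoring conversion losses. Rack powers are figures from this article.
    def busbar_amps(rack_kw: float, volts: float) -> float:
        return rack_kw * 1000 / volts

    print(f"NVL72, ~130 kW at 54 V:   ~{busbar_amps(130, 54):,.0f} A")    # ~2,400 A
    print(f"Kyber, 600 kW at 54 V:    ~{busbar_amps(600, 54):,.0f} A")    # ~11,000 A
    print(f"Kyber, 600 kW at 800 V:   ~{busbar_amps(600, 800):,.0f} A")   # 750 A
    print(f"Kyber, 600 kW at 1,500 V: ~{busbar_amps(600, 1500):,.0f} A")  # 400 A

Keeping a 600 kW rack at 54 volts would mean five-figure currents through the bus bars, which is the practical reason for the 800-volt, and eventually higher-voltage, architectures.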
At SC25, power systems vendors Eaton and Vertiv both showed off 800 V sidecar power racks designed specifically for high-density deployments.
[12]
At SC25, Eaton showed off its latest power sidecars, which increase voltages to 800 V in preparation for Nvidia's next-gen 600 kW rack platform
In addition to all of the electronics to convert from AC to DC, each of these sidecars is packed to the gills with batteries and capacitors. Eaton tells us that the engineering behind these systems was heavily influenced by the companies working on electric vehicles. These batteries effectively function as a UPS, providing up to 90 seconds' worth of clean power in the case of a brownout.
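Ninety seconds sounds short until you work out the stored energy involved. A rough sketch, assuming the sidecar carries a full 600 kW rack for the whole window and ignoring conversion losses and battery derating:

    # Stored energy for a 90-second ride-through of one 600 kW rack,
    # ignoring conversion losses and battery derating (illustrative only).
    rack_kw = 600
    ride_through_s = 90

    energy_mj = rack_kw * ride_through_s / 1000     # kW * s = kJ, shown as MJ
    energy_kwh = rack_kw * ride_through_s / 3600    # kWh
    print(f"~{energy_mj:.0f} MJ (~{energy_kwh:.0f} kWh) of usable storage per rack")

Around 54 MJ, or roughly 15 kWh, on the order of a plug-in hybrid's battery pack, which goes some way to explaining the EV influence Eaton describes.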
Even at 800 volts, these sidecars are only enough to support a single 600 kW Kyber rack. Because of this, companies like Eaton are already looking at liquid-cooled sidecars that boost the DC voltage to 1,500 volts.
[13]Scientific computing is about to get a massive injection of AI
[14]Europe joins US as exascale superpower after Jupiter clinches Top500 run
[15]Nvidia-backed photonics startup Ayar Labs eyes hyperscale customers with GUC design collab
[16]Need AI? Dell backs up the truck and tips out servers, storage, blueprints
Sidestepping the power crunch
For these higher-voltage sidecars to matter, you first need adequate utility power to support them, which is by no means guaranteed. Hyperscalers like Meta have been forced to finance local utilities' construction of large-scale gas generation plants to fuel their AI ambitions.
Building out these plants takes time, which is a particular problem when the GPU servers are ready to run before the power to feed them is.
To get around this, some bit barn builders have taken to using mobile natural gas plants. xAI is using [18]these plants as a stopgap to power its 200,000 GPU Colossus supercomputer in Memphis, Tennessee. At one point, the most unhinged chatbot on the internet was running on more than 35 of these portable generators.
Even after a second substation was completed, xAI said it would keep 15 of the turbines onsite for backup power.
[19]
Hydrogen-powered turbine power plants probably aren't the kinds of things you'd expect to see on the show floor of a supercomputing conference
Datacenter power has become such a bottleneck that Mitsubishi Power was showing modular power plants — or at least miniatures of them — and even turbines designed specifically to run on hydrogen gas.
Given the scale of datacenter buildouts today, we wouldn't be surprised to see small modular reactor (SMR) startups like X-Energy or Kairos Power at next year's SC. ®
[1] https://regmedia.co.uk/2025/11/20/vertiv.jpg
[2] https://www.theregister.com/2024/11/18/top500_el_capitan/
[6] https://www.theregister.com/2025/10/06/stargate_openai_amd/
[7] https://www.theregister.com/2024/03/21/nvidia_dgx_gb200_nvk72/
[9] https://regmedia.co.uk/2025/11/20/nidec_2mw_cdu.jpg
[10] https://regmedia.co.uk/2025/11/20/danfoss_cooling_tower.jpg
[11] https://regmedia.co.uk/2025/03/18/vera_rubin_nvl576.jpg
[12] https://regmedia.co.uk/2025/11/20/eaton_800v_power.jpg
[13] https://www.theregister.com/2025/11/18/future_of_scientific_computing/
[14] https://www.theregister.com/2025/11/17/europe_jupiter_supercomputer/
[15] https://www.theregister.com/2025/11/16/ayar_guc_collab/
[16] https://www.theregister.com/2025/11/17/dell_ai_lineup/
[18] https://www.theregister.com/2025/05/08/xai_turbines_colossus/
[19] https://regmedia.co.uk/2025/11/20/mitsubishi_heavy_industry_turbine.jpg