Your datacenter's power architecture called. It's not happy

(2026/03/11)


Feature Hyperscale computing was built on a foundation of certainty. For years, 12V and 48V rack architectures – the latter typically implemented at a steady 50–54 VDC (volts of direct current) – ruled the datacenter floor, engineered to perfection for power densities of 10–15 kW per rack. These systems were finely tuned machines, optimized around the predictable, steady-state demands of general-purpose CPUs and storage servers. The infrastructure was stable. The math was settled.

Then accelerated computing arrived, and blew the entire playbook apart.

GPU clusters and AI accelerators don't operate on the old rules. They don't ask for 15 kW. They demand hundreds of kilowatts per rack, an order-of-magnitude leap that legacy electrical and thermal architectures were never designed to survive. The comfortable assumptions baked into decades of datacenter design are now liabilities, and the industry is facing a reckoning it can no longer defer.

The Nvidia GB200 NVL72 rack-scale system, for example, requires 120 kW per rack. At these power levels, the physics of low-voltage distribution starts to break down: delivering 120 kW at 48V requires currents exceeding 2.5 kA. Handling thousands of amperes within a rack means thick busbars, heavy copper mass, overheating connectors, significant resistive losses, and serviceability headaches.
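
To put numbers on that, here is a quick back-of-the-envelope sketch in Python using the figures above (I = P/V, ignoring conversion losses):

```python
# Current needed to deliver a given rack power at a given bus voltage.
def bus_current(power_w: float, voltage_v: float) -> float:
    """I = P / V, ignoring conversion losses."""
    return power_w / voltage_v

rack_power = 120_000  # 120 kW, the GB200 NVL72 figure above

for volts in (48, 400, 800):
    print(f"{rack_power / 1000:.0f} kW at {volts} V -> "
          f"{bus_current(rack_power, volts):,.0f} A")

# 120 kW at 48 V  -> 2,500 A
# 120 kW at 400 V -> 300 A
# 120 kW at 800 V -> 150 A
```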

AI has pushed the industry beyond the 48V comfort zone, to the point where the limiting factor is safely and efficiently carrying the current. One emerging solution is to raise the distribution voltage to 400V or 800V, which cuts the current required at the same power level. This is why the industry is now moving to high-voltage DC (HVDC) power architecture for next-generation AI factories.

Challenges with 48V power distribution

Let's talk about the current-squared problem and resistive losses. Power distribution efficiency is governed by Joule heating: P_loss = I²R. Because loss scales with the square of the current, even small reductions in current yield large reductions in wasted power.

In this equation, power loss scales linearly with resistance but quadratically with current. This creates a non-linear disadvantage for low distribution voltages as power requirements scale: as rack power demand increases, the current required to deliver that power at a fixed low voltage rises, and the losses rise with its square.

For the NVL72 rack system, the busbar must be capable of handling a peak electrical power of approximately 192 kW, corresponding to more than 3.8 kA on the 48V bus. Even with an optimized busbar resistance of 0.1 mΩ (0.0001 Ω), which is difficult to achieve across a full rack height with multiple joint interfaces, the resistive loss is significant: at the 2.5 kA continuous figure, I²R works out to 625 W, rising toward 1.5 kW at the 3.8 kA peak.

However, in real-world deployments, resistance includes contact interfaces, cable terminations, and internal shelf impedances. All of these drive the total path resistance toward 0.5 mΩ or higher in complex distributions. At 0.5 mΩ, losses increase to 3125 W.

In contrast, for an equivalent power-distribution path resistance, the 800V scenario carrying 150 A yields just 2.25 W of loss. Even if the higher-voltage infrastructure uses thinner connectors with 10x the resistance (1 mΩ), the loss is still only 22.5 W. The shift to 800V reduces distribution losses by orders of magnitude, and those recovered kilowatts can be spent on computing rather than on heating the busbar.
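
A minimal sketch reproducing the loss figures above – the currents and resistances are the article's numbers, and the rest is just I²R:

```python
# Joule loss P = I^2 * R for the 48V and 800V scenarios discussed above.
def joule_loss(current_a: float, resistance_ohm: float) -> float:
    return current_a ** 2 * resistance_ohm

scenarios = [
    # (label, current in amps, path resistance in ohms)
    ("48V bus, 2.5 kA, optimistic 0.1 mOhm",        2_500, 0.0001),
    ("48V bus, 2.5 kA, realistic 0.5 mOhm",         2_500, 0.0005),
    ("800V bus, 150 A, same 0.1 mOhm",                150, 0.0001),
    ("800V bus, 150 A, 10x the resistance, 1 mOhm",   150, 0.001),
]

for label, amps, ohms in scenarios:
    print(f"{label}: {joule_loss(amps, ohms):,.2f} W")

# Output: 625.00 W, 3,125.00 W, 2.25 W, 22.50 W respectively.
```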

Copper overload and contact resistance

Ampacity, the maximum current a conductor can carry before exceeding its temperature rating, is a function of cross-sectional area. As current increases, the conductor's cross-section must grow to keep current density, and therefore temperature, within acceptable limits.

To carry 2.5 kA at 48V, OCP Open Rack v3 (ORv3) specifications depend on a massive, solid copper busbar. A busbar sized for that current is seriously heavy, imposing structural loads on datacenter infrastructure and occupying volume needed for airflow and liquid cooling.
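
For a rough feel of the copper involved, here is an illustrative estimate; the ~2 A/mm² design current density is our assumption for a passively cooled solid bar, not a figure from the ORv3 spec:

```python
# Rough copper-mass estimate for a 2.5 kA busbar.
# ASSUMPTION: ~2 A/mm^2 is a conservative design current density for a
# passively cooled solid copper bar; real designs vary widely.
COPPER_DENSITY = 8960   # kg/m^3
CURRENT_DENSITY = 2e6   # A/m^2 (2 A/mm^2, assumed)

current = 2_500                                  # amps
cross_section = current / CURRENT_DENSITY        # m^2
mass_per_meter = cross_section * COPPER_DENSITY  # kg/m

print(f"Cross-section: {cross_section * 1e6:,.0f} mm^2")    # ~1,250 mm^2
print(f"Copper mass:   {mass_per_meter:.1f} kg per meter")  # ~11 kg/m
```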

Nvidia claims that an 800VDC power distribution architecture enables a copper reduction of up to 45 percent compared with traditional configurations. In the dense environment of an AI rack, where airflow or liquid cooling competes for space, the volume occupied by power delivery is a crucial constraint.

Connector physics is the third barrier: contact resistance. As current rises, the voltage drop across mechanical interfaces increases, generating localized heat. At 2.5 kA, a contact resistance degradation of just 0.1 mΩ produces 625 W of localized heating.

The new power hierarchy

The power hierarchy is divided into four layers. At the top (utility distribution), power enters as medium-voltage AC (typically ~13.8 kV). This layer remains similar to traditional facilities, since medium-voltage AC is efficient for transmitting power over distance. The key change is what happens next in the datacenter: instead of multiple conversions and step-downs scattered throughout, new designs convert AC to DC once and then distribute it.

At the facility level, the emerging approach is centralized AC-to-DC conversion with a high-voltage DC output. By rectifying to DC near the source, datacenters can eliminate many intermediate AC/DC conversions, improving efficiency and reliability.

This concept is central to the Nvidia 800VDC solution: convert the 13.8 kV AC feed to 800VDC at the perimeter using industrial rectifiers, then bus 800VDC throughout the datacenter. Fewer conversion stages also simplify backup; battery systems, for example, can be connected directly to the DC bus.

Today’s state-of-the-art racks use 48-54 VDC busbars. In ORv3, each rack has one or more power shelves that receive facility AC (or DC) and output 50V DC to a busbar serving all servers. A typical ORv3 power shelf is a 1U unit providing up to 15 or 18 kW, and multiple shelves can be paralleled to support higher rack loads.

For instance, Eaton’s ORv3 shelf delivers 18 kW in 1U and connects to the 48V busbar. This architecture is a significant improvement over 12V racks. However, with AI racks now targeting 100+ kW, even 48V ORv3 is nearing its practical limits. Future HVDC racks will likely accept an 800V feed and use high-efficiency DC/DC converters to step down to the 48V or 12V domain at the shelf level.
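
As a quick illustration of the parallelization involved (the 18 kW shelf rating is the figure above; the N+1 redundancy policy is an assumption for illustration):

```python
import math

# How many 18 kW ORv3 power shelves does a 120 kW AI rack need?
rack_load_kw = 120  # GB200 NVL72-class load
shelf_kw = 18       # ORv3 shelf rating quoted above

needed = math.ceil(rack_load_kw / shelf_kw)  # 7 shelves just to carry the load
with_redundancy = needed + 1                 # ASSUMPTION: N+1 redundancy

print(f"{needed} shelves for {rack_load_kw} kW, {with_redundancy} with N+1 "
      f"-> {with_redundancy}U of rack space on power shelves alone")
```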

Ultimately, each server or accelerator board must convert to the low voltages used by chips. High-current voltage regulator modules take 12V or 48V input and generate sub-1V for processors. As rack distribution voltages rise, the burden on on-board power electronics grows. This is where GaN (gallium nitride) and SiC (silicon carbide) devices are increasingly used in both front-end DC/DC and intermediate bus converters.
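
To see why per-stage efficiency matters at these power levels, here is an illustrative sketch of a chained step-down path; the stages and efficiencies are assumptions for illustration, not a published Nvidia or OCP design:

```python
# Cumulative efficiency of a chained step-down path.
# ASSUMPTION: stages and per-stage efficiencies are illustrative only.
stages = [
    ("800V -> 48V DC/DC shelf",          0.98),
    ("48V -> 12V intermediate bus",      0.98),
    ("12V -> sub-1V voltage regulators", 0.92),
]

rack_power = 120_000.0  # watts arriving on the 800V feed
power = rack_power
for name, eff in stages:
    lost = power * (1 - eff)
    power *= eff
    print(f"{name}: {eff:.0%} efficient, ~{lost / 1000:.1f} kW lost as heat")

print(f"Delivered to silicon: ~{power / 1000:.1f} kW of {rack_power / 1000:.0f} kW")
# ~88% end-to-end with these numbers: every point of stage efficiency
# is kilowatts of heat at rack scale.
```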

Navitas Semiconductor, for example, has announced new GaN and SiC components for Nvidia's 800VDC AI architecture, promising higher efficiency and power density from the grid to the GPU.

However, today’s AI GPU workloads can swing their power draw by large amounts in milliseconds as different layers of a neural network hit the hardware. An inference run might have all 72 GPUs in a rack idling one moment, then each suddenly drawing its maximum as they synchronize for an all-reduce operation. These step-load transients pose challenges beyond simply supplying large amounts of power.

At rack scale, many GPUs operating simultaneously can cause compound transients, in which currents and voltages fluctuate across the power distribution network. Engineers therefore worry about voltage droop on a board’s 48V or 12V rail when a GPU goes from 0 to 100 percent load in microseconds, and about dI/dt inductive effects along busbars and cables that cause momentary voltage dips.
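
For a rough sense of the inductive dip, the droop across a distribution path is V = L × dI/dt; the inductance and ramp figures below are assumptions for illustration:

```python
# Inductive voltage dip V = L * dI/dt during a step-load transient.
# ASSUMPTION: the path inductance and current ramp are illustrative values.
path_inductance = 100e-9  # 100 nH of busbar/cable inductance (assumed)
current_step = 2_000      # amps: rack load stepping up by 2 kA (assumed)
ramp_time = 50e-6         # seconds: load ramps in 50 microseconds (assumed)

droop = path_inductance * current_step / ramp_time
print(f"Momentary dip: {droop:.1f} V")
# 4.0 V: painful on a 48V or 12V rail, negligible on an 800V bus.
```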

To mitigate these bursts, engineers are increasingly treating energy storage as a first-class component of the architecture. Nvidia says that energy storage solutions to handle load spikes and sub-second-scale GPU power fluctuations are part of its 800VDC rack strategy.

From ORv3 to 800V

The current generation of datacenter power architecture was a significant step up, moving from 12V motherboard-centric distribution to modular, efficient 48V rack-level distribution. The widespread adoption of ORv3 by hyperscalers and OCP members has created a large ecosystem of 48V power shelves, busbars, and compatible servers.

[6]Only one in five Euro datacenters AI-ready as builders battle land and labor blues

[7]Qualcomm announces AI accelerators and mysterious racks they'll run in

[8]AI giants call for energy grid kumbaya

[9]Enterprises in for a shock when they realize power and cooling demands of AI

ORv3 racks have become the backbone of AI deployments up to 80-100+ kW, reached through extensions and heavy parallelization of 48V power distribution. Meta and Microsoft, for instance, have converged on 48V rack designs, as seen in their OCP contributions.

The latest contribution from Nvidia to OCP shows an enhanced 48V busbar design rated for currents on the order of 1400 A per segment, highlighting how the community is extracting additional headroom from low-voltage architectures. These efforts also indicate that we are approaching the limits of low-voltage distribution in terms of current and heat.

The next logical step is the development of higher-voltage DC distribution standards. We are in a transition period with many racks that will continue to use 48V for a while, but new builds aimed at massive AI computing are already planning for HVDC. Companies like Eaton, Vertiv, and Delta are developing 800V-compatible rectifiers, converters, and power electronics in anticipation of these changes. ®



[6] https://www.theregister.com/2026/02/11/ai_datacenters_bcs/

[7] https://www.theregister.com/2025/10/28/qualcom_ai_accelerators/

[8] https://www.theregister.com/2025/08/22/microsoft_nvidia_openai_power_grid/

[9] https://www.theregister.com/2025/01/15/ai_power_cooling_demands/



Toasty

Pete 2

> peak electrical power of approximately 192 kW, corresponding to more than 3.8 kA. ... the resistive loss is significant. Using Joule resistive loss, the resistive loss comes to 625 W.

Which compared with the 192kW that the processing units consume (and that all gets converted into heat) is peanuts.

I presume the problem is not getting a several-foot-long busbar to dissipate a few hundred watts, but that the heat is concentrated at a few critical points.

Re: Toasty

short

Yes, very much so - granted, copper's a good conductor of heat as well as electricity, so it spreads out a bit, but you still have to get it out. Hence water-cooled busbars, cables, connectors, and as much monitoring as you can shake a stick at.

These power densities are a PITA, when can we have proper (cheap, fast) optical interconnects?

short

The time, effort, energy, materials and money we're spending to pack it all in this tight, all feel like a solution to a short term problem that will be made vastly easier once we can shuffle data a few more meters / nanoseconds over fast fibre.

I'm designing stuff into this market but it's going to be a race between (this instance of) AI vanishing up its own fundament, and comms meaning we can back the densities off a bit, make it all feel a bit speculative (much like the whole enterprise). Still, emperor's got to have his invisible clothes, and people are prepared to pay...

Re: These power densities are a PITA, when can we have proper (cheap, fast) optical interconnects?

short

https://www.theregister.com/2026/03/11/ayar_labs_wiwynn_photonics/

Ah, here we go, this sort of thing. Come on, full production as soon as possible please.

Or, of course, we could just push our racks into a circle, like old Cray X-MP, so you can get more front panels within a meter's reach and stay with copper. Of course, if you pack, say, 6 racks in a circle, each running at a Megawatt, you should be able to hang-glide on the thermals coming off that chimney.

It's quite simple

frankvw

Given the need for water cooling, [1]xkcd has had the answer for many years.

[1] https://what-if.xkcd.com/91/

The fireworks should be spectacular

Bebu sa Ware

I can just barely begin to imagine an 800VDC copper vapour arc with heaven only knows how many amps passing through it in the microseconds before the superheated metal vapour reacts rather violently with the oxygen.

To my untutored mind I might have thought multiphase 800V AC at a couple hundred Hz to the rack units that convert down to 50VDC or whatever might be more manageable.

I guess it is a really good time to be a power engineer. The mechanical engineers that are the full bottle on massively scaleable cooling technologies are also getting season tickets for this gravy train. One is feeding the beast and the other carrying away the excrement. :)

Re: The fireworks should be spectacular

short

What's exciting is - these are similar voltages and currents that we're letting the general public poke into their EVs at fast chargers. Those connectors get a hard life, they live outside, and, did I mention, general public and all that that entails.

Sure, there are pilot connections, interlocks, some decent engineering, but still, 1000A and 800V DC. Water cooled cables and connectors, theoretical service and contact replacement intervals, but the dirt, the sticky fingers, the brutal yanking of cables, the driving over and into things. It's going to be a delight. (Yeah, yeah, petrol's fun too, no argument there)

Re: The fireworks should be spectacular

JWLong

Let's just go old school on this, we can supply whatever your needs be!

https://kemptonsteam.org/wp-content/uploads/2020/11/Headerjjj-template.png

I've worked on a few things using mercury arc rectification tech from the far past, things like overhead cranes, subway systems and such.

DC voltage regulation, I've heard of it. (LoL)

Now, just add a bit of water cooling to it and let the fireworks begin!

A Non e-mouse

The current/power levels we're talking about are, to my mind, insane.

Would we be better off just taking a plasma feed direct from the warp core?

Safety?

Red Ted

One small issue that immediately springs to mind is safety, as a high-current supply at 800VDC is quite lethal.

If you accidentally touch a 48VDC bus bar, you will probably be fine. Touch an 800VDC bus bar and it will probably kill you.

For a sense of scale, these power systems are on a par with the sort of power system that is used for municipal trams!

Why bother?

vtcodger

Sheesh!!! Is there a point to all this? From what I've seen, AI looks to be kinda cute but more or less useless except perhaps for the entertainment industry. Build a few research centers to explore quietly and objectively whether AI has any practical, cost-effective uses, let Disney et al. see if it makes movie/TV production cheaper/better, and move on to trying to solve real-world problems like getting a reasonable standard of living for everyone on the planet that wants it, and the practice of electing absolutely horrible human beings to political office.

Would it really make all that much difference in the long run to anyone except Sam Altman if humanity held off on AI for a few decades?

Re: Why bother?

Nick Porter

Did you write this comment in 2020 and only just post it now? Take a look at Claude Code if you think that AI is only of interest to the entertainment industry. AI is replacing swathes of IT, programming and clerical jobs, causing a huge downturn in new graduate hiring, and destroying jobs from copywriting, to commercial music writing, to document translation. It doesn't need to be better than a human, it doesn't even need to be as good as a human; it just needs to be cheaper than a human, and pretty much every industry is buying into it.

Re: Why bother?

Roland6

I think AI is turning the IT industry and in turn IT user industries into entertainment industries…

this will shift the power losses around

Timo

Increasing the supply voltage 10x will reduce the current by 10x and the I²R loss by 100x, which is what you've shown.

But current computing equipment doesn't run on 800V, so somewhere in the chain there will need to be power converters or power supplies to get the voltage levels down to usable levels. These power converters will need to be extremely efficient to minimize the conversion loss, which also comes out as heat. Go back and do some of those conversions: 15 kW at 90% efficiency means 1.5 kW is lost in the conversion process; at 95%, 750 W still goes to heat.

Re: this will shift the power losses around

short

At more than trivial loads, I'd be expecting efficiencies of over 98%, which isn't too hard to manage spread over a couple of U of rack, especially with water cooling.

OCP V3 mandated 97.5% and that was with the extra inefficiency of a rectification stage (and was years ago).

You'll still have to do a couple of conversions, I don't think anyone's proposing a single step from 800V to 0.6V (or whatever). But big switchers are quite efficient.

The DC-ish currents down at these voltages will be outrageous, and I have some respect for people herding tens of kA around a PCB, in and out of stacks of silicon...
