
No-Nvidia interconnect club delivers 2.0 spec before v1.0 silicon ships

(2026/04/07)


The UALink Consortium, a group of tech giants working on GPU networking standards to provide an alternative to Nvidia's NVLink and NVSwitch, has released new specs, but is still months away from shipping silicon.

Nvidia has come to dominate the market for the high-speed networks and switches needed to run hundreds of GPUs in concert. The company's kit isn't cheap and doesn't always play nicely with GPUs from other suppliers. Ethernet can do the job of connecting diverse GPU fleets and is an appealing alternative because it's pervasive, although the venerable standard can't match the performance of Nvidia's networking products.

The UALink Consortium wants to create an alternative to Nvidia's interconnect that works with any accelerator and matches the jolly green giant's performance. The group thinks emerging "neoclouds" that specialize in hosting AI systems will appreciate the chance to build one interconnect capable of handling any GPUs they deploy.

The group's plan is to create open specs that members can build into silicon and devices. The end result will, in theory, look a lot like the Ethernet ecosystem – vendors and other stakeholders working together on a spec then building compatible products while each tries to make their products stand out.

Version 1.0 of the UALink spec [4] appeared in April 2025. The consortium published version 2.0 today.

The big change is the new 200G Data Link and Physical Layers (DL/PL) Specification, which splits the UALink Common Specification by creating one workstream for the group's protocol and the transport layer, and another for I/O tech. As explained to The Register by UALink Consortium chair Kurtis Bowman, this means the group can build for the 200G networks of today, the 400G networks that will soon be available, and whatever comes next at the physical layer.


The group also delivered version 2.0 of its Common Specification, which adds support for in-network compute, a technique that reduces the number of messages that need to be sent between GPUs to schedule work. Less bandwidth expended on messages means more bandwidth available for data, and faster operation for AI workloads.
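The consortium hasn't published implementation details for its in-network compute feature, but the bandwidth argument can be illustrated with a toy message count for an all-reduce across N accelerators. This is a hypothetical sketch of the general technique, not UALink's actual protocol: when the switch performs the reduction, each accelerator exchanges messages only with the switch rather than with every peer.

```python
# Toy message counts for an all-reduce across n accelerators.
# Illustrative only - not UALink's actual wire protocol.

def naive_allreduce_messages(n: int) -> int:
    # Every accelerator sends its partial result to every other
    # accelerator: n * (n - 1) messages in total.
    return n * (n - 1)

def in_network_allreduce_messages(n: int) -> int:
    # With switch-side reduction, each accelerator sends one message
    # up to the switch and receives one combined result back: 2n total.
    return 2 * n

for n in (8, 64, 512):
    print(n, naive_allreduce_messages(n), in_network_allreduce_messages(n))
```

At 512 accelerators the naive scheme needs 261,632 messages against 1,024 for switch-side reduction, which is why offloading the reduction frees so much link bandwidth for data.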

UALink Manageability Specification 1.0 is another new offering; it lets users of management tools such as the gRPC Network Management Interface (gNMI), YANG, SAI, and Redfish apply them to UALink networks.


Also coming is a chiplet spec that sets out how to include UALink silicon in systems-on-a-chip, meaning it's possible to embed UALink in more devices without standalone silicon.

Not that vendors can get UALink silicon yet – Bowman told us chips implementing the group's 1.0 spec will reach labs in the second half of 2026, ship in 2027, and appear in products later that year.

By then, UALink will have delivered version 3.0 specs – long before version 2.0 silicon debuts.


Bowman admits that versions 1.0 and 2.0 won't be full competitors to Nvidia, and that only by version 3.0 – due about this time next year – will UALink achieve parity in terms of performance and release cadence.

UALink can therefore seem a little quixotic, but Bowman thinks the tilt is worth it because many AI outfits don't want to build siloed systems or be tied to a single vendor.

Gross margins at Nvidia topped 70 percent last quarter, suggesting customers are willing to pay handsome prices for NVLink and NVSwitch. The UALink Consortium hopes its approach will deliver products that offer an alternative in terms of price and capability.

Nvidia, meanwhile, is not standing still. Last year, it [11]introduced NVLink Fusion, which broadens access to its interconnect technology beyond Nvidia-only GPU deployments. ®





[4] https://www.theregister.com/2025/04/08/ualink_200g_version_1/



[11] https://www.theregister.com/2025/05/19/nvidia_nvlink_fusion/



