Bandwidth hogs rejoice, Celestica's latest switch is bristling with 64 ports of 1.6 Tbps Ethernet
(2026/04/30)
- Reference: 1777573106
- News link: https://www.theregister.co.uk/2026/04/30/bandwith_hogs_rejoice_celesticas_latest/
If you thought 800 Gbps Ethernet was fast, just wait. Celestica's latest switches cram 64 1.6 Tbps ports into a single chassis.
The networking vendor this week began taking orders for its [1]DS6000 family of switches, which are aimed primarily at high-performance computing applications like AI training and inference.
The switches will be offered in both a 19-inch 3U air-cooled chassis and an OCP-compliant 21-inch design that uses a combination of air and liquid cooling.
At the heart of the switches lives Broadcom's 102.4 Tbps Tomahawk 6 ASIC, which we [3]looked at in detail late last spring. The chip is Broadcom's first to use the 200 Gbps serializer-deserializers (SerDes) required for 1.6 Tbps connectivity.
Each of Celestica's 64 OSFP224 ports aggregates eight 200 Gbps links. These ports can also be split using breakout cables to boost the switch's radix if required.
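The arithmetic behind those figures is straightforward. A quick sketch, using only the numbers quoted in this article (not a datasheet):

```python
# Back-of-the-envelope check of the DS6000 figures quoted above.
ASIC_CAPACITY_GBPS = 102_400   # Broadcom Tomahawk 6: 102.4 Tbps
SERDES_GBPS = 200              # 200 Gbps per SerDes lane
LANES_PER_PORT = 8             # eight lanes aggregated per OSFP224 port

port_speed = SERDES_GBPS * LANES_PER_PORT     # 1,600 Gbps = 1.6 Tbps
ports = ASIC_CAPACITY_GBPS // port_speed      # 64 ports
total_lanes = ports * LANES_PER_PORT          # 512 SerDes lanes on the ASIC

print(port_speed, ports, total_lanes)  # 1600 64 512

# Breakout cables trade port speed for radix: splitting each port into
# 2x 800 Gbps or 4x 400 Gbps links raises the effective port count.
for split in (1, 2, 4):
    print(f"{ports * split} ports at {port_speed // split} Gbps")
```

In other words, the 64-port radix falls directly out of dividing the ASIC's 102.4 Tbps switching capacity by the 1.6 Tbps port speed, and breakouts simply re-slice the same 512 lanes.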
Celestica's latest switch arrives as Nvidia and others push for faster scale-out networking. The GPU giant's latest ConnectX-9 SuperNICs, [10]launching alongside its Vera Rubin rack systems later this year, boast 1.6 Tbps of connectivity. However, rather than exposing one high-speed port, that bandwidth appears likely to be split across two 800 Gbps links for added redundancy and path diversity.
AMD is also sticking with multiple 800 Gbps ports for its first rack-scale AI compute platform. Each MI455X GPU will be [11]paired with three 800 Gbps Pensando Vulcano NICs.
However, it may not be long before we see even faster port speeds as networking vendors race to bring 400 Gbps SerDes to market.
Earlier this year, Broadcom revealed an optical digital signal processor capable of 400 Gbps per lane of connectivity, clearing the way for 3.2 Tbps optical transceivers. But before you get too excited, it'll be a while before there are 204.8 Tbps switches to plug those transceivers into.
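The same lane arithmetic explains those next-generation numbers. A sketch, assuming eight lanes per transceiver module (in line with today's OSFP-style optics; the article doesn't specify the lane count):

```python
# Applying the per-lane math to Broadcom's 400 Gbps optical DSP.
LANE_GBPS = 400
LANES_PER_MODULE = 8   # assumption: eight lanes per transceiver module

module_speed = LANE_GBPS * LANES_PER_MODULE   # 3,200 Gbps = 3.2 Tbps
print(module_speed)                            # 3200

# A hypothetical 204.8 Tbps switch ASIC would drive 64 such ports,
# keeping the same 64-port radix as today's Tomahawk 6 boxes.
print(204_800 // module_speed)                 # 64
```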
Even then, PCIe 6.0, which tops out at 800 Gbps on a standard x16 interface, will limit NIC port speeds for the foreseeable future. ®
[1] https://corporate.celestica.com/news-releases/news-release-details/celestica-accelerates-ai-scale-networking-ds6000-series-16tbe
[3] https://www.theregister.com/2025/06/04/broadcom_tomahawk_6/
[10] https://www.theregister.com/2026/01/05/ces_rubin_nvidia/
[11] https://www.theregister.com/2026/01/07/mi500x_amd_ai/
Anonymous Coward
AI customers are paying top dollar for this class of gear right now. Marketing just isn't as oriented towards the customers who only need it for Patch Tuesday.
Throatwarbler Mangrove
In fairness, there's almost no other workload which craves as much bandwidth, certainly not in the mass market. AI models and their insatiable demand for data have really driven a revolution in bandwidth capability at all levels of the tech stack.
Jim Willsher
It might be a bit oversized for my home LAN but I probably can't afford one anyway.
> optimised for AI
Geez. It's a fucking *switch*.