Oxide plans new rack attack, packing in Zen 5 CPUs and DDR5 RAM
(2026/02/13)
- Reference: 1771017612
- News link: https://www.theregister.co.uk/2026/02/13/whats_next_for_oxide_computer/
- Source link:
Remember that giant green rack-sized blade server Oxide Computer showed off a couple of years back? Well, the startup is still at it, having raked in $200 million in Series-C funding this week as it prepares to bring a bevy of new hardware to market with updated processing power, memory, and networking.
Founded in 2019 by a gaggle of former Joyent and Sun Microsystems engineers, the company set out to make the rack, not the server, the new unit of compute for the datacenter.
The result was a 7.8-foot-tall, [1]2,518-pound rack system rated for 15 kW of total power draw that runs a completely custom open source software stack.
Inside this behemoth are 32 hyperscale-inspired compute sleds, each packing 64 EPYC cores, up to 1 TB of memory, and 32 TB of NVMe storage, all connected via a backplane that not only provides power but also delivers up to 12.8 Tbps of switching capacity.
[3]
An Oxide Computer rack
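A quick back-of-the-envelope check puts those rack-level figures in per-sled terms (the even split across sleds is our assumption for illustration, not a published spec):

```python
# Published Oxide rack figures, divided across its 32 compute sleds.
RACK_POWER_KW = 15            # rated total power draw
SWITCH_CAPACITY_TBPS = 12.8   # backplane switching capacity
SLEDS = 32                    # compute sleds per rack
CORES_PER_SLED = 64           # EPYC cores per sled

# Assuming an even split across sleds (an illustration, not a spec):
watts_per_sled = RACK_POWER_KW * 1000 / SLEDS        # 468.75 W
gbps_per_sled = SWITCH_CAPACITY_TBPS * 1000 / SLEDS  # 400.0 Gbps
total_cores = SLEDS * CORES_PER_SLED                 # 2,048 cores per rack

print(watts_per_sled, gbps_per_sled, total_cores)
```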
However, that system launched two years ago, and, while impressive for its time, many of the Oxide rack's core components are getting rather long in the tooth.
AMD's Milan generation of EPYC processors, which powered the original Oxide rack, dates back to March 2021.
As such, the rack system is overdue for an upgrade, and according to CEO Steve Tuck, we'll be getting one in short order.
The Oxide rack gets a Zen 5 makeover
The upcoming gear will include a new generation of compute blades powered by AMD's [6]EPYC Turin processors, which launched a little over a year ago, boast up to 192 cores, and support faster 6400 MT/s DDR5 memory — a rather big upgrade over the comparatively glacial 3200 MT/s DDR4 that shipped in the OG Oxide rack back in 2023.
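To put numbers on that memory jump: a standard DDR channel is 64 bits (8 bytes) wide, so peak theoretical bandwidth scales directly with the transfer rate. A rough sketch (per-channel figures only; channel counts per socket will vary):

```python
def channel_bandwidth_gbs(transfer_rate_mts: int) -> float:
    """Peak theoretical bandwidth of one 64-bit (8-byte) DDR channel, in GB/s."""
    return transfer_rate_mts * 8 / 1000

ddr4 = channel_bandwidth_gbs(3200)  # 25.6 GB/s per channel
ddr5 = channel_bandwidth_gbs(6400)  # 51.2 GB/s per channel
print(ddr4, ddr5)  # DDR5-6400 doubles per-channel bandwidth over DDR4-3200
```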
"Turin gets us back to a good sweet spot where you get lots of cores and with as little wattage drawn as possible," Tuck tells El Reg .
Oxide hasn't said which SKUs it has opted for yet, but regardless of core count, the architectural improvements from AMD's Zen 3 to Zen 5 will be substantial, with an instructions-per-clock increase of more than 30 percent.
That alone would be a sizable leap, but it doesn't account for the fact that Turin also clocks much higher than Milan. The 64-core Milan-based EPYC 7713P found in Oxide's compute blade topped out at 3.67 GHz. By contrast, at the same core count, Turin is capable of hitting 5 GHz, albeit while drawing a fair bit more power and only on a few cores at any one time.
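Since single-thread performance scales roughly with IPC times clock speed, the two gains compound. A purely illustrative estimate using the figures above (real workloads will vary, and the 5 GHz boost applies to only a few cores at a time):

```python
ipc_gain = 1.30          # >30 percent Zen 3 -> Zen 5 IPC uplift cited above
milan_boost_ghz = 3.67   # EPYC 7713P maximum boost clock
turin_boost_ghz = 5.0    # 64-core Turin peak boost (few cores at once)

clock_gain = turin_boost_ghz / milan_boost_ghz  # ~1.36x
combined = ipc_gain * clock_gain                # ~1.77x, best case
print(round(clock_gain, 3), round(combined, 3))
```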
Turin will also bring AVX-512 support to Oxide's compute platform for the first time. These beefy vector extensions have become increasingly useful for agentic AI systems, but they were completely absent from the Milan generation and only partially supported on AMD's Zen 4 EPYC Genoa lineup.
Oxide CTO Bryan Cantrill, whom you may recognize for his time at Sun Microsystems, tells us Oxide ultimately opted to skip Genoa as it wasn't as compelling from a compute density standpoint compared to Turin, which we'll note was only about a year from launching when the company's first rack made its debut.
As we mentioned, the move to Turin will also see Oxide embrace DDR5, assuming it can find adequate supply. If you hadn't noticed, DDR5 might as well be gold right now.
Life after Tofino
Alongside the new compute and memory, Oxide is also evaluating new switch silicon to eventually replace the system's aging Tofino 2-based hardware.
You see, almost a year before the company revealed the Oxide rack to the world, Intel quietly ended development of the Tofino switch line. This posed a problem for Oxide, which had already invested heavily in developing software around the platform.
But rather than shelving the platform, Intel made a rather unusual decision to open source the Tofino P4 compiler.
"It's almost out of character with Intel, honestly. When Intel kills something, they just kind of want to forget that it ever happened," Cantrill said. "It was the dedication of the folks inside of Intel that really believed in the programmability [of the platform] that got the P4 compiler open sourced, and that has been essential for us."
Despite its age, the switch silicon is still more than enough to deliver 200 Gbps (2x 100 GbE links) of bandwidth to each of the Oxide rack's 32 compute blades.
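As a sanity check, the per-blade links sum to well under the backplane's stated switching capacity (we're assuming here that the 12.8 Tbps figure is aggregate):

```python
blades = 32
gbps_per_blade = 2 * 100                     # 2x 100 GbE links per blade
total_tbps = blades * gbps_per_blade / 1000  # 6.4 Tbps
print(total_tbps)  # half of the 12.8 Tbps capacity, leaving headroom
```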
Tuck also assures us that the company has no shortage of Tofino hardware to keep its existing racks running. "We have got no end in sight to where we will be able to build and deploy and support customers on that architecture," he said.
But ultimately, Tofino is a dead-end platform, and Oxide is already evaluating long-term replacements for it. One such option is Xsight Labs' [9]X2 switch silicon. Like Tofino, it is highly programmable, while also consuming less than 200 watts under load.
However, Tuck tells us the company is exploring other options as well, though given Oxide's preference for open hardware, we can't imagine there are all that many contenders.
More open hardware, software co-design
One of the things that sets Oxide apart from other hardware vendors is that it doesn't just rehash reference designs and call it a day. The company has taken a ground-up systems approach to building its hardware.
To put into perspective just how weird Oxide really is, the company didn't just take an off-the-shelf ASPEED BMC and strap it to a motherboard for lights-out management like almost every other board maker. Instead, it built its own service processor from scratch.
And while it initially planned to adapt reference boards for its compute sleds, the company ended up hiring an electrical engineering team to design its own, again from scratch.
"We believe more strongly than ever that what we actually need to develop reliable, scalable systems are parts [and] silicon that are clearly documented at the lowest layers of interface," Cantrill said.
Having said that, Oxide had to draw the line somewhere, and there are still proprietary blobs inside the Oxide rack. "We haven't done our own SSD," Cantrill said.
But while it might not make sense for a startup to develop custom controllers for everything, Cantrill expects the Oxide rack to have fewer of these "proprietary blobs," not more, as it evolves.
One of the driving forces behind Oxide's decision to build its hardware rather than buy it comes down to visibility.
"The problem that you often have with these proprietary layers is we don't know what's going on," Cantrill said. "For customers, it is deeply, deeply frustrating when you have these infrastructure issues."
With proprietary hardware, Oxide's options are limited if the supplier can't figure it out. By prioritizing open, well-documented platforms, and building its own hardware where necessary, Oxide hopes to sidestep this problem.
Are GPUs finally on the menu?
Up to this point, Oxide has largely focused on addressing general purpose compute demand rather than chasing AI riches.
"GPUs are, for sure, on the radar. They've always been on the radar from the beginning," Cantrill said. "I think the thing that has been a big eye-opener for us is the role that general purpose CPU plays in these AI workloads."
While GPUs and AI accelerators are required to train and run the models, many of the agentic features now grabbing headlines run on CPUs.
"When you're on your chatbot of choice, and you're using it to search the web, and you get the little searching the web wheel going around, that is not a GPU that's searching the web."
Cantrill says Oxide will offer GPUs at some point, but he argues the company has plenty of work left to do with regard to CPU, storage, and networking to keep it busy for now. ®
[1] https://www.theregister.com/2024/02/16/oxide_3000lb_blade_server/
[3] https://regmedia.co.uk/2024/02/16/handout_oxcide_computer_rack.jpg
[6] https://www.theregister.com/2024/10/10/amd_epyc_turin/
[9] https://xsightlabs.com/switches/x2