HPC won't be an x86 monoculture forever – and it's starting to show
(2025/11/27)
- Reference: 1764235811
- News link: https://www.theregister.co.uk/2025/11/27/arm_riscv_hpc/
- Source link:
Feature Remember when high-performance computing always seemed to be about x86? Exactly a decade ago, almost nine in ten supercomputers in the TOP500 (the twice-yearly list of the beefiest machines, maintained by academics) were Intel-based. Today, it's down to 57 percent.
Intel might once have ruled the HPC roost but its influence is waning. Today, other processors are making significant inroads.
Supercomputing development has evolved in waves since Cray pioneered vector processors (which excelled at applying a single operation across large data sets) in the mid-1970s.
Later came reduced instruction set computer (RISC) architectures with chips like the 64-bit DEC Alpha, IBM POWER, Sun/Fujitsu SPARC, SGI MIPS, and HP PA-RISC. Each offered distinct performance characteristics. Their simpler instruction sets made for fast instruction decoding and pipelining, and served more general-purpose use cases than vector-based systems.
The coming of the commodity cluster
The problem for RISC was economic. Chips manufactured in smaller volumes cost far more than commodity chips like x86. NASA realized this and began using Intel chips for its Beowulf supercomputing clusters as far back as 1994. The Beowulf work proved that running cheap commodity chips in parallel could approach or match specialized hardware on performance while slashing costs.
Intel's ASCI Red followed that work in 1997, becoming the first machine to break the teraFLOPS barrier, using 9,152 Pentium Pro processors originally designed for workstations.
Intel gained traction, but GPUs have become increasingly important. Nvidia's 2006 CUDA launch transformed graphics processors into general-purpose computing machines with dramatic speedups for parallel data workloads.
"It's that AI trend and what's going on with hyperscale that really opens up the opportunity for architectures on the CPU side beyond x86," says Addison Snell, CEO at market analyst Intersect360 Research. "A large, high-growth portion of the market is chasing the accelerators, mostly the GPUs from Nvidia, and that really is driving a lot of the architecture."
However, those GPUs still need CPUs to handle part of the workload.
That CPU-side load includes job scheduling, workflow management, I/O, and scalar operations that don't parallelize well. "For example, taking an average of numbers, right? A GPU can't do that any faster than an Arm chip or an x86 chip," explains Karl Freund, founder and principal analyst at Cambrian-AI Research. "So when you finish a layer and you then want to do an average across the nodes, yeah, just let Arm do it."
x86 chips, whether from Intel or AMD, grew quickly to outpace RISC chips in the market, increasingly working alongside GPUs to do the heavy parallel lifting. For example, 2012 saw Oak Ridge's Titan supercomputer top the TOP500 list by pairing AMD Opterons with Nvidia K20 GPUs across 18,688 nodes for 17.6 petaFLOPS.
Nvidia's domination of the GPU space in HPC stems from its complete and tightly integrated stack, spanning hardware and software.
"The bigger advantage that Nvidia has is on the software side," says Snell's colleague, Steve Conway, senior analyst at Intersect360 Research. "They made, very early on, an investment in their software to manage this monster called CUDA."
That tech stack is the company's true moat, he says. Nvidia has built it wide and deep, investing in its adoption by current commercial developers and by the coming generations of developers in the universities.
AMD's HPC play
AMD shows considerable promise on both the CPU and GPU sides. Its EPYC architecture, which targets servers and embedded systems, helped drive Oak Ridge back to the top spot in 2022 with Frontier, containing 9,472 of its CPUs along with 37,888 AMD Instinct GPUs (its datacenter GPU brand).
The company's Milan, Genoa, and Turin EPYC generations have progressively increased chip density, driving it to further big wins. November saw the El Capitan supercomputer at Lawrence Livermore National Laboratory (LLNL) retain its top spot, sporting an AMD EPYC and Instinct combo.
Simon McIntosh-Smith, director of the Bristol Centre for Supercomputing, sees great promise in AMD. "AMD is increasingly viable. The hardware is really good, in the same sort of ballpark as Nvidia. Where they've traditionally not been as strong is on the software side," he says, calling for more investment there.
Arm's patient path from mobile to exascale
While AMD has gained considerable traction over Intel within the x86 HPC market, Arm is mounting a strong challenge from outside it. The Mont-Blanc project, started by the Barcelona Supercomputing Center in 2011, provided European validation of the Arm architecture, using embedded Arm chips in experimental clusters. It was among the first experiments with Arm in HPC machines.
Almost a decade later came Fugaku, a 2020 deployment at Japan's Riken Center for Computational Science and arguably Arm's biggest achievement in the field. This 442 petaFLOPS monster used 48-core A64FX processors to take TOP500's top spot.
A year later, in 2021, Arm brought vector processing to its Neoverse datacenter processor designs with its Neoverse V1 CPU, which featured the Scalable Vector Extension (SVE).
A big strategic foothold in the HPC space for Arm came with its Nvidia partnership. Announced in 2021, it led to the creation of Grace, Nvidia's Arm-based CPU, which the company paired with its Hopper GPU to create the Grace Hopper Superchip.
Over 40 supercomputer projects announced their support for Grace Hopper, including Germany's Jupiter system, which just became [6]Europe's first exascale system at 1 exaFLOPS.
Studies also point to high energy efficiency for Arm chips. For example, a 2023 benchmarking exercise on AI systems found energy savings of around 25 to 30 percent when running on Arm rather than on comparable x86 chips.
The Bristol Centre for Supercomputing also opted for the Arm architecture, beginning with its first Isambard supercomputer in 2018. Its Isambard-AI successor, now the UK's largest supercomputer, is built on more than 5,500 Nvidia Grace Hopper nodes.
Nvidia looks set to develop its own CPU architecture. The company has a 20-year IP licensing arrangement with Arm and has already indicated that it will build its own cores using that IP, which could see a departure from off-the-shelf Neoverse cores.
The open architecture proposition
While Arm is making great gains today, there are other contenders on the horizon. One of them is RISC-V, whose licensing strategy is a radical departure from Arm's: the instruction set is given away. Conceived at the University of California, Berkeley, it's an open instruction set architecture with no licensing fees at all.
That's a huge advantage, says John Leidel, chief scientist and founder of Tactical Computing Labs (TCL). The Cray and Silicon Graphics veteran has a history in software development and hardware design. He now runs a small R&D firm specializing in novel hardware and software for HPC and high-performance data analytics.
"If you were to take an x86 processor and you wanted to customize it for a given scientific application, you would need to license that from Intel," he says. "And then go through a very arduous process that costs billions of dollars."
The same goes for Arm processors, of course. But licensing isn't RISC-V's only advantage over x86 in particular, he says: that venerable architecture carries a lot of baggage.
"x86 is a legacy architecture that by definition has to support every legacy instruction that the x86 processor has ever had," Leidel points out. That application written in 1989 to run someone's desktop accounting system still has to run on the same modern x86 chips that sit inside a TOP500 machine.
"RISC-V backed away from that standard. They said this is absolutely insane," he explains. "Why don't we do a clean from-scratch design, clear the slate, clear the room, clear the whiteboard, and do things right from the get-go?"
The idea behind RISC-V is to provide a baseline instruction set and then allow people to build their own optional extensions on top of it, he says. That way, they can build custom chips tailored for their own unique applications.
McIntosh-Smith isn't convinced. There's a reason that you pay for an Arm license, he points out, and a lot of it has to do with more advanced tooling.
"The quality and performance of the free implementations is not equivalent to, say, a top-end Arm core that you would find in an Apple device or any of the clouds," he explains. "The things in the open source are not going to be competitive state-of-the-art. They'll be textbook-style good enough, but not really competitive."
He also points to testing and verification suites, which take decades of investment. "You don't get that for free with RISC-V," he says. By the time you've developed all that stuff yourself, the advantage of a free open system might fade.
European initiatives and sovereignty
But there is another advantage to RISC-V that Etienne Walter is eager to talk about. He's director of the European Processor Initiative (EPI), which launched in 2018 to develop HPC technology using RISC-V for accelerators. The initiative has 27 partners across 10 countries.
It pursued a dual-architecture strategy: Arm for general-purpose processors and RISC-V for specialized accelerators, the latter built around the vector extensions in the RISC-V instruction set architecture. The EPI taped out functional RISC-V accelerator test chips in 2021.
Along with the vector accelerator, which came from research at the Barcelona Supercomputing Center, the EPI also worked on variable precision acceleration and tensor accelerators.
The EPI is now winding up, handing the baton over to the Digital Autonomy with RISC-V in Europe (DARE) project, which launched in March. It has a €240 million budget across 38 partners from 13 countries.
Coordinated by the Barcelona Supercomputing Center, the initiative is currently set to extend through 2030. It will develop a general-purpose processor, a vector accelerator, and an AI processing unit.
Why bother with all this? A quick look at US foreign policy is perhaps reason enough. Sovereignty is becoming increasingly important as political and economic ties unravel.
"That's the point for us. We have to keep in mind this concern and have some potential solutions just in case," Walter says, "even if we know Europe is not at the same level as in the US and we do not cover the same level of expertise and solutions."
Conway sympathizes with regional governments that understand HPC will become increasingly important for economic development, and who consequently don't want to find themselves beholden to a foreign power. But there are nuances. It's hard for him to imagine complete HPC sovereignty.
"You're reliant on lithium from China or somewhere else, you're reliant on the advanced lithography stuff from the Netherlands," he says. "Even the US, in that sense, is not sovereign at the processor level. They talk about it in every country, as if that were a reasonable goal, but it probably isn't in the short run."
It took Arm around a decade to stand up a robust example of supercomputing with its chip designs. Announcing a 64-bit architecture in 2011 wasn't enough; it needed the right software stack and verification ecosystem.
Now RISC-V must do the same. "The ecosystem is not here yet or not as mature, for sure," says Walter. "There is still a lot of work to do to have a stable and mature environment, but I have no doubt that this may happen at the end. It's a matter of time."
How much time? DARE's first stage, SGA-1, is shooting for "a fully European supercomputing hardware/software stack for HPC and AI" within three years. Then it has to persuade people to use it.
Snell is cautiously optimistic. "I think RISC-V does have a lot of potential over the next five years," he says. "We see it as being just a little bit behind where Arm was, and it could really use a champion who's going to carry it in."
There is some forward movement for RISC-V. In October, Meta acquired RISC-V startup Rivos. This would give Meta, which relies on third parties for its silicon, an in-house CUDA-compatible hybrid CPU-GPU RISC-V architecture. Meta has reportedly also been working on its own RISC-V chips internally.
HPC processors have been through a cycle, beginning with a diverse range of proprietary chips that thinned out in the commodity chip era. Today, things seem to be going the other way again. There are several key players, including some waiting in the wings. There are hyperscalers that are markets unto themselves and are doing interesting things. Microsoft has Maia, AWS has Inferentia and Trainium, Google has its TPU, and they're all custom ASICs.
Looking further, things get even more weird and wonderful. Cerebras has wafer-scale engines that bypass interconnect bottlenecks by keeping everything on a single die. Then there are silicon photonics projects designed to cut power consumption with optical compute interconnects directly on the chip.
With so much money at stake, the tide turns slowly in the HPC space. But with so many interesting options now, and more in the wings, it's unlikely to be an x86 world forever. ®
[6] https://www.theregister.com/2025/11/17/europe_jupiter_supercomputer/