Artificial brains could point the way to ultra-efficient supercomputers
- Reference: 1768002124
- News link: https://www.theregister.co.uk/2026/01/09/artificial_brains_supercomputer/
Running on around 20 watts, the human brain is able to process vast quantities of sensory information from our environment without interrupting consciousness. For decades, researchers have been trying to replicate these processes in silicon, in what is commonly referred to as neuromorphic computing.
Sandia has been at the center of much of this research. The lab has deployed numerous neuromorphic systems from the likes of Intel, SpiNNaker, and IBM over the past several years.
Much of the research around these systems has focused on things like artificial intelligence and machine learning. But as it turns out, these brain-inspired chips are much more versatile.
The brain is performing complex computations even if we don't realize it, researchers James Aimone and Brad Theilman explained in a recent Sandia news release [4].
"Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball. These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply," Aimone explained.
In a paper recently published [6] in the journal Nature Machine Intelligence, the boffins at Sandia demonstrated a novel algorithm for efficiently solving a class of problems called partial differential equations (PDEs) on neuromorphic computers, including Intel's Loihi 2 neurochips.
PDEs are at the heart of some of the most complex scientific computing workloads today. They're used to model all manner of phenomena including electrostatic forces between molecules, the flow of water through a turbine, and the way radio frequencies propagate through buildings, the researchers explain.
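For readers who want a concrete picture, a textbook example (not one taken from the Sandia paper) is the electrostatics case: the electric potential produced by a distribution of charge obeys Poisson's equation.

```latex
% Poisson's equation for the electrostatic potential \varphi
% generated by a charge density \rho (standard textbook form)
\nabla^{2} \varphi(\mathbf{x}) = -\frac{\rho(\mathbf{x})}{\varepsilon_{0}}
```

Numerical methods such as the finite element method turn equations like this into very large systems of linear equations, and grinding through those systems is where most of the supercomputer time goes.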
These problems can be incredibly computationally demanding, often requiring the full grunt of modern supercomputers to solve. Neuromorphic computing presents a potential alternative that promises to be far more efficient if it can be made to scale reliably.
While still in their infancy, neuromorphic computers have already demonstrated strong efficiency gains over conventional CPU- and GPU-based systems. The Intel Loihi 2 chips deployed in Sandia's Hala Point and Oheo Gulch systems are reportedly capable of delivering 15 TOPS per watt, around 2.5x the efficiency of modern GPUs like Nvidia's Blackwell chips.
Newer systems, such as the SpiNNaker2-based machine deployed at Sandia last summer, tout even greater efficiency, claiming 18x higher performance per watt than modern GPUs.
As exciting as that might sound, the in-memory compute architecture inherent to neuromorphics is notoriously difficult to program, often requiring researchers to invent new algorithms for existing processes.
Here, the researchers developed an algorithm called NeuroFEM, which maps the finite element method (FEM) commonly used to solve PDEs onto spiking neuromorphic hardware. Perhaps more importantly, the research wasn't just theoretical. That said, as we understand it, the PDEs being solved are intended more as a proof of concept than as a demonstration of neuromorphic superiority.
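To give a sense of what FEM actually does, here is a minimal sketch in Python: it assembles and solves a 1D Poisson problem with linear elements using NumPy on a CPU. To be clear, this is not NeuroFEM or anything from the paper; it only illustrates the kind of linear-system solve that FEM boils down to, which is the workload the Sandia algorithm moves onto spiking hardware.

```python
import numpy as np

def fem_poisson_1d(n_elements=100, f=lambda x: np.sin(np.pi * x)):
    """Assemble and solve -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0."""
    n_nodes = n_elements + 1
    h = 1.0 / n_elements                    # uniform element size
    x = np.linspace(0.0, 1.0, n_nodes)      # node coordinates

    # Element-by-element assembly of the stiffness matrix K and load vector b
    K = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    k_local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # linear elements
    for e in range(n_elements):
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += k_local
        x_mid = 0.5 * (x[e] + x[e + 1])     # one-point (midpoint) quadrature
        b[idx] += f(x_mid) * h / 2.0

    # Impose the boundary conditions and solve the linear system -- the solve is
    # the expensive step that real FEM codes spend most of their time on
    u = np.zeros(n_nodes)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])
    return x, u

if __name__ == "__main__":
    x, u = fem_poisson_1d()
    # The exact solution is sin(pi * x) / pi^2, which peaks at roughly 0.101
    print(round(float(u.max()), 4))
```

A production scientific code does the same thing in three dimensions with millions or billions of unknowns and a sparse, distributed solver, which is why these problems quickly grow to supercomputer scale.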
The researchers were able to solve PDEs using actual neuromorphic hardware, specifically, Intel's Oheo Gulch system, which features 32 of its Loihi 2 neurochips.
In testing, the lab demonstrated near-ideal strong scaling, meaning that each time the core count was doubled, the time to solution was roughly halved. This scaling isn't immune to Amdahl's law, which describes the limit on how much a workload can be sped up by parallelization, but in testing NeuroFEM was still shown to be 99 percent parallelizable.
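For context, Amdahl's law in its standard textbook form bounds the speedup S achievable on n cores when a fraction p of the work can run in parallel. The worked numbers below are ours, plugging in the 99 percent figure quoted above; they are not results reported in the paper.

```latex
% Amdahl's law: maximum speedup S on n cores for parallel fraction p
S(n) = \frac{1}{(1 - p) + p/n}
% With p = 0.99:  S(32) \approx 24.4, and S(n) \to 100 as n \to \infty
```

In other words, even at 99 percent parallelizable, the serial 1 percent eventually caps the speedup at 100x, which is why squeezing out that last fraction matters so much at scale.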
What's more, the paper's authors argue the algorithm mitigates many of the programmability problems with neuromorphic systems.
"An important benefit of this approach is that it enables direct use of neuromorphic hardware on a broad class of numerical applications with almost no additional work for the user," they wrote. "The user friendliness of spiking neuromorphic hardware has long been recognized as a serious limitation to broader adoption and our results directly mitigate this problem."
The researchers speculate that moving to an analog neuromorphic system (Loihi 2 is still a digital computer) could allow even more complex PDEs to be solved faster while using less power.
With that said, neuromorphics may not be the only path forward. Researchers are increasingly exploring ways to use machine learning and generative AI surrogate models to accelerate conventional HPC problems.
"It remains an open question whether neuromorphic hardware can outperform GPUs on deep neural networks, which have largely evolved to benefit from GPUs' single instruction, multiple data architecture," the researchers wrote. ®
[4] https://newsreleases.sandia.gov/nature-inspired-computers-are-shockingly-good-at-math/
[6] https://www.nature.com/articles/s42256-025-01143-2
If the Human Brain
... is so good at solving partial differential equations, why do they give so many people difficulties in their maths classes?
Speaking of hardware/problem fit, it's always bugged me that fad-followers did so much to destroy and suppress analogue computers, when they are an excellent fit, and a better one than digital computers, for certain classes of problems and simulations.
My uni had an old, multi-rack analogue computer in the E.E. department, but I had no idea how to work it; I saw no docs for it in that room.