Bill Gates-backed startup aims to revive Moore's Law with optical transistors
- News link: https://www.theregister.co.uk/2026/01/24/neurophos_hopes_to_revive_moores_law/
Neurophos is among those trying to upend Moore's Law and make good on analog computing's long-promised yet largely untapped potential.
The Austin, Texas-based AI chip startup says it's developing an optical processing unit (OPU) that in theory is capable of delivering 470 petaFLOPS of FP4 / INT4 compute — about 10x that of Nvidia's newly unveiled Rubin GPUs — while using roughly the same amount of power.
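As a rough cross-check of that "about 10x" claim (the Rubin figure here is our assumption drawn from Nvidia's public disclosures, roughly 50 petaFLOPS of FP4 inference per Rubin GPU, not a number from this article):

```python
# Hedged sanity check of the "about 10x Rubin" claim.
# rubin_fp4_pflops is an outside assumption (~50 petaFLOPS FP4 per
# Rubin GPU, per Nvidia's public disclosures), not from this article.
neurophos_pflops = 470
rubin_fp4_pflops = 50

ratio = neurophos_pflops / rubin_fp4_pflops
print(f"{ratio:.1f}x")  # 9.4x -- consistent with "about 10x"
```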
Neurophos CEO Patrick Bowen tells El Reg this is possible in part because of the micron-scale metamaterial optical modulators, essentially photonic transistors, that the company has spent the past several years developing.
"The equivalent of the optical transistor that you get from Silicon Photonics factories today is massive. It's like 2 mm long. You just can't fit enough of them on chip in order to get a compute density that remotely competes with digital CMOS today," he explained.
Neurophos' optical transistors, Bowen says, are roughly 10,000x smaller. "We got our first silicon back in May demonstrating that we could do that with a standard CMOS process, which means it's compatible with existing foundry technologies."
Using these transistors, Neurophos claims to have developed the optical equivalent of a tensor core. "On chip, there is a single photonic tensor core that is 1,000 by 1,000 [processing elements] in size," he said.
This is quite a bit bigger than what's typically seen in most AI accelerators and GPUs, which employ matrix multiplication engines that are at most 256x256 processing elements in size.
However, rather than having dozens or even hundreds of these tensor cores, like we see in Nvidia's GPUs, Neurophos only needs one. Bowen tells us the tensor core on its first-gen accelerator will occupy roughly 25 mm².
The rest of the reticle-sized chip is "the boondoggle of what it takes to support this insane tensor core," Bowen said.
Specifically, Neurophos needs a whack-ton of vector processing units and SRAM to keep the tensor core from starving for data. This is because the tensor core itself — and yes, again, there'll only be one of them on the entire reticle-sized die — is operating at around 56 gigahertz.
But because the matrix-matrix multiplication is done optically, Bowen notes that the only power consumed by the tensor core is what's needed to drive the opto-electrical conversion from digital to analog and back again.
Neurophos says its first OPU, codenamed the [6]Tulkas T100, will feature a dual-reticle design equipped with 768 GB of HBM that's capable of 470 petaOPS while consuming 1 to 2 kilowatts of power under load.
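The numbers above invite a quick back-of-envelope check. This is a sketch under our own assumption, not the article's, that each processing element performs one multiply-accumulate (two ops) per clock cycle:

```python
# Back-of-envelope peak throughput for the claimed photonic tensor core.
# Assumption (ours, not the article's): each processing element does one
# multiply-accumulate, i.e. 2 ops, per clock cycle.
pes = 1000 * 1000          # 1,000 x 1,000 processing elements
ops_per_pe_per_cycle = 2   # one MAC = 1 multiply + 1 add
clock_hz = 56e9            # ~56 GHz, per the article

per_core_ops = pes * ops_per_pe_per_cycle * clock_hz
print(f"per core: {per_core_ops / 1e15:.0f} petaOPS")          # 112 petaOPS
print(f"dual reticle: {2 * per_core_ops / 1e15:.0f} petaOPS")  # 224 petaOPS
```

That lands well short of the quoted 470 petaOPS, so either each PE does more than one MAC per cycle or other factors the article doesn't spell out are in play; the sketch only shows the rough scale a 1,000-by-1,000 array at 56 GHz implies.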
As impressive as all this sounds, it's important to remember that these figures are more like goal posts at this point. The chip is still in active development with full production not expected to begin until mid-2028. Even then, Bowen doesn't expect it to ship in large volumes. "We're talking thousands of chips. Not tens of thousands of chips."
While Neurophos believes its optical tensor cores can address a broad array of AI inference workloads, it expects its first chip will be used primarily as a prefill processor.
As we've previously [7]discussed , LLM inference can be broken into two phases: a compute intensive prefill stage in which input tokens are processed, and a memory bandwidth bound stage in which output tokens are generated.
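The distinction above can be sketched in a toy NumPy example (all shapes invented for illustration): prefill pushes many tokens through each weight matrix at once, so arithmetic dominates; decode re-reads the same weights for every single generated token, so memory traffic dominates.

```python
import numpy as np

# Toy illustration of the two LLM inference phases. d is a made-up
# hidden size, W stands in for one weight matrix of a model layer.
d = 512
W = np.random.randn(d, d).astype(np.float32)

# Prefill: all n input tokens hit the weights in one large matmul,
# so ops per byte of weights read is high -> compute-bound.
n_input = 2048
prompt = np.random.randn(n_input, d).astype(np.float32)
prefill_out = prompt @ W          # (2048, 512) @ (512, 512)

# Decode: one token per step, a matrix-vector product, yet the whole
# weight matrix must still be read -> memory-bandwidth-bound.
token = np.random.randn(1, d).astype(np.float32)
decode_out = token @ W            # (1, 512) @ (512, 512)

# Arithmetic intensity (ops per byte of weights read) in each phase:
bytes_W = W.nbytes
prefill_intensity = (2 * n_input * d * d) / bytes_W   # 1024 ops/byte
decode_intensity = (2 * 1 * d * d) / bytes_W          # 0.5 ops/byte
```

With these made-up shapes, prefill performs roughly 2,000x more arithmetic per byte of weights fetched than decode, which is why a raw-compute monster like the T100 fits the prefill role first.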
Over the past year or so, we've seen chip designers like Nvidia disaggregate prefill and decode into separate pools of GPUs. For its latest generation of GPUs, Nvidia has developed a dedicated prefill accelerator that it calls Rubin CPX.
Bowen envisions the Tulkas T100 filling a similar role as Rubin CPX. "The current vision, which is subject to change, is basically we would put one rack of ours, which is 256 of our chips, and that would be paired with something like an NVL576 rack," he said.
Long-term, Bowen aims to tackle the decode phase as well, but notes that a variety of technologies, including co-packaged optics, will need to be developed before the startup is ready to take on token generation.
While the Tulkas T100 won't ship until at least 2028, Bowen says the company is actively working on a proof of concept (PoC) chip to validate the compute and power densities it's claiming.
This week, Neurophos [12]completed a $110 million Series A funding round led by Gates Frontier, with participation from Microsoft's venture fund and other investors, which Bowen says will fund development of this PoC. ®
[6] https://www.neurophos.com/product
[7] https://www.theregister.com/2025/09/10/nvidia_rubin_cpx/
[12] https://www.neurophos.com/110m-raise
reticle
I hadn't encountered this word outside crosswords, where it turns up as an alternative to "reticule", so the article's reference to a lady's-handbag-sized die was a little confusing.
[1]Wikichip: mask explains it.
The Latin root means a net (as in the retiarius, a type of gladiator), which makes visual sense in the context of an IC die.
Hopefully the photonics will advance in spite of the likelihood that the bubble will have burst before 2028.
[1] https://en.wikichip.org/wiki/mask
Bill Gates, you say?
I say it's another "pump & dump" scheme, which may or may not get AI-grade speculators to invest before it fizzles away (after a couple of years of "we're nearly there, just give us some more money!")
Re: Bill Gates, you say?
The AI bubble will have burst by the time this is due to ship. So if it's really vapourware, they just have to ride that wave until it crashes.
Micron scale ... transistors
That's what stood out to me. Perhaps the previous best were mm-sized, but micron scale is like 40 years ago for regular CMOS. Today is what, 5 nm-ish? Square the ratio and that comes out to roughly 40,000x fewer transistors per sq mm. I think it's great that they're shrinking optical devices and doing it on a regular fab line. However, that 40,000x density loss will be hard to compensate for in compute.
Re: Micron scale ... transistors
Got to start somewhere.
Running at 56 GHz with reduced power consumption will make up for a lot of the increase in size.
It's also certain that the size can be reduced in future - both TTL and CMOS started much larger and rapidly shrank as the processes were refined. Same will happen here, unless it dies on the vine when the LLM bubble bursts.
Epstein
Article about Bill Gates without mention of Epstein. Come on.
"We're talking thousands of chips. Not tens of thousands of chips."
It's a bit of a shame that one thing we've learned the hard way is that the best way to get really good, repeatable quality and reliability in a product is high-volume automated production.