Sandia boffins let three AI agents loose in the lab. Science, not chaos, ensued
(2026/01/26)
- Reference: 1769436012
- News link: https://www.theregister.co.uk/2026/01/26/sandia_ai_agents_feature/
- Source link:
Boffins at the Department of Energy's Sandia National Labs are working to develop cheap, power-efficient LEDs to replace lasers. One day, they let a trio of AI assistants loose in their lab.
Five hours later, the bots had churned through more than 300 tests and uncovered a novel approach for steering LED light that is four times better than methods the researchers developed using their own wetware.
The work, detailed in a paper published in the journal [1]Nature Communications, underscores how AI agents are changing the way scientists work.
"We are one of the leading examples of how a self-driving lab could be set up to aid and augment human knowledge," Sandia researcher Prasad Iyer said in a recent [2]blog post.
The experiment builds on a [3]2023 paper in which Iyer and his team demonstrated a method for steering LED light that has applications in everything from autonomous vehicles to holographic projectors. The trick was finding the right combination of parameters to steer the light in the desired manner, a process researchers expected to take years.
To speed this process up, Iyer enlisted the help of his colleague Saaketh Desai to develop a series of artificially intelligent lab assistants.
Unlike most AI agents, the lab's tools aren't just making API calls to third-party models. Instead, the team developed a trio of domain-specific models based on well-established machine-learning algorithms.
"We didn't do any LLMs. There is significant interest in that. There are lots of people trying those ideas out, but I think they're still in the exploratory phase," Desai told El Reg.
As it turned out, the researchers didn't need them. "We used a simpler model called a variational autoencoder (VAE). This model was established in 2013. It's one of the early generative models," Desai said.
By sticking with domain-specific models based on more mature architectures, Sandia also avoided hallucinations – the errors that arise when AI makes stuff up – which have become one of the biggest headaches associated with deploying generative AI.
"Hallucinations were not that big a concern here because we build a generative model that is tailored for this very specific task," Desai explained.
The first of these models utilized a VAE architecture, a type of model commonly used to generate images before diffusion models came on the scene in 2015. That model pre-processed the lab's data sets.
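The paper's actual model code isn't reproduced here, but the VAE mechanics Desai describes are standard. Below is a minimal numpy sketch of a single VAE forward pass with the reparameterization trick and an ELBO-style loss; the dimensions, weights, and data are placeholders, not anything from the Sandia setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 8 measurement features compressed to a 2-D latent space.
# These numbers are illustrative, not taken from the paper.
D_IN, D_LAT = 8, 2

# Randomly initialised encoder/decoder weights (a real model would train these).
W_enc_mu = rng.normal(0, 0.1, (D_IN, D_LAT))
W_enc_lv = rng.normal(0, 0.1, (D_IN, D_LAT))   # log-variance head
W_dec    = rng.normal(0, 0.1, (D_LAT, D_IN))

def vae_forward(x):
    """One VAE pass: encode, reparameterise, decode, return reconstruction + loss."""
    mu      = x @ W_enc_mu                       # latent mean
    log_var = x @ W_enc_lv                       # latent log-variance
    eps     = rng.normal(size=mu.shape)
    z       = mu + np.exp(0.5 * log_var) * eps   # reparameterisation trick
    x_hat   = z @ W_dec                          # reconstruction
    recon   = np.mean((x - x_hat) ** 2)          # reconstruction error
    # KL divergence between q(z|x) and the standard-normal prior
    kl = -0.5 * np.mean(1 + log_var - mu**2 - np.exp(log_var))
    return x_hat, recon + kl

x = rng.normal(size=(16, D_IN))                  # a fake batch of measurements
x_hat, loss = vae_forward(x)
```

Training would then minimize that combined loss by gradient descent; the compressed latent space is what makes a VAE useful as a pre-processing stage.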
Researchers then fed the outputs of that model into a second model that was connected directly to the optical equipment used to conduct the experiments.
This active-learning model is based on a Bayesian optimization algorithm, and was responsible for generating and running each experiment, then analyzing the results. The process ran in a closed loop, with the models repeatedly refining the experiments.
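The closed loop described above can be sketched with a toy Bayesian optimization round-trip. Everything here is an assumption for illustration: the objective function stands in for the physical experiment, the Gaussian-process surrogate and upper-confidence-bound acquisition are one common choice, not necessarily the team's, and the single tunable parameter is made up.

```python
import numpy as np

def objective(x):
    """Stand-in for the physical experiment: beam quality vs one parameter.
    The real loop would drive the optical bench instead."""
    return -(x - 0.3) ** 2

def gp_posterior(X, y, Xs, length=0.1, noise=1e-4):
    """Gaussian-process posterior mean/variance with an RBF kernel."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * length**2))
    K    = k(X, X) + noise * np.eye(len(X))
    Ks   = k(X, Xs)
    Kinv = np.linalg.inv(K)
    mu   = Ks.T @ Kinv @ y
    var  = 1.0 - np.einsum('ij,ji->i', Ks.T @ Kinv, Ks)
    return mu, np.maximum(var, 1e-12)

candidates = np.linspace(0.0, 1.0, 201)
X = np.array([0.05, 0.95])                # two seed experiments
y = objective(X)

for _ in range(15):                       # closed loop: propose, run, refine
    mu, var = gp_posterior(X, y, candidates)
    ucb = mu + 2.0 * np.sqrt(var)         # upper-confidence-bound acquisition
    x_next = candidates[np.argmax(ucb)]   # most promising untried setting
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))   # "run the experiment"

best = X[np.argmax(y)]
```

Fifteen loop iterations are enough here for the surrogate to home in on the true optimum near 0.3 — the same propose-measure-refine cycle, at larger scale, is what let the bots burn through 300-plus tests in five hours.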
However, it wasn't enough to know which combination of parameters produces the best results. The real science is in uncovering why that particular configuration works at all.
The team therefore added a third model to the loop to act, essentially, as a fact-checker. Researchers tasked this simple feed-forward neural network with deriving a formula for the generated data, which could later be used to verify results.
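A feed-forward network fitting a formula to experimental data is, at bottom, nonlinear regression. The sketch below trains a one-hidden-layer numpy network with hand-rolled backpropagation on a made-up target (y = x²) standing in for the lab's parameter-to-performance relationship; the architecture and training details are illustrative guesses, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: recover y = x^2 from samples. The real target function
# (beam-steering performance vs parameters) is the paper's, not shown here.
X = rng.uniform(-1, 1, (256, 1))
y = X ** 2

# One-hidden-layer feed-forward network, trained with plain gradient descent.
W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss_before = np.mean((pred0 - y) ** 2)

lr = 0.05
for _ in range(2000):
    h, pred = forward(X)
    g   = 2 * (pred - y) / len(X)         # d(MSE)/d(pred)
    gW2 = h.T @ g
    gb2 = g.sum(0)
    gh  = g @ W2.T * (1 - h ** 2)         # back through tanh
    gW1 = X.T @ gh
    gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss_after = np.mean((pred - y) ** 2)
```

Once the network fits the data well, its predictions can be checked against fresh measurements — the "fact-checking" role: if reality diverges from the learned formula, something in the earlier loop is suspect.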
And while many AI models are trained on hundreds of thousands of GPUs, the team managed to do all of this using comparatively modest hardware in the form of a Lambda Labs workstation equipped with three RTX A6000 graphics cards.
Together these models not only resulted in a speed-up in testing, but also surfaced approaches to LED beam steering the researchers hadn't previously considered.
While the research focused on the application of AI to steering light emitted by LEDs, Desai believes the underlying approach may have applications in materials design for things like alloys or printable electronics.
For other scientists interested in replicating this kind of "self-driving lab," Desai says it's important to have equipment that's tightly integrated into the model framework.
"There's progress and development there, but there's still a long way to go in terms of making sure that the tools that are in the lab, the physical tools, are able to interact with these models," he said. "If you're using a piece of equipment from 1975 you're already in a tough place to start."
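The integration problem Desai flags is largely an interface problem: the model loop needs a uniform way to configure an instrument and read a measurement back. The sketch below is a hypothetical contract for that, not anything from Sandia's framework — the class names, the `phase` parameter, and the fake driver are all invented for illustration.

```python
import random
from typing import Protocol

class Instrument(Protocol):
    """Minimal contract a bench tool must satisfy to join the loop."""
    def configure(self, params: dict) -> None: ...
    def measure(self) -> float: ...

class FakeBeamSteerer:
    """Software stand-in. A real driver would talk to the optics over
    whatever bus the hardware exposes - the 1975-vintage gear Desai
    mentions is exactly the kind that lacks such an interface."""
    def configure(self, params: dict) -> None:
        self._params = params

    def measure(self) -> float:
        # Pretend performance peaks at phase = 0.3, with a little noise.
        return -abs(self._params["phase"] - 0.3) + random.gauss(0, 0.01)

def run_experiment(instrument: Instrument, params: dict) -> float:
    """One iteration of the loop's 'run it on hardware' step."""
    instrument.configure(params)
    return instrument.measure()

result = run_experiment(FakeBeamSteerer(), {"phase": 0.25})
```

Any optimizer that speaks this small interface can then drive real or simulated hardware interchangeably, which is what makes swapping a simulator for the bench (or vice versa) cheap.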
As for the models themselves, he emphasizes the importance of skepticism. "If you're going into more advanced architectures and machine learning — transformer based LLM and what not — I would say my advice is to be really skeptical of what it gives you." ®
[1] https://www.nature.com/articles/s41467-025-66916-0
[2] https://newsreleases.sandia.gov/physicists-employ-ai-labmates-to-supercharge-led-light-control/?thunderberg_uid=489
[3] https://newsreleases.sandia.gov/steering_light/
ComputerSays_noAbsolutelyNo
Furthermore, and importantly, they applied their own intelligence in the setup.
So, the polar opposite of some vibey "hey AI, do that" prompt and hoping for the best.
Jason Bloomberg
Calling it "AI" is probably a trick to get spotlight and money - and I say, good for them.
Solid science, then using computer tools and modelling to achieve what no scientist or engineer could do in their lifetime, was what used to get called "AI" before Bullshit Generators entered the scene.
I am all for going back to that definition even if it isn't strictly AI.
> Instead, the team developed a trio of domain-specific models based on well-established machine-learning algorithms.
In other words, this isn't the thing that launched the current AI hype. This is old and trusted tech. Similar methods have been used successfully for decades. Calling it "AI" is probably a trick to get spotlight and money - and I say, good for them.