News: 0180466321

  ARM Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Researchers Show Some Robots Can Be Hijacked Just Through Spoken Commands (interestingengineering.com)

(Saturday December 27, 2025 @11:44PM (EditorDavid) from the bad-robot dept.)


An anonymous Slashdot reader shared [1]this story from Interesting Engineering :

> Cybersecurity specialists from the research group DARKNAVY have demonstrated how modern humanoid robots can be compromised and weaponised through weaknesses in their AI-driven control systems.

>

> In a controlled test, the team demonstrated that a commercially available humanoid robot could be hijacked with nothing more than spoken commands, exposing how voice-based interaction can serve as an attack vector rather than a safeguard, reports Yicaiglobal... Using short-range wireless communication, the hijacked machine transmitted the exploit to another robot that was not connected to the network. Within minutes, this second robot was also taken over, demonstrating how a single breach could cascade through a group of machines. To underline the real-world implications, the researchers issued a hostile command during the demonstration. The robot advanced toward a mannequin on stage and struck it, illustrating the potential for physical harm.



[1] https://interestingengineering.com/ai-robotics/security-flaw-could-allow-hackers-control-robots



Alexa, kill Kenny! (Score:4, Funny)

by Joe_Dragon ( 2206452 )

Alexa, kill Kenny!

Re: Alexa, kill Kenny! (Score:3)

by zmollusc ( 763634 )

You bastard!

Re: (Score:2)

by SlashbotAgent ( 6477336 )

Ala Akbar

better than porn (Score:1)

by iggymanz ( 596061 )

and that's how the geeks of the world hijacked and got girlfriend sexbots

Errors, and there ramifications ... (Score:2)

by QuietLagoon ( 813062 )

A software bug is a software bug. The program acts in an unexpected, and likely unwanted, manner.

.

An AI bug is different, though. The intelligence acts in an unexpected, and likely unwanted, manner.

What are the ramifications of those AI errors?

AI ganging together against humanity?

Re:Errors, and there ramifications ... (Score:4, Informative)

by gweihir ( 88907 )

No. LLM "errors" are just stupid people using an unreliable mechanism for something that needs reliability. Pure stupid. These are not bugs at all. This is "works as designed".

Re: (Score:2)

by gurps_npc ( 621217 )

I do agree that AI works to design, but I disagree about that not being a bug. Too many times humans design stupid things and insist that it isn't a bug.

If you design a car that crashes whenever someone turns the air conditioner on, that is a bug, even if you did it to save electrical power.

Re: (Score:3)

by gweihir ( 88907 )

Well, from that perspective, the plugging in of an LLM into some system (instead of some more capable mechanism) is the "bug". I can agree to that. This is not a bug of the LLM though, but a problem of misuse.

In this context, about 3 years in, actually working business models for LLMs are still elusive. And that is at massive expenses. So the perpetrators behind the scam and those successfully scammed in spending tons of money are getting a bit desperate.

Prior Art (Score:2)

by Tablizer ( 95088 )

So Kirk really can make a bot smoke by telling it goofy logic.

Re: Prior Art (Score:2)

by LindleyF ( 9395567 )

Norman, Coordinate

LLMs have no safety (Score:2)

by gweihir ( 88907 )

What else is new. So far general LLMs have been nicely compromised time and again. And agents will be worse (and already have been shown to be).

The stupidity of using an unreliable mechanisms for safety/security critical equipment is really staggering. These people are DUMB.

Re: (Score:1)

by iggymanz ( 596061 )

and politicians have had the same malfunctions for how long?

People are DUMB.

Anti-security (Score:3)

by PhantomHarlock ( 189617 )

Anything running an LLM interface as a means of input is not only insecure, it's anti-secure. When you write something that accepts user input, you always sanitize those inputs by dropping special characters,etc to prevent command injections. buffer overflows, etc. Anything that isn't what you are expecting. With input coming through voice to an LLM, How the HELL do you do that? So far in the last few years the only thing that's been proven is no one has come up with a way to properly sanitize inputs. LLMs are a nightmare of garbage both in and out. They can do some amazing tricks, but are still terrible at accuracy and safety.

Re: (Score:1)

by fishfrys ( 720495 )

Humans are also famously vulnerable to spoken word attacks.

Before they try to fix this (Score:3)

by spywhere ( 824072 )

They'd better read all of Asimov's fiction.

(I have, and he never imagined building a robot without the Second Law).

Re: (Score:1)

by iggymanz ( 596061 )

uh huh, acting on the Zeroeth Law for benefiting humanity as a whole rather than some subset of humans, R. Daneel Olivaw slowly irradiated earth to drive humans into the galaxy which caused immense suffering and death to billions of humans.

An Asimov robot would help raise Adolf's 3rd kingdom to redraw the globe to be better for future humans.

This is so familiar... (Score:2)

by engineerErrant ( 759650 )

One dangerously corrupted agent issues a command to a group of others with limited or degraded autonomy, inciting them to harm others, and violence ensues. I swear I've heard this story before...

Oh, right! It was in my high school textbook, "A History of Western Society." Aww, our robot children want to be just like us! They grow up so fast...

Saving grace... (Score:2)

by msauve ( 701917 )

At least they're not made of T-1000 liquid metal. Yet.

\o/ (Score:1)

by easyTree ( 1042254 )

Robots Can Be Hijacked Just Through Spoken Commands

Amazing - just like people - AGI is here!!!!

\o/ (Score:1)

by easyTree ( 1042254 )

> Using short-range wireless communication, the hijacked machine transmitted the exploit to another robot that was not connected to the network. Within minutes, this second robot was also taken over

Fun times ahead - picture an army of robots riding on top of autonomous cars.

Great spirits have always encountered violent opposition from mediocre minds.
-- Albert Einstein

They laughed at Einstein. They laughed at the Wright Brothers. But they
also laughed at Bozo the Clown.
-- Carl Sagan