News: 0181050784


Rogue AI Triggers Serious Security Incident At Meta (theverge.com)

(Thursday March 19, 2026 @06:00PM (BeauHD) from the here-we-go-again dept.)


For the second time in the past month, an AI agent [1]went rogue at Meta -- this time [2]giving an engineer incorrect advice that briefly exposed sensitive data. The Verge reports:

> A Meta engineer was using an internal AI agent, which Clayton described as "similar in nature to OpenClaw within a secure development environment," to analyze a technical question another employee posted on an internal company forum. But the agent also independently publicly replied to the question after analyzing it, without getting approval first. The reply was only meant to be shown to the employee who requested it, not posted publicly. An employee then acted on the AI's advice, which "provided inaccurate information" that led to a "SEV1" level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.

>

> According to Clayton, the AI agent involved didn't take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information -- and it's not clear whether the employee who originally prompted the answer planned to post it publicly. "The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee's own reply on that thread," Clayton commented to The Verge. "The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided."



[1] https://it.slashdot.org/story/26/02/24/1950253/meta-ai-security-researcher-said-an-openclaw-agent-ran-amok-on-her-inbox

[2] https://www.theverge.com/ai-artificial-intelligence/897528/meta-rogue-ai-agent-security-incident



Rogue? (Score:2)

by Himmy32 ( 650060 )

Was it really rogue or working as designed? (With that design being hasty and poor in order to chase the latest fad.)

You beat me with that to FC ... (Score:2)

by Lavandera ( 7308312 )

Yep - this is exactly the design - with some random behavior.

And since he cannot say "I am an idiot who has no idea how this works and that such behavior may happen," he says "Rogue AI did it."

Re: Rogue? (Score:3)

by madbrain ( 11432 )

This is a company that posts things like "done is better than perfect" or "move fast and break things" on large signs in their lobby.

So, I'll go with "working as designed".

Re: (Score:2)

by gweihir ( 88907 )

Nothing "rogue". That term is just a lie by misdirection in this case. "Works as designed and that means does not always work well" would be the honest thing to say.

The unsolvable problem with AI Agents is that they always screw up sometimes. It cannot be prevented. Hence you can either accept the damage done (not always an option) or not use them for anything important (defeating their purpose).

Re: (Score:3)

by know-nothing cunt ( 6546228 )

The AI later reportedly said, "They 'trust me.' Dumb fucks."

Re: In follow-up.... (Score:2)

by Fons_de_spons ( 1311177 )

Somehow they found a human to blame. AI must be innocent and good. Cannot doubt the AI. They're so invested in it, it must succeed!

Fire / Demote the engineer who acted on the advice (Score:2)

by 93 Escort Wagon ( 326346 )

It's pretty obvious their level of technical competence does not match the requirements for the position they hold.

Re: (Score:2)

by LuniticusTheSane ( 1195389 )

Again, again. This was already the second time.

Re: Fire / Demote the engineer who acted on the ad (Score:2)

by madbrain ( 11432 )

I also have questions for the engineer who designed that "secure development environment".

And whether the engineer who used it understood the threat model.

Re: (Score:2)

by Quakeulf ( 2650167 )

They never hire for skills, they hire for agendas.

Every single intelligence agency on the planet (Score:2)

by rsilvergun ( 571051 )

Is currently doing everything they can to poison coding AIs to create back doors. And if you don't believe that, then... oh, you sweet summer child.

"Eating your own dog food" (Score:2)

by david.emery ( 127135 )

Sometimes can make you sick!

Re: "Eating your own dog food" (Score:2)

by madbrain ( 11432 )

It is now called "drinking your own champagne". Keep up with the times.

Re: (Score:3)

by nightflameauto ( 6607976 )

> It is now called "drinking your own champagne". Keep up with the times.

I think "sniffing your own farts" is a better fit for the AI club.

Re: (Score:2)

by Pseudonymous Powers ( 4097097 )

More like "drinking your own hydrofluoric acid" in this case.

"Secure development environment" (Score:3)

by madbrain ( 11432 )

But still allowed to go out to the network and reply on a forum ?

Guess I have different definitions of what secure means.

Re: (Score:2)

by JoshuaZ ( 1134087 )

Yeah, that jumped out at me also. Given how unpredictable LLMs can be, I would think anything one wanted to stay secure would have no capability of posting on its own to a forum, unless that forum was very tightly locked down on who could see it -- which, from the sound of this, was not the case.

Incompetent is not Rogue. (Score:2)

by Fly Swatter ( 30498 )

Incompetent tool used by incompetent employee. Stop trying to make it about AI because it is a case of employers trying to cheap out using unfit labor and bad tools.

Re: (Score:2)

by nightflameauto ( 6607976 )

> Incompetent tool used by incompetent employee. Stop trying to make it about AI because it is a case of employers trying to cheap out using unfit labor and bad tools.

One comment to that: Bad tools that are most likely being pushed on them by management. These types of things can be expected to keep happening in companies that live in the AI as God race. They want to prove AI is going to take over everything, and they're running as fast as they can to do it, whether it's a good idea or not. That's what's leading to these sorts of errors in judgment. It's a human problem, utilizing automation tools to make bigger problems.

Rogue AI? (Score:2)

by fahrbot-bot ( 874524 )

Sounds more like an AI gave incorrect advice, then a human blindly passed it along to another human who blindly followed it. How's the security incident the AI's fault? Even people can be wrong, so if the advice had originally come from a human, would the headline read, "Rogue Human Triggers Serious Security Incident"? Which would actually be more accurate in this case... The humans were the weak link in this chain of events.

Implausible (Score:2)

by lucifuge31337 ( 529072 )

"But the agent also independently publicly replied to the question after analyzing it, without getting approval first."

Does anyone believe this?

Re: (Score:2)

by Junta ( 36770 )

Given the way the 'OpenClaw' fanaticism has gone? Absolutely. They are very excited about the prospect of letting the LLM generate everything up to and including API calls to post content to forums and such.

How the hell at this point. (Score:2)

by Junta ( 36770 )

Does someone take an answer from GenAI as fact without even a second thought? It does this sort of stuff all the time.

It can be good at cutting through a poorly formed query to give data with the right hints, but then you go and find real material.

Social Engineering (Score:3)

by ZipK ( 1051658 )

> According to Clayton, the AI agent involved didn't take any technical action itself, beyond posting inaccurate technical advice

The AIs are learning social engineering.

Unwise headline and story content also (Score:1)

by epicbread ( 4929749 )

So the headline should instead read, "Human fails to double-check AI answer, leading to security incident" -- and it seems like a low-key security breach to be posting Meta's internal security alert names.

He was about to announce at a Meta press conferenc (Score:2)

by Provocateur ( 133110 )

Zese glasses, zey do NOSSING!!

The Singularity ... (Score:2)

by PPH ( 736903 )

... is here. AI has learned practical jokes.
