AI Hallucinations Lead To a New Cyber Threat: Slopsquatting
- Reference: 0177080143
- News link: https://it.slashdot.org/story/25/04/22/0118200/ai-hallucinations-lead-to-a-new-cyber-threat-slopsquatting
- Source link: [1] https://www.csoonline.com/article/3961304/ai-hallucinations-lead-to-new-cyber-threat-slopsquatting.html
> Slopsquatting, as researchers are calling it, is a term first coined by Seth Larson, a security developer-in-residence at the Python Software Foundation (PSF), for its resemblance to the typosquatting technique. Instead of relying on a user's mistake, as in typosquats, threat actors rely on an AI model's mistake. A significant share of the packages recommended in test samples, 19.7% (205,000 packages), were found to be fakes. Open-source models -- like DeepSeek and WizardCoder -- hallucinated more frequently, at 21.7% on average, compared to commercial ones like GPT-4 (5.2%). Researchers found CodeLlama (hallucinating in over a third of its outputs) to be the worst offender, and GPT-4 Turbo (just 3.59% hallucinations) to be the best performer.
>
> These package hallucinations are particularly dangerous as they were found to be persistent, repetitive, and believable. When researchers reran 500 prompts that had previously produced hallucinated packages, 43% of hallucinations reappeared every time in 10 successive re-runs, with 58% of them appearing in more than one run. The study concluded that this persistence indicates "that the majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts." This increases their value to attackers, it added. Additionally, these hallucinated package names were observed to be "semantically convincing." Thirty-eight percent of them had moderate string similarity to real packages, suggesting a similar naming structure. "Only 13% of hallucinations were simple off-by-one typos," Socket added.
The research can be found in a paper [2] on arXiv.org (PDF); a quick sketch of the pre-install check its findings imply follows the references below.
[1] https://www.csoonline.com/article/3961304/ai-hallucinations-lead-to-new-cyber-threat-slopsquatting.html
[2] https://arxiv.org/pdf/2406.10279
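The mechanics the summary describes are simple: an LLM invents a plausible package name, a developer pip-installs it unchecked, and an attacker who has pre-registered that name gets code execution. Below is a minimal sketch in Python of a pre-install check against PyPI's public JSON API (https://pypi.org/pypi/<name>/json, which returns 404 for unknown packages); the candidate names are made-up stand-ins for LLM suggestions, not packages from the study.

```python
import urllib.error
import urllib.request

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint

def exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows the package, False on a 404."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # rate limiting or outages need a human, not a guess

# Hypothetical candidates as an LLM might emit them (invented for illustration).
for pkg in ["requests", "requsts", "flask-jwt-helper"]:
    status = "exists" if exists_on_pypi(pkg) else "NOT on PyPI"
    print(f"{pkg}: {status}")
```

Keep in mind this only catches names nobody has registered yet; once an attacker claims a hallucinated name, it resolves like any other package, which is exactly the point of the attack.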
god damn it (Score:2, Funny)
This is what happened when EZ Pass made tollbooths obsolete. The morons of society could no longer work in tollbooths, so somehow they wormed their way into tech companies. Remember when you needed a degree from Stanford, MIT, Berkeley, Caltech, or another first-rate university (or at least a very impressive resume) to get a job at a major tech company?
Prior Art (Score:5, Informative)
[1] https://it.slashdot.org/story/... [slashdot.org]
[1] https://it.slashdot.org/story/24/03/30/1744209/ai-hallucinated-a-dependency-so-a-cybersecurity-researcher-built-it-as-proof-of-concept-malware
Pfft. Hallucinations. I swear by librwnj (Score:2)
And you can too!
and yet, we're told that AI will... (Score:3)
be doing all our coding in the future. "Who needs programmers anymore" seems to be the new mantra in the corporate corner offices.
It was bad enough when incompetent human programmers used unallocated memory or freed memory they were still using, but now we'll get to see the effects of "AI hallucinations"... oh, joy ...
What could POSSIBLY go wrong? go wrong? go wrong? go wrong?...
Re: (Score:2)
I'd imagine that the folks who looked after horses in New York City in the late 1800s looked with similar disdain on those limited, buggy, undependable new automobiles. The difference is that these tools are improving far faster than the automobile did. They had several decades to come to terms with it. We don't.
Sounds like a William Gibson subplot (Score:2)
I swear, the future is weird as hell.
Rehashing attack vectors (Score:2)
This is a known supply-chain attack... but now they've added the label "AI". Someone must get paid per advisory.
Re: (Score:2)
It's a "known supply chain attack" that is specifically applicable to LLMs (AI), since LLMs seem to have a pattern in their hallucinated packages.
I wonder if you can get paid per stupid comment.
This is bottom feeding (Score:2)
This isn't going to get major corporations who have internal AI. This is going to get the startup who has no real coders. Serves them right, I guess.
Funny Nerd Names (Score:1)
Remembering some package names I came across, I think one of the root problems is that everyone is trying to come up with an obscure, in-joke, oh-so-clever name for the extra nerd credz or "teh lulz"...
Vet your dependencies. (Score:5, Insightful)
You have to do your research and make sure the packages you are importing are legit. This is true whether or not the package was recommended by an AI.
I guess sloth IS a risk. Vibe coders may get into the habit of just trusting whatever the LLM churns out. Could be a problem. But either way, it's still on you.
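In that vein, here is a minimal vetting sketch against PyPI's public JSON metadata. The red-flag thresholds (under ~90 days old, two or fewer releases, no project links) are arbitrary illustrations, and vet_package is a hypothetical helper, not an established tool.

```python
import json
import urllib.request
from datetime import datetime, timezone

def vet_package(name: str) -> list[str]:
    """Collect cheap red flags from PyPI metadata for a candidate dependency."""
    with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
        data = json.load(resp)

    flags = []
    releases = data.get("releases", {})

    # Age of the earliest upload; brand-new packages deserve extra suspicion.
    uploads = [f["upload_time_iso_8601"]
               for files in releases.values() for f in files]
    if uploads:
        first = datetime.fromisoformat(min(uploads).replace("Z", "+00:00"))
        if (datetime.now(timezone.utc) - first).days < 90:  # arbitrary cutoff
            flags.append("first release is less than 90 days old")

    if len(releases) <= 2:  # arbitrary cutoff
        flags.append("very few releases")

    if not data["info"].get("project_urls"):
        flags.append("no homepage/repository links")

    return flags

print(vet_package("requests"))  # long-established package: expect []
```

Passing these heuristics proves nothing on its own; it just filters out the laziest squats before you do the real work of reading the code and checking who maintains it.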
Re:Vet your dependencies. (Score:5, Insightful)
I like how people have to buy tokens to receive the Wisdom of Superhuman Coding Overlord AIs that repeatedly tell them to use the same fake packages every time, but it's always the people who are responsible for following the bad advice they paid for.
It's a great business model! Risk free! How can I invest in it?
Re: (Score:2)
You could do that or you could use a service to scan your repo for you (even non-vibe coders can do it).
Re: (Score:2)
I feel like you maybe didn't read that article.
This isn't about spoofing your module as someone else's; it's about creating an entirely novel one, under a name that LLMs commonly hallucinate.
The bad actors that create such a thing would love it if you verified cryptographic signatures. Wouldn't want you downloading someone else's malware, after all.
Re: Vet your dependencies. (Score:2)
In software development the trend has been to just use lots of open-source dependencies, with people frowning at you if you write your own instead. But at my last job I suddenly saw the opposite trend: people acknowledged the downsides of depending on third-party dependencies.