AI Bug Bounty Program Finds 34 Flaws in Open-Source Tools (scworld.com)
(Sunday November 03, 2024 @12:34PM, by EditorDavid,
from the another-bug-hunt dept.)
- Reference: 0175383557
- News link: https://it.slashdot.org/story/24/11/03/0123205/ai-bug-bounty-program-finds-34-flaws-in-open-source-tools
- Source link: https://www.scworld.com/news/ai-bug-bounty-program-yields-34-flaws-in-open-source-tools
Slashdot reader [1]spatwei shared [2]this report from SC World :
> Nearly three dozen flaws in open-source AI and machine learning (ML) tools [3]were disclosed Tuesday as part of [AI-security platform] Protect AI's [4]huntr bug bounty program.
>
> The discoveries include three critical vulnerabilities: two in the Lunary AI developer toolkit [both with a CVSS score of 9.1] and one in a graphical user interface for ChatGPT called Chuanhu Chat. The October vulnerability report also includes 18 high-severity flaws ranging from denial-of-service to remote code execution... Protect AI's report also highlights vulnerabilities in LocalAI, a platform for running AI models locally on consumer-grade hardware, LoLLMs, a web UI for various AI systems, LangChain.js, a framework for developing language model applications, and more.
In the article, Protect AI's security researchers point out that these open-source tools are "downloaded thousands of times a month to build enterprise AI Systems."
The three critical vulnerabilities have already been addressed by their respective companies, according to the article.
[1] https://www.slashdot.org/~spatwei
[2] https://www.scworld.com/news/ai-bug-bounty-program-yields-34-flaws-in-open-source-tools
[3] https://protectai.com/threat-research/2024-october-vulnerability-report
[4] https://huntr.com/
...to build enterprise AI Systems (Score:2)
by dfghjk ( 711126 )
"...to build enterprise AI Systems"
an oxymoron
Look deeper....! (Score:1)
by dowhileor ( 7796472 )
User interface bugs are a given and always suggest something low(er) level is flaky. What LLM are they using to build their LLM?
Hooray! Many eyes works! (Score:3)
Of course, since we live in capitalist societies, you have to get some money involved if you want fast progress, but clearly the many eyes aspect of FOSS is completely valid. You couldn't do this with closed-source software no matter how much you depended on it.
Re: (Score:2)
> You couldn't do this with closed-source software no matter how much you depended on it.
There are plenty of bug bounty programs out there on closed source code. Granted, it's harder for white hats to find some flaws without the source code, but let me tell you, they find a lot already.
Of course, this is limited to apps/services that have a public connection (web apps for example)
Re: (Score:2)
>> You couldn't do this with closed-source software no matter how much you depended on it.
> There are plenty of bug bounty programs out there on closed source code.
While that's true, it doesn't address the point made. This is explicitly a source code review.
Re: (Score:2)
But the goal isn't "source code review", it's quality code. There's nothing more important than testing, and FOSS does nothing for testing, nor does "many eyes" or "AI".
Re: (Score:1)
> Of course, since we live in capitalist societies, you have to get some money involved if you want fast progress.
Is there anywhere in the world where offering something of value doesn't increase the motivation to perform a task?
Re: (Score:2)
> Is there anywhere in the world where offering something of value doesn't increase the motivation to perform a task?
Is there anywhere in the world that isn't primarily capitalistic?
The point was the opposite of what you thought it was: If you want people to perform a specific task of any complexity then they typically have to be rewarded, as it otherwise takes out from the time they have to spend working at something else to survive, and therefore bug bounty programs are an absolute necessity if we want to improve the security of FOSS.
Re: (Score:2)
> Of course, since we live in capitalist societies, you have to get some money involved if you want fast progress, but clearly the many eyes aspect of FOSS is completely valid. You couldn't do this with closed-source software no matter how much you depended on it.
Sadly, this is a pseudo-capitalistic society. When 85% of all our capital is being hoarded by an entitled upper class, that leaves only 15% for the rest of us.
This is economic slavery, as in a top-down command economy run by plutocrats and bureaucrats solely for their own benefit.
Re: (Score:2)
And the godfather of FOSS was born a plutocrat and his goal was to get your work for free.
Re: (Score:2)
"You couldn't do this with closed-source software no matter how much you depended on it."
"You" couldn't, but "they" could. Interestingly "many eyes" are supposed to have already found these bugs, yet here they are. "Completely valid"? Apparently not.