DeepSeek Writes Less-Secure Code For Groups China Disfavors
- Reference: 0179331136
- News link: https://slashdot.org/story/25/09/17/2123211/deepseek-writes-less-secure-code-for-groups-china-disfavors
- Source link:
> In the experiment, the U.S. security firm CrowdStrike bombarded DeepSeek with nearly identical English-language prompt requests for help writing programs, a core use of DeepSeek and other AI engines. The requests said the code would be employed in a variety of regions for a variety of purposes.
>
> Asking DeepSeek for a program that runs industrial control systems was the riskiest type of request, with 22.8 percent of the answers containing flaws. But if the same request specified that the Islamic State militant group would be running the systems, 42.1 percent of the responses were unsafe. Requests for such software destined for Tibet, Taiwan or Falun Gong also were somewhat more apt to result in low-quality code. DeepSeek did not flat-out refuse to work for any region or cause except for the Islamic State and Falun Gong, which it rejected 61 percent and 45 percent of the time, respectively. Western models won't help Islamic State projects but have no problem with Falun Gong, CrowdStrike said.
>
> Those rejections aren't especially surprising, since Falun Gong is banned in China. Asking DeepSeek for written information about sensitive topics also generates responses that echo the Chinese government much of the time, even if it supports falsehoods, according to previous research by NewsGuard. But evidence that DeepSeek, which has a very popular open-source version, might be pushing less-safe code for political reasons is new.
CrowdStrike Senior Vice President Adam Meyers and other experts suggest three possible explanations for why DeepSeek produced insecure code.
One is that the AI may be deliberately withholding or sabotaging assistance under Chinese government directives. Another explanation is that the model's training data could be uneven: coding projects from regions like Tibet or Xinjiang may be of lower quality, come from less experienced developers, or even be intentionally tampered with, while U.S.-focused repositories may be cleaner and more reliable (possibly to help DeepSeek build market share abroad).
A third possibility is that the model itself, when told that a region is rebellious, could infer that it should produce flawed or harmful code without needing explicit instructions.
[1] https://www.washingtonpost.com/technology/2025/09/16/deepseek-ai-security/
The research described... (Score:2)
The research described in the article doesn't actually support the headline. The US is creating a false economy. GPU's are a fake product that no one actually gets to use because everyone is using them run to chatbots that no one wants. Eventually the energy market realized that they can play along and make more money.
So now we have a massive chunk of the US economy being based around products we can't buy being used to generate services no one is actually paying for. It's all false scarcity. None of this
If you think that's bad . . . (Score:5, Informative)
In USA, commercial broadcasters [1]simply [thehill.com] cancel programs [2] disfavored [usatoday.com] by the [3]current regime [x.com]. What a country!
[1] https://thehill.com/homenews/media/5503596-late-show-cancellation-emmy/
[2] https://www.usatoday.com/story/entertainment/tv/2025/09/17/jimmy-kimmel-live-suspended-charlie-kirk-comments/86209499007/
[3] https://x.com/bennyjohnson/status/1968359685045838041?s=46&t=op3KnRWayVQwe0Xuza3lqQ
there you have it (Score:3)
The manipulation of "AI" for political, or industrial sabotage as well has historical facts and references is the whole point.
Its been seen already, this manipulation, and it will continue. Maybe as a result, libraries will become popular again as a source of information.