

China Drafts World's Strictest Rules To End AI-Encouraged Suicide, Violence (arstechnica.com)

(Monday December 29, 2025 @05:40PM (BeauHD) from the safety-first dept.)


An anonymous reader quotes a report from Ars Technica:

> China [1]drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence. China's Cyberspace Administration [2]proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or "other means" to simulate engaging human conversation. Winston Ma, an adjunct professor at NYU School of Law, [3]told CNBC that the "planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics" at a time when companion bot usage is rising globally.

>

> [...] Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register -- the guardian would be notified if suicide or self-harm is discussed. Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as from attempting to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. Also banned are what are termed "emotional traps" -- chatbots would additionally be prevented from misleading users into making "unreasonable decisions," a translation of the rules indicates.

>

> Perhaps most troubling to AI developers, China's rules would also put an end to building chatbots that "induce addiction and dependence as design goals." [...] AI developers will also likely balk at annual safety tests and audits that China wants to require for any service or products exceeding 1 million registered users or more than 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, as China also plans to require AI developers to make it easier to report complaints and feedback. Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China. That could mess with AI firms' hopes for global dominance, as China's market is key to promoting companion bots, Business Research Insights reported earlier this month.



[1] https://arstechnica.com/tech-policy/2025/12/china-drafts-worlds-strictest-rules-to-end-ai-encouraged-suicide-violence/

[2] https://www.cac.gov.cn/2025-12/27/c_1768571207311996.htm

[3] https://www.cnbc.com/2025/12/29/china-ai-chatbot-rules-emotional-influence-suicide-gambling-zai-minimax-talkie-xingye-zhipu.html



How many suicides? (Score:3)

by DrMrLordX ( 559371 )

Have there been widespread suicides in China exacerbated by the usage of LLM chatbots?

Re: (Score:3)

by rwrife ( 712064 )

Probably something close to zero, but all they need to do is show that a suicidal person used an LLM for anything and it'll take the blame.

Re:How many suicides? (Score:4, Interesting)

by gweihir ( 88907 )

There likely have been suicides. The 996 stupidity alone will see to that. Restricting LLMs may just be to show "something is being done" (western governments like that move too...), or there may be a real connection, or it may be because LLMs usually report facts, and some of those facts do not look too good for dear leader and his party and politics. My money is on the last one as most likely, as there have been some stories about that happening.

And as soon as you have any kind of monitoring infrastructure in place (in the West this is done just the same, by pushing lies and FUD), you can use that infrastructure nicely for mass surveillance. Many politicians and authoritarian assholes really loooooove that. Cannot have people have privacy, can we? They may think THINGS! Or even do THINGS!

Re: How many suicides? (Score:1)

by iggymanz ( 596061 )

Only in people with months or years of neglect, including by parents and teachers. An AI is not to blame, but our society enables the abandonment of personal responsibility.

Re: (Score:2)

by ffkom ( 3519199 )

> Have there been widespread suicides in China exacerbated by the usage of LLM chatbots?

Doesn't matter -- what matters is establishing technology and processes that can then also be used to prevent any form of dissent from the ruling party line. Just as the prevention of a few rare crimes is used in the West as a pretext for curtailing freedom.

Missed opportunity? (Score:3)

by Ritz_Just_Ritz ( 883997 )

The CCP could use AI to predict those suicides and get a prison surgeon there in time to harvest the organs. Waste not, want not.

That's how marketing works, though. (Score:3)

by SeaFox ( 739806 )

If AI isn't allowed to emotionally manipulate people, how will the glorious AI-ad-filled future be realized in the world's fastest-growing consumer market?

Sounds like actually good rules (Score:3)

by gweihir ( 88907 )

Sure, it is China, so they likely do not want their own propaganda countered. But apart from that, these rules do make a lot of sense. The AI pushers have aggressively built all known manipulation techniques into LLM chatbots, and that is not good at all.
