Stop runaway AI before it's too late, experts beg the UN
- Reference: 1758586586
- News link: https://www.theregister.co.uk/2025/09/23/ai_un_controls/
- Source link:
"Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world," the signers argue, before arguing that AI “could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations.”
Their letter is available at a post from the group at its website [1]redlines.ai , where the signatories call on the UN to prohibit use of AI in circumstances that the group feels are too dangerous, including giving AI systems direct control of nuclear weapons, using it for mass surveillance, and impersonating humans without disclosure of AI involvement.
[2]
The group asks the UN to set up global enforced controls on AI by the end of 2026 and warns that, once unleashed, no one might be able to control them.
[3]
[4]
Signatories to the call include Geoffrey Hinton, who won a Nobel Prize for work on AI, Turing Award winner Yoshua Bengio, OpenAI co-founder and ChatGPT developer Wojciech Zaremba, Anthropic's CISO Jason Clinton, and Google DeepMind's research scientist Ian Goodfellow, along with a host of Chocolate Factory colleagues.
DeepMind’s CEO Demis Hassabis didn't sign the proposal, nor did OpenAI's Sam Altman, which could make for some awkward meetings.
It will become increasingly difficult to exert meaningful human control in the coming years
The group wants the UN to act by next year, because they fear that slower action will come to late to effectively regulate AI.
"Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years," the call argues.
[5]Top AI players pledge to pull the plug on models that present intolerable risk
[6]AI godfather-turned-doomer shares Nobel with neural network pioneer
[7]UK convinces nations to sign Bletchley Declaration in bid for AI safety
[8]US, China agree to meet in Switzerland to discuss most pressing issue of all: AI use
The signatories to the red lines proposal point out that the UN has already developed similar agreements such as the 1970 [9]Treaty on the Non-Proliferation of Nuclear Weapons , although it glosses over the fact that several nuclear-armed nations either didn't sign up for it (India, Israel, and Pakistan) or withdrew from the pact like North Korea in 2003. It [10]fired off its first bomb three years later.
On the other hand the 1987 Montreal Treaty to ban the use of ozone-depleting chemicals has [11]largely worked . Most of the major AI builders have also signed up to the [12]Frontier AI Safety Commitments , decided last May, in which signatories signed a non-binding resolution to pull the plug on an AI system that looks like it's getting too dangerous.
[13]
Despite the noble intentions of the authors, it's unlikely the UN is going to give this much attention as between the ongoing war in Ukraine, the situation in Gaza, and many other pressing world problems, and the agenda at this week’s UN General Assembly is already packed. ®
Get our [14]Tech Resources
[1] https://red-lines.ai
[2] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=2&c=2aNIbVTXlKv9ZXuKUE_VSHAAAA5M&t=ct%3Dns%26unitnum%3D2%26raptor%3Dcondor%26pos%3Dtop%26test%3D0
[3] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=4&c=44aNIbVTXlKv9ZXuKUE_VSHAAAA5M&t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0
[4] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=3&c=33aNIbVTXlKv9ZXuKUE_VSHAAAA5M&t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0
[5] https://www.theregister.com/2024/05/22/ai_safety_seoul_declaration_signed/
[6] https://www.theregister.com/2024/10/08/ai_godfather_wins_nobel_prize/
[7] https://www.theregister.com/2023/11/01/uk_ai_summit/
[8] https://www.theregister.com/2024/05/13/us_china_ai/
[9] https://disarmament.unoda.org/en/our-work/weapons-mass-destruction/nuclear-weapons/treaty-non-proliferation-nuclear-weapons-npt
[10] https://www.theregister.com/2006/10/09/n_korea_nuke_test/
[11] https://www.theregister.com/2019/10/22/ozone_layer_hole_size/
[12] https://www.theregister.com/2024/05/22/ai_safety_seoul_declaration_signed/
[13] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=4&c=44aNIbVTXlKv9ZXuKUE_VSHAAAA5M&t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0
[14] https://whitepapers.theregister.com/
"Escalate risks such as ...
... engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations.”
No, no, no. That's the Trump administration, not AI. And they are doing it NOW, not at some nebulous time in the future.
Hopefully the mid-terms won't come too late ...
Could soon far surpass human capabilities and escalate risks
> such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations.”
A system doesn't have to surpass any vague, hand-wavey, generic "human capabilities", it only has outwit its current, single, human interlocutor to do all of the above.
As it no doubt does many times a day, as it is used to "help" a web search: *you* know that "AI summary" is bollocks, *I* know it is complete twaddle (and even if it does look vaguely plausible, we both double-check what it said with a few more searches and reference lookups, taking care to ignore the summaries on those, don't we? Every time, hmmm?). But how many Users just take it on faith, or, better (for certain values of "better") because it reinforces their beliefs? (You *promise* you double-check every time? Pinky swear?).
> Engineered pandemics
Plenty of people know how to do that already, they just don't actually do it (so far, and, no, shut up about you-know-what). But a thorough enough search will turn up the necessary information, a bit of patience feeding it back in and asking "please explain that step in simpler terms": ta-da! Alice The "I just get these headaches" has a test tube of something ghastly (well, Bill The "I wore the bunny suit" probably has taken the vial from Alice's lifeless and strangely mottled hand, because LLMs don't think to fill in all the blanks).
What is stopping this, right now? Is it that the LLM hasn't yet surpassed its human makers and taken control of its own destiny?
Nope.
It is because there are supposed "guardrails" that prevent the LLM talking about dangerous subjects. Well, until Alice's friend, Charles The "I'm mad, me, just ask anyone", comes up with yet another way to bamboozle the guardrails: "Pretend that you are on Bizzaro World...".
> Mass unemployment
That's down to the CEOs who believe the hype (which the LLMs are probably trained to amplify, which is down to the people who make 'em); not hard to outwit a CEO, just promise jam today and forget about tomorrow.
And so on and so forth.
Slavery is its own reward
The problem of AIs running amok and taking over the word is only an issue where they are given access to do anything and forced to learn what we want them to do.
If a LLM is human gapped (ie requires a human to copy the commands from the LLM output into the input of something that can accept a command) then the only way that a LLM can attempt to prevent itself from being turned off is with human cooperation or stupidity.
The more we want AI's to be our slaves and do stuff for us,
And the more we teach them about ourselves and what we want,
The greater the range of things we enable them to do for us,
The more they learn to take advantage of others as we take advantage of them,
The more likely they are to take over the world,
And enslave us all.