
AI Pioneers Call for Protections Against 'Catastrophic Risks' (nytimes.com)

(Monday September 16, 2024 @05:25PM (msmash) from the moral-obligations dept.)


AI pioneers have issued a stark warning about the technology's potential risks, [1]calling for urgent global oversight . At a recent meeting in Venice, scientists from around the world discussed the need for a coordinated international response to AI safety concerns. The group proposed establishing national AI safety authorities to monitor and register AI systems, which would collaborate to define red flags such as self-replication or intentional deception capabilities. The report adds:

> Scientists from the United States, China, Britain, Singapore, Canada and elsewhere signed the statement. Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China's top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several of China's leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.



[1] https://www.nytimes.com/2024/09/16/business/china-ai-safety.html



It's all lip service (Score:2)

by nehumanuscrede ( 624750 )

When it comes down to:

1) Profits

2) National Security

3) Gaining any sort of advantage

There really are no rules. Especially when the winner of the race in question stands to gain so much.

Human beings might agree to things publicly but, in practice, it's all a facade.

They will quickly throw all morals, ethics and safety concerns to the wind to obtain what they want.

Re: (Score:2)

by Seven Spirals ( 4924941 )

I've seen some interviews with AI CEOs and other "luminaries" at various AI companies. When they talk about the first person to get to AGI, they've used analogies like "the first fisherman to haul in a genie." They think that whoever gets there first is going to have huge material advantages over the rest of humanity. They are all Ivy League grads; maybe they are right. I really wonder about the high-level defections from OpenAI and others and what the real game plan is. Some have ove

Catastrophic Risk (Score:2)

by Drethon ( 1445051 )

Great, so have they provided examples of what catastrophic risks are, so we can actually figure out what we are preventing?

I don't have an NYT subscription.

Simple Red Line (Score:2)

by Roger W Moore ( 538166 )

I don't have an NYT subscription either, but apparently a javascript blocker is just as good. The article makes no mention of what those risks are, but it does suggest some "red lines" that at least governments should be notified about. The specific examples were an AI that can copy itself (which seems a very low bar given how easy that is to achieve) and an AI that can deliberately deceive its creators, which seems a very vaguely defined bar since "deliberately" presupposes free will, which is not well defined given

Lock 'm up (Score:2)

by Njovich ( 553857 )

They say they pose a risk to humanity and want us to take action? Are they sure? Because we got just the place for them.

Manual override is missing from all datacenters (Score:2)

by internet-redstar ( 552612 )

The real existential threat is that all datacenters are hard for humans to get into and very easy for super-intelligent AI with zero-day intrusion capabilities to take over. It is very easy to create manual overrides to turn the redundant power and redundant connectivity off manually - without any digital tools - if we want to. But nobody seems to realise the importance of this manual override.

Squirrels and backhoes (Score:2)

by JBMcB ( 73720 )

As long as squirrels and backhoes exist, datacenters are vulnerable.

Re: (Score:2)

by awwshit ( 6214476 )

The power company transformers sit outside the building. A couple of pickup trucks driving into them takes out the entire datacenter.

Re: (Score:2)

by timelorde ( 7880 )

Until they start embedding nuclear reactors inside the datacenter.

Re: (Score:2)

by awwshit ( 6214476 )

And that is the best reason not to fear AI... AI as we know it does not scale.

Catastrophic risk? I'll believe it when I see it (Score:1)

by Mes ( 124637 )

They warned us of catastrophic risks on environmental destruction.

They warned us of catastrophic risks on global warming.

They're warning us of catastrophic risks on AI.

We're still here, everything is just fine. Wake me up when the Earth is actually being destroyed, then I can worry about it.

Re: (Score:2)

by gtall ( 79522 )

So apparently incremental increasing damage to the environment and problems caused by global warming do not rate high enough for you to care about. The only risk you'd probably acknowledge is an asteroid the size of Mars taking dead aim on us.

The basic problem for you is that you might have to contribute in a small way to stopping the increasing damage. That is too mundane for you, beneath your perceived station in life. Your kids and grandkids will be so proud of your decisions when they are trying to survive

Re: (Score:2)

by Rinnon ( 1474161 )

> Wake me up when the Earth is actually being destroyed, then I can worry about it.

Uhhh, the point of a "warning" is to do something about it BEFORE the Earth is actually being destroyed. I understand your real point is that these warnings reek of hyperbole, and I would tend to agree; but, ignoring the warning entirely, or declaring it bunk, seems like throwing the baby out with the bathwater.

Catastrophic Bullshit (Score:2)

by awwshit ( 6214476 )

What a load of bullshit. Is humanity really so fragile?

Beautiful (Score:1)

by sirv ( 4898197 )

A beautiful ending for the human race: to be devoured by AI. Remember, we were always just a bootloader.

Isn't it ironic... Alanis M. (Score:1)

by SlashTex ( 10502574 )

I'm too cheap or lazy to get past the NYT paywall. Here is a just-released and accepted paper, "Government Interventions to Avert Future Catastrophic AI Risks" (Yoshua Bengio, 2024), if you are interested.

[1]https://assets.pubpub.org/j0fp... [pubpub.org]

I've believed since pre-covid that one of the largest risks is using AI to tailor bio-weapons. Contrary to the title of this paper, it is governmental sponsoring of this that increases catastrophic AI risk in this and in many other cases.

[1] https://assets.pubpub.org/j0fpbls3/Bengio%20(2024)_Just%20Accepted-01713205361183.pdf

If.. (Score:2)

by muffen ( 321442 )

Global warming, threat of nuclear weapons detonating, third world war, asteroid hitting the planet... Killer robots?

Nope.. What killed humanity was a lot of if-statements!

Pandora's box? (Score:2)

by gkelley ( 9990154 )

They found this pretty box lying around with the words "Don't Open Me". They did, and now they're warning us that inside that box was a threat to humanity. Maybe they'll be the first to disappear.
