AI Pioneers Call for Protections Against 'Catastrophic Risks' (nytimes.com)
- Reference: 0175008395
- News link: https://slashdot.org/story/24/09/16/198242/ai-pioneers-call-for-protections-against-catastrophic-risks
- Source link: https://www.nytimes.com/2024/09/16/business/china-ai-safety.html
> Scientists from the United States, China, Britain, Singapore, Canada and elsewhere signed the statement. Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China's top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several of China's leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.
Catastrophic Risk (Score:2)
Great, so have they provided examples of what catastrophic risks are, so we can actually figure out what we are preventing?
I don't have an NYT subscription.
Simple Red Line (Score:2)
I don't have an NYT subscription either, but apparently a JavaScript blocker is just as good. The article makes no mention of what those risks are, but it does suggest some "red lines" that governments, at the least, should be notified about. The specific examples were an AI that can copy itself (which seems a very low bar, given how easy that is to achieve) and an AI that can deliberately deceive its creators, which seems a very vaguely defined bar, since "deliberately" presupposes free will, which is not well defined given
Lock 'm up (Score:2)
They say they pose a risk to humanity and want us to take action? Are they sure? Because we got just the place for them.
Manual override is missing from all datacenters (Score:2)
The real existential threat is that all datacenters are hard for humans to get into and very easy for a super-intelligent AI with zero-day intrusion capabilities to take over. It would be very easy to add manual overrides that cut the redundant power and redundant connectivity by hand, without any digital tools, if we wanted to. But nobody seems to realise the importance of this manual override.
Squirrels and backhoes (Score:2)
As long as squirrels and backhoes exist, datacenters are vulnerable.
Re: (Score:2)
The power company transformers sit outside the building. A couple of pickup trucks driving into them takes out the entire datacenter.
Re: (Score:2)
Until they start embedding nuclear reactors inside the datacenter.
Re: (Score:2)
And that is the best reason not to fear AI... AI as we know it does not scale.
Catastrophic risk? I'll believe it when I see it (Score:1)
They warned us of catastrophic risks on environmental destruction.
They warned us of catastrophic risks on global warming.
They're warning us of catastrophic risks on AI.
We're still here, everything is just fine. Wake me up when the Earth is actually being destroyed, then I can worry about it.
Re: (Score:2)
So apparently incrementally increasing damage to the environment and the problems caused by global warming do not rate high enough for you to care about. The only risk you'd probably acknowledge is an asteroid the size of Mars taking dead aim on us.
The basic problem for you is that you might have to contribute in a small way to stopping the increasing damage. That is too mundane for you, beneath your perceived station in life. Your kids and grandkids will be so proud of your decisions when they are trying to survive
Re: (Score:2)
> Wake me up when the Earth is actually being destroyed, then I can worry about it.
Uhhh, the point of a "warning" is to do something about it BEFORE the Earth is actually being destroyed. I understand your real point is that these warnings reek of hyperbole, and I would tend to agree; but, ignoring the warning entirely, or declaring it bunk, seems like throwing the baby out with the bathwater.
Catastrophic Bullshit (Score:2)
What a load of bullshit. Is humanity really so fragile?
Beautiful (Score:1)
Beautiful ending for human race - to be devoured by AI. Remember - we were always just a bootloader.
Isn't it ironic... Alanis M. (Score:1)
I'm too cheap or lazy to get past the NYT paywall. Here is a just-released and accepted paper, "Government Interventions to Avert Future Catastrophic AI Risks" (Yoshua Bengio, 2024), if you are interested.
[1]https://assets.pubpub.org/j0fp... [pubpub.org]
I've believed since pre-COVID that one of the largest risks is using AI to tailor bio-weapons. Contrary to the title of this paper, it is governmental sponsorship of this that increases catastrophic AI risk in this and in many other cases.
[1] https://assets.pubpub.org/j0fpbls3/Bengio%20(2024)_Just%20Accepted-01713205361183.pdf
If.. (Score:2)
Global warming, threat of nuclear weapons detonating, third world war, asteroid hitting the planet... Killer robots?
Nope.. What killed humanity was a lot of if-statements!
Pandora's box? (Score:2)
They found this pretty box lying around with the words "Don't Open Me". They did, and now they're warning us that inside that box was a threat to humanity. Maybe they'll be the first to disappear.
It's all lip service (Score:2)
When it comes down to:
1) Profits
2) National Security
3) Gaining any sort of advantage
There really are no rules. Especially when the winner of the race in question stands to gain so much.
Human beings might agree to things publicly but, in practice, it's all a facade.
They will quickly throw all morals, ethics and safety concerns to the wind to obtain what they want.
Re: (Score:2)
I've seen some interviews with AI CEOs and other "luminaries" from various AI companies. When they talk about the first person to get to AGI, they've used analogies like "the first fisherman to haul in a genie." They think that whoever gets there first is going to have huge material advantages over the rest of humanity. They are all Ivy League grads; maybe they are right. I really wonder about the high-level defections from OpenAI and others and what the real game plan is. Some have ove