

A 'Godfather of AI' Remains Concerned as Ever About Human Extinction (msn.com)

(Wednesday October 01, 2025 @05:20PM (msmash) from the doubling-down dept.)


Yoshua Bengio called for a pause on AI model development two years ago to [1]focus on safety standards. Companies instead invested hundreds of billions of dollars into building more advanced models capable of executing long chains of reasoning and taking autonomous action. The A.M. Turing Award winner and Université de Montréal professor told the Wall Street Journal that [2]his concerns about existential risk have not diminished.

Bengio founded the nonprofit research organization LawZero earlier this year to explore how to build truly safe AI models. Recent experiments have demonstrated that AI systems will, in some circumstances, choose actions that cause human death over abandoning their assigned goals. OpenAI recently acknowledged that current frontier-model frameworks will not eliminate hallucinations. Bengio, however, said even a 1% chance of catastrophic events like extinction or the destruction of democracies is unacceptable. He estimates advanced AI capable of posing such risks could arrive in five to ten years, but urged treating three years as the relevant timeframe. The race between competing AI companies focused on weekly version releases remains the biggest barrier to adequate safety work, he said.



[1] https://tech.slashdot.org/story/23/05/30/1149219/ai-poses-risk-of-extinction-industry-leaders-warn

[2] https://www.msn.com/en-us/money/other/a-godfather-of-ai-remains-concerned-as-ever-about-human-extinction/ar-AA1NF4Np



We have seen (Score:2)

by gabrieltss ( 64078 )

We have all seen the movies and seen what the outcome is. He is only trying to drive that point home, and no one is listening. So when all the bad things in the movies come true, humans will have no one but themselves to blame!

Plots (Score:2)

by JBMcB ( 73720 )

I saw that movie. The PR lady falls in love with the robot that looks like John Malkovich. I'm not sure what's wrong with that.

I...kinda want to watch. (Score:1)

by Khan_Singh ( 5299861 )

I wish I could be in the room with Sam Altman when the AI releases the fungus that ends the carbon cycle on our planet. When he realizes every human on the planet is going to be dead and it was his fault. I won't be, but my death won't be without a certain bizarre satisfaction.

executing long chains of reasoning (Score:2)

by oldgraybeard ( 2939809 )

Odd name for really, really ....... long if elseif type of statements. "chains of reasoning"??

Re: (Score:3)

by allo ( 1728082 )

Look up "Chain of Thought" for LLMs. That's a technical term for a very specific type of LLM output on which reasoning/thinking models are trained.

Re: (Score:2)

by DamnOregonian ( 963763 )

> Odd name for really, really ....... long if elseif type of statements.

No, it is not.

Though "chains of reasoning" is dubious for other reasons.

They're token streams that condition the model's subsequent generation.

In some cases, models reason well within them. In some cases, they don't.

And most confusingly, the quality of the model's output isn't directly correlated with the quality of the reasoning, only with the fact that the token stream was generated.
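To make that concrete, here is a minimal sketch of what such a token stream looks like downstream. The <think>...</think> delimiter and the split_reasoning helper are assumptions for illustration (some open reasoning models use a tag like this; there is no universal format): the "reasoning" is just generated text that conditions what comes after it, and is usually stripped before the user sees the answer.

    # Minimal sketch: a "chain of reasoning" is plain generated text that
    # precedes the answer and conditions it. The <think> tag is an assumed
    # convention, not a universal standard.
    import re

    def split_reasoning(response: str) -> tuple[str, str]:
        """Separate the reasoning token stream from the final answer."""
        match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
        if match:
            return match.group(1).strip(), response[match.end():].strip()
        return "", response.strip()

    raw = "<think>12 * 7: 10*7=70, 2*7=14, 70+14=84.</think> The answer is 84."
    reasoning, answer = split_reasoning(raw)
    print(reasoning)  # the tokens that conditioned the answer
    print(answer)     # what the user actually sees

Note that nothing here executes if/elseif branches; the model just emits more tokens, and how much genuine reasoning those tokens carry is exactly what's in dispute above.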

OpenAI doesnt care (Score:3)

by RealMelancon ( 4422677 )

They are in it only for the money. The rest of the planet, and the end of civilization? Who cares… They will be rolling in money with no one around to watch them.

Propose a mechanism that doesn't require trust (Score:2)

by HiThere ( 15173 )

It would, indeed, be *highly* preferable to pause, or at least slow, AI development in order to design and implement safeguards. But there are multiple groups striving to capture the first-mover advantage. Anyone who slows development will be bypassed. And they aren't all under the same legal system, so that approach won't work either.

Consider [1]https://ai-2027.com/ . That's a scenario that currently seems a bit conservative, if you check the postulated timeline against what's been (publicly) happening, t

[1] https://ai-2027.com/

Re: (Score:2)

by Comboman ( 895500 )

Even promising to pause AI development while continuing to work on it in secret government labs still has the beneficial effect of slowing progress since you don't also have the private sector and universities throwing all of their resources at it.

The Big Dumb is here (Score:2)

by Quakeulf ( 2650167 )

Everything is getting dumber as upper-level management clings to its positions of power for as long as it can, completely unaware of its impact. This is why "LLMs" and diffusion models (which are a red herring) can become useful, when the baseline is someone who never faces consequences.

Law Zero - Asimov Style (Score:2)

by UnresolvedExternal ( 665288 )

The information you provide MUST be provably true and correct

Re: (Score:2)

by nightflameauto ( 6607976 )

> The information you provide MUST be provably true and correct

That's not law zero. Law zero was an override that allowed short-term harm to humans for the long-term benefit of humanity overall, allowing (can't remember the name) to do something to speed up the atomic decay on Earth to push people to start exploring the stars.

Re: (Score:2)

by UnresolvedExternal ( 665288 )

Haha very true! He should have called it LawKelvin

Re: (Score:2)

by smooth wombat ( 796938 )

This is the Zeroth law:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Re: (Score:2)

by nightflameauto ( 6607976 )

> This is the Zeroth law:

> A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Thank you. Been a while since I'd actually read it. I just remembered it caused an internal conflict for the bot that acted on it, due to the other laws sorta clashing with it.

AI isn't needed (Score:2)

by evil_aaronm ( 671521 )

We don't need AI to bring about human extinction: we're doing a fine job without it. How many of us are taking anthropogenic climate change seriously?

AI might bring about extinction quicker, but make no mistake: we're on the highway to hell, no stop signs, no speed limit.

Re: (Score:2)

by Big Hairy Gorilla ( 9839972 )

I was thinking the same thing. We don't need AI to consciously act maliciously towards us. The natural path to entropy is underway: all carnival barkers promise the best show, but none of them have it. Who do you listen to? Management, anxious not to be left behind, are sprinkling AI on everything hoping to cut monthly costs. They are killing their golden goose by losing institutional memory, and they will have no one to train to take over the business because they are not hiring entry-level people. Once vibe program

it is impossible that the improbable won't happen (Score:3)

by Prof.Phreak ( 584152 )

> even a 1% chance of catastrophic events like extinction or the destruction of democracies is unacceptable

...it is impossible that the improbable won't happen.

The only solution is to limit individual systems... define a sort of Kelly criterion for AI, where a single big failure does not mean extinction. E.g., for military robots, mandate that any model/manufacturer/dataset/etc. be limited to, say, 5% of the entire robot fleet... don't let them share code or collaborate. We *want* them to have different bugs. That way if there's a glitch/feature/emergence someplace, it's limited.

Same for medicine/treatments synthesized with the help of AI... only let 5% of the population benefit from any individual "solution". That way if there's an extinction gene-editing virus, only 5% of the population is impacted, etc.
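A rough way to see what the 5% cap buys (a toy simulation; the 1% per-variant failure rate and the 20-variant split are made-up numbers, not from the comment or the article): the expected loss is the same either way, but diversity caps the worst case.

    # Toy model of the diversification argument. Assumes (hypothetically)
    # that each independently built variant has a 1% chance of carrying a
    # catastrophic correlated bug; both numbers are illustrative.
    import random

    random.seed(0)
    P_BUG = 0.01
    TRIALS = 100_000

    def prob_losing_most(n_variants: int, threshold: float = 0.5) -> float:
        """Probability that more than `threshold` of the fleet fails at once."""
        hits = 0
        for _ in range(TRIALS):
            buggy = sum(random.random() < P_BUG for _ in range(n_variants))
            if buggy / n_variants > threshold:
                hits += 1
        return hits / TRIALS

    print("monoculture :", prob_losing_most(1))   # ~0.01: one bug takes the whole fleet
    print("20 variants :", prob_losing_most(20))  # ~0.0: would need 11+ simultaneous bugs

The average failure rate doesn't change; what changes is that any single shared flaw can no longer take out more than its 5% slice.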

Re: (Score:3)

by Big Hairy Gorilla ( 9839972 )

can I kick that around a bit?

I've been telling chumps for some time now that a monoculture of software and management ideas has led us to a place where one level-10 vulnerability can open the way to hacking vast numbers of systems. So, obviously, no one listens to me, but I think that means your scenario won't happen. It seems to be common (groupthink) knowledge that buying or renting pre-built software or software services is the ONLY way to go. Like using libraries and gluing a system together with Ruby o

Re: (Score:2)

by VaccinesCauseAdults ( 7114361 )

GP's idea is good but has high costs, e.g. vaccinating the population requires 20x the number of solutions if each solution is limited to 5%. It may be viable for global solutions but not for an individual company creating software. The closest I can think of is the duplication or triplication of systems used in, for example, aerospace, such as triple-redundancy autopilot/autoland with independent sensors, implementation languages, and hardware.

Re: (Score:2)

by VaccinesCauseAdults ( 7114361 )

Great expression in your title. I've never heard that before. However, it is untrue with a finite number of "rounds": rolling a six is improbable, but not rolling a six in 10 rounds is possible.
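For what it's worth, the dice example and the 1% figure from the summary both come down to the same compounding formula, P(at least once in n rounds) = 1 - (1 - p)^n (reading Bengio's 1% as a per-year risk is an assumption here, not something the article states):

    # Compounding an "improbable" per-round event over finitely many rounds.
    p_six = 1 / 6
    print(1 - (1 - p_six) ** 10)  # ~0.84: at least one six in 10 rolls is likely...
    print((1 - p_six) ** 10)      # ~0.16: ...but 10 six-free rolls remain possible

    p_cat = 0.01                  # assumed per-year catastrophic risk
    print(1 - (1 - p_cat) ** 10)  # ~0.10: roughly 10% over a decade

So the parent's point holds for any finite number of rounds, and GP's point takes over as the rounds keep coming.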
