News: 0176622057

  ARM Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Turing Award Winners Sound Alarm on Hasty AI Deployment (ft.com)

(Wednesday March 05, 2025 @11:00AM (msmash) from the more-warnings dept.)


Reinforcement learning pioneers Andrew Barto and Richard Sutton have [1]warned against the unsafe deployment of AI systems

[2]alternative source

after winning computing's prestigious $1 million Turing Award Wednesday. "Releasing software to millions of people without safeguards is not good engineering practice," said Barto, professor emeritus at the University of Massachusetts, comparing it to testing a bridge by having people use it.

Barto and Sutton developed reinforcement learning in the 1980s, inspired by psychological studies of human learning. The technique, which rewards AI systems for desired behaviors, has become fundamental to advances at OpenAI and Google. Sutton, a University of Alberta professor and former DeepMind researcher, dismissed tech companies' artificial general intelligence narrative as "hype."

Both laureates also criticized President Trump's proposed cuts to federal research funding, with Barto calling it "wrong and a tragedy" that would eliminate opportunities for exploratory research like their early work.



[1] https://www.ft.com/content/d8f85d40-2c5b-4a2b-b113-87fa8e30f61b

[2] https://www.theverge.com/news/624485/turing-award-andrew-barto-richard-sutton-ai-dangers



AGI hype?! (Score:3)

by greytree ( 7124971 )

I, for one, welcome our new fast-wildlife-identification overlords.

Not good engineering practice (Score:2)

by Viol8 ( 599362 )

No shit. Since when did tech bros desperate for results, and hence investor cash, give a damn about that kind of (for them) irrelevance? Move fast and break things, remember! Let someone else clear up the mess, so long as there's a mansion and a Lambo with their name on it waiting.

Caution is a luxury (Score:2)

by nehumanuscrede ( 624750 )

Were nations not involved in a race to see who can reach the finish line first on this, then, sure, they would probably be far more cautious in the work.

However, as with the race for the atomic bomb, winning this one will carry similar strategic weight.

The short version: whoever gets there first will have a significant advantage over those who do not.

Thus, regardless of what anyone says publicly, I'm pretty sure all the safeties are off and the folks developing this stuff aren't going to let anything stand in their way.

Re: (Score:2)

by OrangeTide ( 124937 )

There are a lot of things these days where nobody is willing to lift their foot off the gas for fear of losing the race.

The difference between an AI and an atom bomb is that you can freely duplicate one, while the other still requires a huge industrial effort to re-develop and produce.

safeguards (Score:1)

by Iamthecheese ( 1264298 )

"Safeguards" on LLMs are useless, and pointless as well. There isn't a thing someone can learn from an LLM they can't learn from the open internet and there isn't a thing an LLM can do that any 15 year old can't do on his own. While AIs may present a danger in the future when LLMs can self-improve, that self-improving system will not be run be a member of the public at large but by an engineer or team of engineers who has already read countless documents about these dangers and who could easily bypass the "

Multi-faceted lack of caution. (Score:2)

by nightflameauto ( 6607976 )

> "Releasing software to millions of people without safeguards is not good engineering practice," said Barto, professor emeritus at the University of Massachusetts, comparing it to testing a bridge by having people use it.

There are several reasons why the companies behind AI aren't at all interested in taking a cautious approach to any of this.

The big one, and the main driver of our current technological trajectory (which could be called "Unintentional Stagnation of Progress"), is that software companies have adopted a firmly entrenched policy of never really finishing software before shoving it out to the public. Since the dawn of the Internet as an always-there part of computing life, companies have felt zero pressure to finish anything before release.

HOW TO PROVE IT, PART 4

proof by personal communication:
'Eight-dimensional colored cycle stripping is NP-complete
[Karp, personal communication].'

proof by reduction to the wrong problem:
'To see that infinite-dimensional colored cycle stripping is
decidable, we reduce it to the halting problem.'

proof by reference to inaccessible literature:
The author cites a simple corollary of a theorem to be found
in a privately circulated memoir of the Slovenian
Philological Society, 1883.

proof by importance:
A large body of useful consequences all follow from the
proposition in question.