Turing Award Winners Sound Alarm on Hasty AI Deployment (ft.com)
- Reference: 0176622057
- News link: https://slashdot.org/story/25/03/05/1330242/turing-award-winners-sound-alarm-on-hasty-ai-deployment
- Source link: https://www.ft.com/content/d8f85d40-2c5b-4a2b-b113-87fa8e30f61b
Andrew Barto and Richard Sutton, the pioneers of reinforcement learning, warned against releasing AI systems to the public without adequate testing after winning computing's prestigious $1 million Turing Award Wednesday [1][2]. "Releasing software to millions of people without safeguards is not good engineering practice," said Barto, professor emeritus at the University of Massachusetts, comparing it to testing a bridge by having people use it.
Barto and Sutton developed reinforcement learning in the 1980s, inspired by psychological studies of human learning. The technique, which rewards AI systems for desired behaviors, has become fundamental to advances at OpenAI and Google. Sutton, a University of Alberta professor and former DeepMind researcher, dismissed tech companies' artificial general intelligence narrative as "hype."
Both laureates also criticized President Trump's proposed cuts to federal research funding, with Barto calling them "wrong and a tragedy" because they would eliminate opportunities for the kind of exploratory research that made their own early work possible.
[1] https://www.ft.com/content/d8f85d40-2c5b-4a2b-b113-87fa8e30f61b
[2] https://www.theverge.com/news/624485/turing-award-andrew-barto-richard-sutton-ai-dangers
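The summary above compresses reinforcement learning into a single sentence: the system is rewarded for desired behaviors and adjusts its choices accordingly. For readers who want the mechanics, here is a minimal sketch of tabular Q-learning, a textbook algorithm from the framework Barto and Sutton helped establish. The toy corridor environment and all hyperparameters below are invented for illustration, not taken from their work.

```python
# Minimal tabular Q-learning sketch. The 1-D corridor environment and the
# hyperparameters are illustrative assumptions, not from Barto and Sutton.
import random

N_STATES = 5          # states 0..4; state 4 is the goal and ends the episode
ACTIONS = [1, -1]     # step right or left (ties in argmax break toward +1)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

# Value table: expected return for taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move in the corridor; reward 1.0 only on reaching the goal state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < EPS:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Core update: nudge Q toward reward plus discounted future value.
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy steps right toward the goal in every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

The "reward for desired behavior" the summary mentions is the single line updating `Q`; everything else is bookkeeping. Modern systems replace the table with a neural network, but the loop is the same shape.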
Not good engineering practice (Score:2)
No shit. Since when did tech bros desperate for results, and hence investor cash, give a damn about those kinds of (for them) irrelevances? Move fast and break things, remember? Let someone else clear up the mess, so long as there's a mansion and a Lambo with their name on it waiting.
Caution is a luxury (Score:2)
Were nations not involved in a race to see who can reach the finish line first on this, then, sure, they would probably be far more cautious in the work.
However, like the race for the atomic bomb, winning this one will have similar strategic effects.
The short version: whoever gets there first will have a significant advantage over those who do not.
Thus, regardless of what anyone says publicly, I'm pretty sure all the safeties are off and the folks developing this stuff aren't going to let anything stand in their way.
Re: (Score:2)
There are a lot of things these days where nobody is willing to lift their foot off the gas for fear of losing the race.
The difference between an AI and an atom bomb is that you can freely duplicate one, while the other still requires a huge industrial effort to re-develop and produce.
safeguards (Score:1)
"Safeguards" on LLMs are useless, and pointless as well. There isn't a thing someone can learn from an LLM they can't learn from the open internet and there isn't a thing an LLM can do that any 15 year old can't do on his own. While AIs may present a danger in the future when LLMs can self-improve, that self-improving system will not be run be a member of the public at large but by an engineer or team of engineers who has already read countless documents about these dangers and who could easily bypass the "
Multi-faceted lack of caution. (Score:2)
> "Releasing software to millions of people without safeguards is not good engineering practice," said Barto, professor emeritus at the University of Massachusetts, comparing it to testing a bridge by having people use it.
There are several reasons why the companies behind AI aren't at all interested in taking the cautious approach to any of this.
The big one, and the main driver of our current technological trajectory, which could be called "Unintentional Stagnation of Progress," is that software companies have adopted a very firmly entrenched policy of never really finishing software before shoving it out to the public. Since the dawn of the Internet as an always-there part of computing life, companies have felt zero pressure at all to finish anything before release, because they can always patch it later.
AGI hype?! (Score:3)
I, for one, welcome our new fast-wildlife-identification overlords.