News: 0176788803


Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End (futurism.com)

(Saturday March 22, 2025 @12:34PM (EditorDavid) from the unlikely-to-succeed dept.)


Founded in 1979, the [1]Association for the Advancement of AI is an international scientific society. Recently 25 of its AI researchers [2]surveyed 475 respondents in the AAAI community about "the trajectory of AI research" — and their results were surprising.

[3] Futurism calls the results "a resounding rebuff to the tech industry's long-preferred method of achieving AI gains" — namely, adding more hardware:

> You can only throw so much money at a problem. This, more or less, is the line being taken by AI researchers in a recent survey. Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed...

>

> "The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced," Stuart Russell, a computer scientist at UC Berkeley who helped organize the report, [4]told New Scientist. "I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued...." In November last year, reports indicated that OpenAI researchers discovered that the upcoming version of its GPT large language model [5]displayed significantly less improvement over its predecessor, and in some cases no improvement at all, than previous versions had over theirs. In December, Google CEO Sundar Pichai [6]went on the record as saying that easy AI gains were "over" — but confidently asserted that there was no reason the industry couldn't "just keep scaling up."

>

> Cheaper, more efficient approaches are being explored. OpenAI has used a method known as [7]test-time compute with its latest models, in which the AI spends more time to "think" before selecting the most promising solution. That achieved a performance boost that would've otherwise taken mountains of scaling to replicate, [8]researchers claimed . But this approach is "unlikely to be a silver bullet," Arvind Narayanan, a computer scientist at Princeton University, told New Scientist.
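The test-time-compute idea described above can be sketched as a best-of-n sampling loop. This is a minimal illustration, not OpenAI's actual method: the candidate generator and the scorer here are stand-ins for a language model and a verifier/reward model.

```python
import random

def generate_candidate(prompt, rng):
    # Stand-in for sampling one answer from a language model.
    return rng.random()  # pretend the float is the answer's quality score

def score(candidate):
    # Stand-in for a verifier or reward model that rates a candidate.
    return candidate

def best_of_n(prompt, n, seed=0):
    """Spend extra compute at inference time: sample n candidate answers
    and keep the one the scorer rates highest."""
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)
```

With a fixed seed, sampling more candidates can only improve the selected answer — which is the point: trading inference-time compute for quality instead of training-time scale.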



[1] https://en.wikipedia.org/wiki/Association_for_the_Advancement_of_Artificial_Intelligence

[2] https://aaai.org/about-aaai/presidential-panel-on-the-future-of-ai-research/

[3] https://futurism.com/ai-researchers-tech-industry-dead-end

[4] https://www.newscientist.com/article/2471759-ai-scientists-are-sceptical-that-modern-models-will-lead-to-agi/

[5] https://www.theinformation.com/articles/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows

[6] https://futurism.com/the-byte/google-ceo-easy-ai-over

[7] https://openai.com/index/learning-to-reason-with-llms/

[8] https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/



At the stage of Alchemy (Score:1)

by Anonymous Coward

Know enough to do some useful stuff and get some things done.

But it still ain't like the science of Chemistry yet.

Deepseek (Score:4, Insightful)

by mspohr ( 589790 )

I think that Deepseek showed that the current AI approach of just throwing more hardware at AI is a dead end.

Unfortunately, the wizards at the tech monopolist companies haven't understood the message.

Re: (Score:2)

by dvice ( 6309704 )

You are now talking about OpenAI. Google DeepMind invented the technology that ChatGPT is based on, but they abandoned it because they knew years ago that it was a dead end. They are constantly trying to make an AI that doesn't need to read the whole internet in order to learn.

Re: (Score:2)

by Pinky's Brain ( 1158667 )

It's hard to know what the closed models are doing. Open models getting stuck in dense or very unambitious levels of dynamic sparsity might not be representative of the closed models. For all we know most of the major closed models are just as cheap to train and run.

the longer it takes for bubble to pop (Score:3, Insightful)

by Tablizer ( 95088 )

...the bigger the pop. Form a rainy-day fund.

Re: the longer it takes for bubble to pop (Score:1)

by RightwingNutjob ( 1302813 )

Not really. The pops leave behind a lot of infrastructure that forms the foundation of subsequent innovation and growth.

The broadband internet of the late 90s and early 2000s was motivated by telecoms cashing in on the first dotcom bubble, for example.

I'm Not Surprised (Score:4, Interesting)

by crunchy_one ( 1047426 )

This is exactly what anyone the least bit conversant in machine learning could have told you. Many already have. "AI" in its present incarnation is a scam, a cheap party trick at best. Execs pouring billions into it are fools chasing an illusion out of fear that some other exec might get "there" first. They won't.

Not surprising at all (Score:2)

by jrnvk ( 4197967 )

Rarely does just throwing money at a technology problem bear fruit. You still need to manage goals and expectations and provide solutions to real problems - which many of these AI companies are failing to do at the moment.

Re: (Score:2)

by dvice ( 6309704 )

Really? The Moon landing and the Human Genome Project were expensive, and they solved the problem. And even when a problem has not been fully solved with money, the research has yielded more information about it and partial solutions - cancer research, for example. Military projects could perhaps be an exception, where money is often just wasted when a $1,000 drone destroys your $1B high-tech machine.

I've been saying this since the bubble started (Score:4, Informative)

by Jeslijar ( 1412729 )

We're in for a giant market crash for everything ML. It's just a matter of time for investors to start panicking.

The ROI on AI doesn't seem to be there: nothing close to the hundreds of billions of dollars of yearly revenue needed to match what is being spent on hardware. There's a market, sure, but it's not that big.

Re: (Score:1)

by Iamthecheese ( 1264298 )

Hundreds of billions per year in revenue for AI isn't at all far-fetched for existing systems. IMO half of all programming jobs where people only need to write one well-defined module at a time are going away (as opposed to the 25% which have already gone away). Then you have advertising copy, report writing and summarization where it doesn't need to be perfect, junior-high and high-school level tutoring for any basic thing that can be done on a computer, a lot (more) of text to speech, and a number of

Re: (Score:3)

by luvirini ( 753157 )

Yes, but no.

I think that the crash as you describe will happen, but then it will grow again and be bigger than before the crash but with new use cases and so on.

In that way it is similar to the dot-com bust of 2000.

Dead end yes and no (Score:1)

by dvice ( 6309704 )

I think Deepmind is the only company that can actually make real progress in AI research, if our end goal is AGI. In this sense, everyone else is just wasting money.

But other companies are making some interesting implementations based on existing research. For example, an application that can identify birds based on what they sound like. It doesn't help AI research in any way, but that doesn't mean it doesn't benefit the field of biology. Even things like optimizing 3D graphics with the help of AI are

Programmed (Score:5, Insightful)

by fluffernutter ( 1411889 )

If there is no provision in the AI to think independently or create on its own and everything is just a calculation on something that someone already did, then obviously you will always be bound to that no matter how much money you spend. It's funny how some people think it will become something else at some point. It can never escape what it was programmed to do.

Re: (Score:2)

by dvice ( 6309704 )

You could make an AI that follows some simple rules and creates something bigger and more complex based on those rules. Ants are a good example. They follow really simple rules, like "follow the path that has strongest smell" and "if you see x ants coming to the nest within t seconds, go out". Just with these simple rules and a few random numbers, they can locate food sources in a maze, gather all the troops there and bring food back home using the best path. Despite the fact that they don't have any progra
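The emergence the comment describes can be sketched with a toy two-path ant model: each step a cohort of ants splits between paths in proportion to pheromone strength, and ants on the shorter path finish round trips faster, so they deposit more pheromone per step. All numbers and the mean-field simplification here are invented for illustration; real ant-colony-optimization algorithms are more elaborate.

```python
def ant_colony_two_paths(steps=500, evaporation=0.05):
    """Stigmergy sketch: simple local rules, no global planner."""
    pheromone = {"short": 1.0, "long": 1.0}
    length = {"short": 1.0, "long": 3.0}  # long path takes 3x as long
    for _ in range(steps):
        total = pheromone["short"] + pheromone["long"]
        for path in pheromone:
            ants = pheromone[path] / total   # fraction choosing this path
            deposit = ants / length[path]    # faster trips -> more deposit
            pheromone[path] = (1 - evaporation) * pheromone[path] + deposit
    return pheromone
```

Starting from equal pheromone on both paths, the positive feedback loop drives nearly all traffic onto the short path, even though no individual rule mentions path length.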

Re: Programmed (Score:2)

by Big Hairy Gorilla ( 9839972 )

I rarely defend LLMs, but an LLM may contain the experience of many individuals... so it could offer advice you never thought of. It looks superficially like creativity. It's not, but it could be useful.

False headline (Score:3)

by DogFoodBuss ( 9006703 )

The headline is not representative of what the survey says AT ALL, which is that current AI cannot be scaled up to AGI. Anybody with half a brain could have told you that more research is still needed. Even Sam Altman has said as much. Saying that current AI is a "dead end" completely ignores all of the demonstrably useful stuff it can do now.

Not a dead end (Score:2)

by Artem S. Tashkinov ( 764309 )

It's not a dead end considering how useful it's already become but it could be a dead end in terms of achieving AGI/ASI.

Yes, but.... (Score:2)

by MpVpRb ( 1423381 )

..it's complicated

The early success of LLMs surprised their developers and excited investors

While it's true that simply throwing more data and compute power at LLMs is at the point of diminishing returns, other techniques are being developed

LLMs are now being used as the text module with other strategies being developed on top of them, like reasoning and deep research

These approaches are yielding useful results. I use Perplexity and, more often than not, find it useful, if imperfect

It appears that many researc

Re: (Score:2)

by Artem S. Tashkinov ( 764309 )

I've long noticed that for /. and Ars Technica it's either full-blown AGI/ASI or bust (current LLMs).

They barely care that "stupid" text generators are smarter than 99% of people on Earth including themselves: "They cannot discover new physics thus they are nothing but next word predictors". Never mind that as of now on the front page we have a news piece that shows that LLMs have already been disruptive when it comes to programming.

Neural networks anybody? (Score:3)

by kencurry ( 471519 )

What happened to them? How did LLMs win out? Did imitation beat out rational design?

Meanwhile, you know the quantum computer guys are scared they will never make it; they had to talk Nvidia into having a pep rally for them lol.

Re: Neural networks anybody? (Score:2)

by DogFoodBuss ( 9006703 )

Language models ARE neural networks. What would make you doubt that?

Where's the report? (Score:2)

by GrahamJ ( 241784 )

Interesting but where's the report? At least a summary would be nice.

It seems to me that scale isn't going to help LLMs much more, but discoveries of how to improve results and make them more efficient are still being made, so I wouldn't say we're done with them yet. Whether that jibes with the investments is another matter.

On the other hand LLMs are just one use of transformer models and I think machine learning in general is still in its infancy so it's possible those investments will pay off in ways we can

It's about replacing white collar workers (Score:2)

by rsilvergun ( 571051 )

They spent the last 45 years replacing blue collar workers with robots. The ones they couldn't replace they use slave labor in third world countries for. If you have a fancy satellite dish there's probably somebody in a literal mud hut in India that made it using primitive tools in incredibly dangerous conditions. I mentioned that one because it's so bizarre to see something so high tech made with such low tech manufacturing techniques but when you're literally paying for it with just enough food and shelte

more lies from the usual suspects (Score:2)

by dfghjk ( 711126 )

"In December, Google CEO Sundar Pichai went on the record as saying that easy AI gains were "over" — but confidently asserted that there was no reason the industry couldn't "just keep scaling up.""

No reason it couldn't, it just won't help. Great insight from the ultra-rich CEO of one of the world's most powerful companies.

"OpenAI has used a method known as test-time compute with its latest models, in which the AI spends more time to "think" before selecting the most promising solution."

More utter bul
