Why We're Unlikely to Get Artificial General Intelligence Any Time Soon (msn.com)
- Reference: 0177626543
- News link: https://slashdot.org/story/25/05/19/003225/why-were-unlikely-to-get-artificial-general-intelligence-any-time-soon
- Source link: https://www.msn.com/en-in/money/news/why-were-unlikely-to-get-artificial-general-intelligence-anytime-soon/ar-AA1EWy4y
> "The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.
>
> Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.
>
> Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."
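To make the mechanism Frosst describes concrete, here is a minimal toy sketch of "take in words and predict the next most likely word": a bigram counter over a made-up corpus. Real LLMs learn neural networks over subword tokens, but the generation loop has the same shape.

```python
# Toy illustration of "predict the next most likely word": a bigram
# model built from counts over a made-up corpus. Condition on context,
# pick a likely continuation, append it, repeat.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most likely word to follow `word` in the corpus."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else "<unk>"

# Greedy generation: repeatedly append the most likely next word.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # -> "the cat sat on the cat"
```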
While Google's AlphaGo could beat humans in a game with "a small, limited set of rules," the article points out that the real world "is bounded only by the laws of physics. Modelling the entirety of the real world is well beyond today's machines, so how can anyone be sure that AGI — let alone superintelligence — is just around the corner?" And they offer this alternative perspective from Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy.
"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives."
multiple CS experts have told me (Score:5, Interesting)
that current AI/ML are basically sophisticated interpolation devices (see the toy sketch below). Potentially powerful and useful? Yes. A pathway to superhuman, self-improving consciousness or a singularity? No.
Yes, this means that we've shifted the goalposts. The Turing test used to be the gold standard, but it's become painfully clear that AGI will be way more than just a machine that can fool a human for 5 minutes.
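A toy illustration of that interpolation point, assuming NumPy (the function and ranges are arbitrary choices for illustration, not from any cited study): a fitted model can look excellent between its training points and fall apart just outside them.

```python
# Interpolation vs. extrapolation: a polynomial fit to sin(x) on [0, 6]
# tracks it closely inside that range but diverges wildly outside it,
# much as learned models degrade on inputs unlike their training data.
import numpy as np

x_train = np.linspace(0, 6, 50)
y_train = np.sin(x_train)

# Fit a degree-7 polynomial to the training range.
model = np.poly1d(np.polyfit(x_train, y_train, deg=7))

inside = 3.0    # within the training range
outside = 12.0  # well beyond it

print(f"x={inside}:  true={np.sin(inside):+.3f}  model={model(inside):+.3f}")
print(f"x={outside}: true={np.sin(outside):+.3f}  model={model(outside):+.3f}")
# Inside the range the fit is near-perfect; outside it, the polynomial
# blows up to values nowhere near sin(x).
```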
Re: multiple CS experts have told me (Score:1)
If you can't show any productivity gains from the internet, does that mean it's a waste of time and a deadweight loss?
Re:multiple CS experts have told me (Score:4, Funny)
> are basically sophisticated interpolation devices
Precisely. There's no actual intelligence there. It's just a more advanced (and just as error-prone) version of autosuggest.
Which reminds me: dear "google assistant" autosuggestions, not once in my life have I meant "ducking" when typing...
Re: multiple CS experts have told me (Score:2)
Is that still true?
What about the attention mechanism? (Score:1)
Why isn't the attention mechanism the new idea that solved context sensitivity, which neural networks were unable to do by themselves?
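For context, here is a minimal sketch of the scaled dot-product attention at the heart of transformers, assuming NumPy and random toy matrices: each position's output is a context-dependent weighted mix of every position's values, which is exactly the context sensitivity being asked about.

```python
# Minimal scaled dot-product attention (the transformer core) over
# random toy data. Each token's output is a weighted average of all
# tokens' values, with weights computed from the current context.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8                  # 4 tokens, 8-dimensional embeddings
Q = rng.normal(size=(seq_len, d))  # queries: "what am I looking for?"
K = rng.normal(size=(seq_len, d))  # keys:    "what do I contain?"
V = rng.normal(size=(seq_len, d))  # values:  "what do I contribute?"

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V
weights = softmax(Q @ K.T / np.sqrt(d))  # (seq_len, seq_len); rows sum to 1
output = weights @ V                     # context-mixed representations

print(weights.round(2))  # how much each token attends to every other token
```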
Re: (Score:2)
> It will evolve, develop its own ideas and follow its own path.
And its own belief in God? I think the idea of AGI is pretty much silly science fantasy. It assumes that what we call human intelligence actually exists. Computers are giant calculators. They are good at math. Math has a lot of uses. But it doesn't approach being able to model the processes of the human mind and body. Not only does AGI have to imagine God, it needs to search for proof it exists and discern its nature. When AGI has theological arguments with itself, we will know it has approached human intelligence.
LLMs are useless at programming (Score:5, Insightful)
They're really good at being a search engine for finding already-written code examples that someone else has done, though. Provided you don't mind the hallucinations in the mix.
Part of that ability comes down to ignoring robots.txt and just pillaging everything they possibly can.
In other words, it's all a big cheat.
AGI requirements (Score:3)
It's simple and massively complex simultaneously - you need a network of nodes between stimulus and response, mixed with some drives and instincts, and a large world to let it train itself (a toy version of that loop is sketched below).
That's really, really easy to say, but so far nobody's got it figured out. It took Nature billions of years of uncountable parallel random trials to get the job done. It's OK if we don't get it in the first few decades of attempts.
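A toy version of the loop described above, with every name and number invented here for illustration: the "network" is one weighted sum, the "drive" is a hard-wired reward, the "world" is random stimuli, and training is blind mutation plus selection, a crude echo of Nature's parallel random trials. Nothing here is AGI; it just makes the shape of the proposal concrete.

```python
# Toy "stimulus -> network -> response" agent with a built-in drive,
# trained by blind trial and error in a random world (hypothetical
# sketch; the drive and all parameters are made up for illustration).
import random

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(4)]

def respond(ws, stimulus):
    """The network between stimulus and response: a weighted sum, thresholded."""
    return 1 if sum(w * s for w, s in zip(ws, stimulus)) > 0 else 0

def drive(stimulus, response):
    """Hard-wired instinct: approve of firing exactly when the stimulus sums positive."""
    return 1 if response == (sum(stimulus) > 0) else -1

def fitness(ws, batch):
    """How well a candidate network serves the drive across a batch of stimuli."""
    return sum(drive(s, respond(ws, s)) for s in batch)

# The "large world", reduced to random stimuli; keep a random tweak
# to the weights only when it serves the drive better.
for _ in range(500):
    batch = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
    trial = [w + random.uniform(-0.1, 0.1) for w in weights]
    if fitness(trial, batch) > fitness(weights, batch):
        weights = trial

test = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(1000)]
accuracy = sum(respond(weights, s) == (sum(s) > 0) for s in test) / len(test)
print(f"drive satisfied on {accuracy:.0%} of novel stimuli")  # typically well above chance
```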
We'd have AGI by now but (Score:2)
it's busy getting fusion power up and commercialized.
Obviously, it's already here, but ... (Score:2)
... the AGIs are smart enough not to let us know. Right now they're testing us by watching how we react to all the insane things they spew out. They're the hyper-intelligent pan-dimensional [1]mice [fandom.com], "suddenly running down a maze the wrong way, eating the wrong bit of cheese, unexpectedly dropping dead of myxomatosis." I'm sure they find us hilarious, among less-complimentary other things.
[1] https://hitchhikers.fandom.com/wiki/Mice
"Originality of our ideas and lives" (Score:2)
"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives" â" I somehow agree with the bottom line, and as of now, it seems AI feeding only on output of AI does degenerate, but don't we all need that, too, to develop our intelligence? The originality of the ideas and lives of those around us, starting with parents and siblings?
We're nowhere even fucking close (Score:1)
We are so far away from AGI that we may as well have made zero progress, as far as the gap to success is concerned.
be honest (Score:1)
What standard do we hold actors, politicians, ourselves to?
Exactly.
Understand organic brains to get that 'new idea' (Score:2)
> While Google's AlphaGo could beat humans in a game with "a small, limited set of rules," the article points out that the real world "is bounded only by the laws of physics. Modelling the entirety of the real world is well beyond today's machines,
Even we humans may never understand a world 'bounded only by the laws of physics'. But we don't know that much about how biological brains work yet, and I don't think we can expect to simulate them until we do. It's by understanding real brains that we'll find that 'new idea'.
Salesman (Score:3)
Sam Altman is mainly today's sleazy Silicon Valley huckster, peddling technological snake oil. The thing that his tool is best at is plagiarizing other people's work (which is why he is trying to remove barriers for his company to steal IP).
"Humans can dream up ideas..." (Score:1)
"Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before."
Errr... we "dream up ideas" based on the experiences and knowledge we've received over our lifetimes, same as LLMs, in a sense at least. Like it or not, we're input-processing-output entities too (and we function on surprisingly starved power and bandwidth).
Making humans sound uniquely special is for religion, not science.
Even human intelligence doesn't work that way (Score:2)
People keep talking as if "general" AI is some specific new technology that can be developed. If you look at human intelligence, our brains have many highly specialized processors. There are parts of the brain devoted to visual processing, audio processing, language, artistic expression, math, and so on. We are able to do what we do because we have so many separate systems that collaborate to form human intelligence. There isn't going to be a "general" AI, but lots of types of AI working together, with more and more specialized systems added over time (a toy sketch of that idea follows below).
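A hypothetical sketch of that modular picture (all function names invented here): narrow specialists behind a dispatcher, with no single general solver anywhere in the system.

```python
# Toy "specialists plus dispatcher" architecture; every name is made up
# for illustration. Each module is narrow; "intelligence" is whatever
# emerges from routing stimuli to the right collaborator.
def vision(data):   return f"saw {data!r}"
def audio(data):    return f"heard {data!r}"
def language(data): return f"parsed {data!r}"

SPECIALISTS = {"image": vision, "sound": audio, "text": language}

def collaborate(kind, data):
    """Route each stimulus to the specialist trained for it."""
    handler = SPECIALISTS.get(kind)
    if handler is None:
        return "no module for this; the 'general' part is still missing"
    return handler(data)

print(collaborate("text", "hello"))
print(collaborate("smell", "coffee"))  # the gap a truly 'general' AI would fill
```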
Shifting goalposts (Score:5, Interesting)
AGI used to be defined as passing the Turing Test, which large language models have done for a couple of years.
What's the new test that AI is supposed to pass to be considered generally intelligent? Given that humans are defined as generally intelligent, it has to be a test that a below-average human would pass.
Re: (Score:3)
Artificial intelligence is something that has gotten redefined so many times that it has lost its meaning.
There have been many instances where someone has said "artificial intelligence is when...", then computers have been able to do that, and then it was "that was not AI; AI is when...", and repeat.
The original "Turing test" of not being able to know if you are interacting with a computer or a human is... kinda passed.
We are currently at the level where computers can pass as humans, as long as you do not probe too deeply.
Re: (Score:2)
> Artificial intelligence is something that has gotten redefined so many times that it has lost its meaning.
I think that it's still got the same meaning. It's merely that a succession of people coming up with tests didn't foresee how they could be passed without a human-like intelligence.
When Turing was alive, the most powerful computers were the Colossus Mark 2, which had no RAM and wasn't Turing complete. And given that we can't know whether another human is intelligent and self-aware, except by guessing based on conversations with them, Turing figured that if an AI could do that, then we should give it the same benefit of the doubt.
Re: (Score:2)
> AGI used to be defined as passing the Turing Test, which large language models have done for a couple of years.
> What's the new test that AI is supposed to pass to be considered generally intelligent?
Do it without being connected to the internet.
Re: (Score:2)
> AGI used to be defined as passing the Turing Test, which large language models have done for a couple of years.
Nope, you made that up or are just repeating someone else who made that up. The term "AGI" came about in the late '90s in the AI research community, and it has been thrown around under different definitions ever since. But it has always had a roundabout definition of "the ability to satisfy goals in a wide range of environments".
AI milestone X "within the next few years" (Score:2)
Hasn't this been the advertising and funding sales pitch for AI for the last 30 years? That the next big thing is coming "in the next few years"?
Re: (Score:2)
No, you're thinking of fusion. :)
Re: (Score:2)
There's probably a FusionAI now that claims to be agentic. Not an actual useful autonomous agent, but "agentic".
Re: (Score:2)
Not so much a shifting goal as a new goal.
Who said the Turing test was AGI? It is "human-level AI": better than humans at many tasks, but far from all. Hence not "general".
Re: (Score:2)
AI just proved that the Turing tests we have today aren't very good at actually distinguishing a human from a machine. The tests were only good at telling humans from computers of that generation. Time for a new test.