Worry not. China's on the line saying AGI still a long way off
- Reference: 1741172470
- News link: https://www.theregister.co.uk/2025/03/05/boffins_from_china_calculate_agi/
Generative AI models have passed the Turing Test [1] and now the tech industry is focused on Artificial General Intelligence (AGI), the hypothetical point at which a computer can understand or learn any intellectual task as well as a human.
Presently, AGI is vaguely defined and does not exist, though there are people already trying to prevent its emergence [2]. Among AI boosters, AGI is a bit like quantum computing – a distant goal cited for funding [3].
Citing intelligence tests devised by Turing and others – though disappointingly not the Voight-Kampff test [4] from Blade Runner – researchers in China have proposed a method called the Survival Game to determine whether AI models qualify as AGI.
Authors Jingtao Zhan, Jiahao Zhao, Jiayu Li, Yiqun Liu, Bo Zhang, Qingyao Ai, Jiaxin Mao, Hongning Wang, Min Zhang, and Shaoping Ma – affiliated with Tsinghua University and Renmin University of China – describe their approach in a preprint paper [5] titled "Evaluating Intelligence via Trial and Error."
"The main idea behind this paper is to assess whether current AI systems can find solutions through continuous trial and error," Jingtao Zhan, a PhD student in computer science at Tsinghua University and corresponding author, told The Register .
"If an AI system can find a solution within a limited number of attempts, it is considered to 'survive'; otherwise, it 'goes extinct.'"
Models that survive are allowed to progress to other tests; those that don't are retrained until they pass – a significant undertaking.
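Based on Zhan's description, that criterion amounts to a bounded trial-and-error loop. Here's a minimal sketch – not the authors' actual code, with model_answer and is_correct as hypothetical stand-ins:

```python
import random

def model_answer(task, feedback):
    # Hypothetical stand-in for querying the model under test,
    # optionally conditioned on feedback from earlier failed attempts.
    return random.choice(["A", "B", "C", "D"])

def is_correct(task, answer):
    # Hypothetical stand-in for checking an answer against ground truth.
    return answer == task["gold"]

def survival_game(task, max_attempts=10):
    """True ('survives') if the model answers correctly within
    max_attempts trial-and-error rounds; False ('goes extinct')."""
    feedback = None
    for attempt in range(max_attempts):
        answer = model_answer(task, feedback)
        if is_correct(task, answer):
            return True                                   # survives
        feedback = f"attempt {attempt + 1} incorrect: {answer}"
    return False                                          # goes extinct

print(survival_game({"question": "...", "gold": "B"}))
```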
The Survival Game covers various knowledge domains. In image classification, for example, the test assesses how many trial-and-error attempts are required before the model comes up with a correct classification. In question answering, models are tested against three well-known datasets: MMLU-Pro, NQ, and TriviaQA. In mathematics, the test measures performance using three math datasets: CMath, GSM8K, and the MATH competition dataset.
Supporting code has been published to GitHub [6].
"The Survival Game is essentially a simplified form of natural selection, and we aim to use this approach to test whether AI can adapt and learn through such a mechanism," said Zhan.
"If an AI system passes this test, it means it can autonomously find solutions without human supervision and operate independently. This serves as both my perspective on AGI and a way to evaluate it."
- Why making pretend people with AGI is a waste of energy [7]
- As China embraces Big Tech again, Alibaba plans vast spend to push for artificial general intelligence [8]
- Microsoft warns Trump: Where the US won't sell AI tech, China will [9]
- Phantom of the Opera: AI agent now lurks within browser, for the lazy [10]
The researchers' results suggest that even if Moore's Law – the projected doubling of chip transistor density every two years – were to continue beyond its arguable demise in 2016 [11], the cost to build a neural network capable of passing the above AGI tests would be exorbitant, and it would take 70 years for hardware to be able to support the anticipated model.
"Projections suggest that achieving the autonomous level for general tasks would require 10 26 parameters," the paper says.
That's a huge number: "Five orders of magnitude higher than the total number of neurons in all of humanity’s brains combined," the authors observe – a human brain has about 10^11 neurons, and the global population is approaching 10^10 people, for a total of roughly 10^21 neurons.
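The arithmetic behind that claim is easy to check using the figures as quoted (the population term is rounded up from roughly 8 × 10^9):

```python
import math

# Sanity check of the "five orders of magnitude" claim, using the
# figures quoted in the paper and article.
parameters        = 10**26   # projected parameter count for autonomous-level AGI
neurons_per_brain = 10**11   # approximate neurons in one human brain
people            = 10**10   # world population, rounded up from ~8e9

all_human_neurons = neurons_per_brain * people       # 10^21
print(math.log10(parameters / all_human_neurons))    # 5.0 -> five orders of magnitude
```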
Setting aside computation costs such as training and inference, just loading a model with that many parameters onto Nvidia H100 GPUs would be an untenable extravagance.
"Since the memory of an H100 GPU is 80GB, we would need 5 × 10 15 GPUs," the paper says. "Based on the cost of H100 GPUs ($30,000) and the market value of Apple Inc ($3.7 trillion) in February 2025, the total value of these GPUs would be equivalent to 4 × 10 7 times the market value of Apple. As we can see, without breakthroughs in hardware and AI technology, it is infeasible to afford scaling for autonomous-level intelligence."
Zhan argues these results indicate AI technology has a long way to go before it can autonomously solve unknown problems, particularly in an open environment where it must adapt through natural selection.
"While current AI systems may perform well in certain benchmarks, achieving high accuracy in predefined tasks, they struggle significantly when faced with problems that require continuous trial and error to find solutions," said Zhan.
The study, Zhan observes, shows that when AI models fail, they rarely adapt to come up with a correct solution through iterative attempts.
"In the Survival Game, this means it cannot survive," said Zhan. "Such trial-and-error learning is crucial in real-world applications, particularly in areas like tool use, autonomous agents, and self-driving cars. If AI can truly learn to solve problems through trial and error, it will mark a significant step toward widespread real-world deployment."
Food for thought. Whether you agree with the team's methodology and approach or not – and some of us here are a little skeptical of the study – we welcome people trying to calculate the trajectory of AI technology without the hype or grift. ®
[1] https://www.nature.com/articles/d41586-023-02361-7
[2] https://www.theregister.com/2025/02/19/ai_activists_seek_ban_agi/
[3] https://www.theregister.com/2024/08/22/gartner_agi_hype_cycle/
[4] https://www.youtube.com/watch?v=Umc9ezAyJv0
[5] https://arxiv.org/abs/2502.18858
[6] https://github.com/jingtaozhan/IntelligenceTest
[7] https://www.theregister.com/2024/04/14/agi_development_kognitos/
[8] https://www.theregister.com/2025/02/23/asia_tech_news_roundup/
[9] https://www.theregister.com/2025/02/28/microsoft_trump_ai_exports/
[10] https://www.theregister.com/2025/03/03/phantom_of_the_opera_browser/
[11] https://cap.csail.mit.edu/death-moores-law-what-it-means-and-what-might-fill-gap-going-forward
That's a huge number: "Five orders of magnitude higher than the total number of neurons in all of humanity’s brains combined," the authors observe
Here I am, brain the size of a planet, and you have me parking cars... eliminating humans!!!
Five (Fifteen) orders of magnitude
That's a huge number: "Five orders of magnitude higher than the total number of neurons in all of humanity’s brains combined," the authors observe
The numbers printed differ by 15 orders of magnitude.
That is rather incomparable, as LLMs use feed-forward networks that require many, many passes during learning, but only a single pass during inference. A feedback network with loops, like biological neurons, can easily use the same connections many times, and can do so with "memory". A feed-forward network has to do all the work using every connection only once.
Also, the relevant parameter for human brains is not the number of neurons, but the number of synapses, which is already four orders of magnitude larger than the number of neurons (which they got wrong anyway). And each synapse is governed by more than one parameter.
In short, this comparison is utterly meaningless.
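For what it's worth, the commenter's arithmetic can be made concrete (the 10^4 synapses-per-neuron ratio is the commenter's figure; common estimates run around 10^3 to 10^4):

```python
# The commenter's comparison: parameters vs humanity's synapses, not neurons.
parameters          = 10**26
neurons_per_brain   = 10**11
synapses_per_neuron = 10**4    # commenter's "four orders of magnitude" ratio
people              = 10**10

total_synapses = neurons_per_brain * synapses_per_neuron * people   # 10^25
print(parameters / total_synapses)   # 10.0 -> barely one order of magnitude apart
```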
Generative AI models have passed the Turing Test ...
... which was proposed in 1950! (see https://www.britannica.com/technology/Turing-test )
That's a lot of technological change for those "electronic thinking machines"!
Re: Generative AI models have passed the Turing Test ...
So what is the normal AI IQ level? Looking around at AI everywhere, it looks like its IQ level is about 55 to 85, never any higher. We're using AI everywhere these days but never see anything indicating that its IQ level has been determined, like ours was in school when we were kids.
I'd probably be happier with AI if it were closer to my IQ level, or even higher. I expect we'd all feel happier if we knew AI IQ was close to ours.
Vacuum
One fundamental mistake they all seem to be making is evaluating their models in isolation, as if intelligence exists in a vacuum.
For an AGI entity to function in a meaningful way, it must not only process information but also contextualise success and failure within a competitive and cooperative framework. Intelligence does not emerge from raw computation alone - it arises from interactions, competition, and the need to adapt. To achieve this, AGI must operate within an environment where millions of models are evaluated simultaneously, each observing a partition of others’ results, learning from their successes and failures. Intelligence is not just about solving problems but about deciding which problems are worth solving based on observed outcomes.
Additionally, mere survival (avoiding failure) is not a sufficient driving force. Evolution has shown that species driven only by survival tend to develop just enough intelligence to maintain existence but not necessarily to innovate or generalise. Consider nature: organisms that focus solely on avoiding death (such as simple prey animals) develop survival strategies, not higher reasoning. Intelligence capable of abstraction, generalisation, and long-term planning arises when an entity has a driving incentive beyond mere survival - whether that be dominance, curiosity, power, or wealth.
Humans exemplify this: our intelligence is not a direct by-product of survival but of outcompeting others for resources, status, and influence. This is why human ambition often follows maxims like "Get rich or die tryin'" - where the objective is not simply to avoid death, but to achieve an aspirational goal, even at great risk. An AGI trained without such a pressure system will stagnate at the level of an adaptive but ultimately narrow intelligence.
For AGI to truly generalise, it must exist in a framework where:
- It learns from a vast, evolving ecosystem of competing and cooperating models.
- It is driven by an incentive beyond mere function or survival - an incentive tied to achieving, not just existing.
- It actively shapes its own objectives, rather than being restricted to static, pre-programmed tasks.
Without these elements, AGI development risks becoming an endless loop of solving narrow, predefined tasks rather than evolving into an autonomous, self-directed intelligence.
Re: Vacuum
You make it sound as if evolution is a choice.
Every organism fills a niche and has just the required amount of faculties to perform in that niche.
I would suggest that instead of survival being the sole motivator, there are at least three:
- Food
- Sex
- Survival
- TV....
It's not critical to a species for an individual to survive if they have already successfully mated. Longer lived animals have greater "free time" and can afford to experiment.
The other thing I think is missing from this study's approach is competition. Not in the sense of one against another separately scoring points, but where one intelligence is directly threatened by another. That is where learning and innovation come to the fore, in unforeseen challenges.
Re: Vacuum
But then, following your logic, the AGI's first ambition is to acquire all the electricity it can. Good luck trying to turn it off, as it will have thought of that - stopping you from depriving it of its primary goal.
Reverse Turing
How soon can we hope for a Reverse Turing, to check a human is not a robot?
Like requiring a prospective representative to pass one in order to become an MP (or equivalent)?
Re: Reverse Turing
Please click on any image which includes a motorbike...
Re: Reverse Turing
"How soon can we hope for a Reverse Turing, to check a human is not a robot?"
That won an Oscar last week:
I'm Not a Robot (film)
https://en.wikipedia.org/wiki/I%27m_Not_a_Robot_(film)
"Such trial-and-error learning is crucial in real-world applications, particularly in areas like ... self-driving cars"
Trial and error is not a good idea for self-driving cars.
Can't remember who said it ...
... but they said something to the effect that if what we have now in the 'AI' sphere is like getting a person into Earth orbit, then AGI is like interstellar FTL spaceflight.