Google DeepMind promises to help you evolve your algos
- Reference: 1747294269
- News link: https://www.theregister.co.uk/2025/05/15/google_deepmind_debuts_algorithm_evolving/
Computer algorithms are sets of instructions used to solve complex problems. AlphaEvolve is pitched as a useful tool for mathematicians, scientists, and engineers working on algorithmic tasks, ranging from abstract mathematical proofs to scheduling jobs across datacenters. It promises to evaluate the performance of code using automated metrics, then proposes improvements by evolving new versions of the algorithm.
For example, in an effort to improve matrix multiplication - a core operation in machine learning - AlphaEvolve discovered a new algorithm for multiplying 4×4 complex-valued matrices using just 48 scalar multiplications, surpassing [1]Strassen’s 1969 result, Google explains.
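To see what kind of result AlphaEvolve is competing with, here is a sketch (not AlphaEvolve's new algorithm) of Strassen's classic 1969 trick, which multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8:

```python
# Illustrative sketch: Strassen's 2x2 scheme, the baseline AlphaEvolve's
# 4x4 complex-valued result improves upon. Only 7 products (m1..m7) are
# computed; everything else is additions and subtractions.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Applied recursively to block matrices, this reduces the asymptotic cost of matrix multiplication below cubic, which is why shaving even one multiplication from a small base case matters.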
Because AlphaEvolve focuses on code improvement and evaluation rather than representing hypotheses in natural language like Google's [3]AI co-scientist system, hallucination is less of a concern.
Inside Google, researchers say AlphaEvolve has improved the efficiency of data center scheduling, chip design, and AI training. They also credit it with helping design faster matrix multiplication algorithms and generating new solutions to long-standing math problems.
"AlphaEvolve pairs the creative problem-solving capabilities of our [6]Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas," the AlphaEvolve team explains in a [7]blog post.
DeepMind's use of the term "evolve" makes the coding agent's technological process sound organic. The accompanying [9]paper [PDF] also uses terms with biological associations: "AlphaEvolve extends a long tradition of research on evolutionary or genetic programming, where one repeatedly uses a set of mutation and crossover operators to evolve a pool of programs."
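The evolutionary loop the paper describes can be sketched in a few lines. This is a toy illustration, not DeepMind's implementation: the "pool" here holds lists of numbers and the evaluator is a simple fitness function, where AlphaEvolve's pool holds programs, its mutations come from Gemini, and its evaluators are real benchmarks:

```python
import random

# Minimal evolutionary-programming sketch: score a pool of candidates,
# keep the most promising survivors, and breed the next generation by
# mutating them.
def evolve(pool, fitness, mutate, generations=100, survivors=10):
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)   # evaluate and rank
        parents = pool[:survivors]             # keep the best candidates
        children = [mutate(random.choice(parents))
                    for _ in range(len(pool) - survivors)]
        pool = parents + children              # next generation
    return max(pool, key=fitness)

# Toy problem: evolve a vector toward the all-ones target.
target = [1.0] * 5
fitness = lambda v: -sum((x - t) ** 2 for x, t in zip(v, target))
mutate = lambda v: [x + random.gauss(0, 0.1) for x in v]
best = evolve([[0.0] * 5 for _ in range(30)], fitness, mutate)
```

The key structural point is that the loop never needs the evaluator to explain *why* a candidate is good; a score is enough to drive selection, which is what lets automated metrics stand in for human judgment.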
Asked whether Google's description of the agent is overly anthropomorphic, Gary Marcus, an AI expert, author, and critic, told The Register that the terminology is fair enough.
"The use of the term is fine, standard in that field and not unreasonable," he said. "It’s great to see DeepMind think outside the box of pure large language models, and this is one more sign that [10]neurosymbolic techniques that combine neural networks with ideas from classical AI, is the way of the future."
Stuart Battersby, CTO of AI firm Chatterbox Labs, expressed optimism about AlphaEvolve's potential, while also emphasizing the need to keep security in mind during any AI deployment.
"The development of AI algorithms needs to happen at pace, and so it is great to see AlphaEvolve helping to automate this process," he told The Register. "This means that AI solutions not only get through the development cycle quicker, but hopefully produce better results too – it seems that the AlphaEvolve team have provided evidence of this."
Google has used AlphaEvolve to optimize the performance of its [12]Borg compute cluster management system in its datacenters. According to the researchers, the coding agent proposed a heuristic function for online compute job scheduling that outperformed one running in production.
"This solution, now in production for over a year, continuously recovers, on average, 0.7 percent of Google’s worldwide compute resources," the researchers claim.
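Google has not published the heuristic itself, but the shape of the problem is familiar: jobs demand both CPU and memory, and resources get "stranded" when a machine exhausts one dimension with plenty of the other left over. A hypothetical scoring heuristic of the kind at issue (the function names and scoring rule below are illustrative, not Google's) might look like:

```python
# Hypothetical bin-packing heuristic for online job placement. Scores each
# machine for a job so that placements leave CPU and memory in balanced
# proportion, reducing stranded capacity that can't be used because the
# other resource is exhausted.
def score(free_cpu, free_mem, job_cpu, job_mem):
    rem_cpu = free_cpu - job_cpu
    rem_mem = free_mem - job_mem
    if rem_cpu < 0 or rem_mem < 0:
        return float("-inf")          # job does not fit on this machine
    # Prefer placements whose leftover CPU and memory stay balanced.
    return -abs(rem_cpu - rem_mem)

def place(job_cpu, job_mem, machines):
    # machines: list of (free_cpu, free_mem) tuples; pick the best-scoring one.
    return max(machines, key=lambda m: score(m[0], m[1], job_cpu, job_mem))
```

A scoring function like this is exactly the kind of small, sharply evaluable code artifact an evolutionary search can iterate on: simulate a workload, measure stranded capacity, and feed the number back as fitness.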
The DeepMind team also note that AlphaEvolve helped optimize matrix multiplication operations involved in the training of Google's Gemini model family by speeding up its [17]Pallas kernel by 23 percent, for a training time reduction of 1 percent.
To evaluate AlphaEvolve's utility, the DeepMind team gave it more than 50 open problems in mathematical analysis, geometry, combinatorics, and number theory.
"In roughly 75 percent of cases, it rediscovered state-of-the-art solutions, to the best of our knowledge," the researchers claim. "And in 20 percent of cases, AlphaEvolve improved the previously best known solutions, making progress on the corresponding open problems."
Google is planning to offer early access to academics. Those interested can apply [18]here. ®
[1] https://en.wikipedia.org/wiki/Strassen_algorithm
[3] https://research.google/blog/accelerating-scientific-breakthroughs-with-an-ai-co-scientist/
[6] https://deepmind.google/technologies/gemini/
[7] https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
[9] https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf
[10] https://arxiv.org/abs/2305.00813
[12] https://research.google/pubs/large-scale-cluster-management-at-google-with-borg/
[17] https://docs.jax.dev/en/latest/pallas/index.html
[18] https://forms.gle/WyqAoh1ixdfq6tgN8
Correctness
You can measure whether or not the optimised version is more efficient, but how do you know it's actually correct?
Amid all this hoopla about AI writing code I've yet to see a single report of AI writing *tests*. Maybe because that's just fundamentally impossible. Turning fuzzy requirements into a concrete specification in the form of test cases requires some actual intelligence.
Get the tests right and the code is easy, that's the core of TDD. If there are no tests to support the AI generated slop then the overhead of proving it's right before accepting it greatly outweighs any possible advantage.
Re: Correctness
FWIW for a new algorithm, you don't write a test, you write a proof.
And a proof for Strassen's algorithm, or any of the others that followed[1], is well within the capabilities of non-LLM-type programs (all you need is a good old symbolic maths package - but before we got those working that was also one of the items on the AI researchers' list)[2].
If you are setting an LLM - or any other statistical search - on this task, then you'd (hopefully) just bolt a symbolic maths prover on the end, to generate the carrot-or-stick reinforcement feedback.
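The commenter's point can be made concrete: correctness of a fast multiplication scheme is a symbolic identity, not a test suite. Assuming a symbolic maths package such as SymPy is available, checking Strassen's 2×2 formulas against the definition of matrix multiplication is a few lines:

```python
import sympy as sp

# Symbolic proof sketch: Strassen's seven products, recombined, must equal
# the textbook definition of 2x2 matrix multiplication for ALL entry values,
# which expand() verifies exactly (no test cases, no sampling).
a, b, c, d, e, f, g, h = sp.symbols("a b c d e f g h")
m1 = (a + d) * (e + h)
m2 = (c + d) * e
m3 = a * (f - h)
m4 = d * (g - e)
m5 = (a + b) * h
m6 = (c - a) * (e + f)
m7 = (b - d) * (g + h)
strassen = [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
direct = [[a * e + b * g, a * f + b * h],
          [c * e + d * g, c * f + d * h]]
ok = all(sp.expand(strassen[i][j] - direct[i][j]) == 0
         for i in range(2) for j in range(2))
```

Because the entries are symbols rather than numbers, a zero difference after expansion holds for every possible input, which is a proof in the sense the comment describes, and exactly the kind of automated verifier that could gate an evolutionary search.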
[1] Strassen was not the last to improve it, he is the one that pointed out it was possible, and useful, to do so
[2] If it works, it isn't AI (anymore)
Improved on Strassen's 1969 result
Now go and read the linked Wikipedia page - even it points out that:
1 - Strassen's version has been improved a number of times; his importance lies in showing improvement was possible, not in claiming to have found *the* optimal solution.
2 - in practice, you don't really bother with such tricks, because the costs from movement of data swamp the gains from removing a couple more multiplications[0].
So, to be *useful* and (even vaguely) worth the costs of using LLMs (instead of, say, improving upon the techniques from the 1970s & '80s that looked for and found new identities[1]) we need AT LEAST a demonstration that the new algorithm takes into account load/store AND decomposition of larger arrays into the 4x4 subunits AND recombination (including elimination of the excess zero rows and columns for a non 4^k x 4^j matrix) into the final result.
Then we start talking about deriving comprehensive test suites for the code that demonstrates this algorithm.
[0] consider if the demonstration code was written "the obvious way" where element A1,1 was always written as matrixA[1][1], A1,2 as... and all the pointer arithmetic involved. Then the work needed to improve on that...
[1] curses, don't recall the name - there was a novel proof for triangle identities and ... nope, gone, and all Google talks about today is DeepMind; funny that.
Matt Parker already mentioned this....
How remarkable - I watched Matt Parker talking about AlphaEvolve yesterday....
[1]Stand-up Maths - fascinating stuff. Especially the bit about matrix multiplication and how AlphaEvolve improved on (a little) or equalled human attempts at efficiency but in two cases was a little worse.
[1] https://youtu.be/sGCmu7YKgPA?si=o70k-bYgvUyEputk