Keep it simple, stupid: Agentic AI tools choke on complexity

(2026/01/26)


Agents may be the next big thing in AI, but they have limits beyond which they will make mistakes, so adopters should exercise extreme caution, a recent research paper warns.

According to a [1]definition by IBM, agentic AI consists of software agents that mimic human decision-making to solve problems in real time, building on generative AI techniques by using large language models (LLMs) to function in dynamic environments.

But while the industry hype machine pushes agentic AI as the next big thing, potential adopters should be wary. The paper, "[2]Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models" [PDF], argues that LLMs are incapable of carrying out computational and agentic tasks beyond a certain complexity level, above which they will deliver incorrect responses.

The paper uses mathematical reasoning to show that if a prompt specifies a computational task whose complexity exceeds that of the LLM's own core operations, the LLM will in general respond incorrectly.

Essentially, the authors argue that it is possible to present an LLM with an input specifying a task that requires more calculations than the model is capable of performing.
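
The counting argument can be illustrated with a toy sketch (ours, not the authors' code, and the specific numbers are illustrative assumptions): a transformer spends a bounded number of operations per generated token, so a task whose total cost exceeds that budget cannot, in general, be answered correctly.

```python
# Toy illustration of the paper's counting argument. The budget formula
# below is a rough upper bound, assuming attention dominates at
# ~O(n^2 * d) operations per generated token.

def transformer_ops_budget(context_len, d_model, output_tokens):
    """Rough upper bound on total operations a decoder can perform
    while generating output_tokens tokens over a context of context_len."""
    return output_tokens * (context_len ** 2) * d_model

def task_ops_required(n):
    """Hypothetical task cost: a problem whose best known
    algorithm needs ~2^n steps."""
    return 2 ** n

# Generous hypothetical model: 8K context, d_model 4096, 4K output tokens.
budget = transformer_ops_budget(context_len=8192, d_model=4096,
                                output_tokens=4096)

for n in range(10, 200, 10):
    if task_ops_required(n) > budget:
        print(f"tasks of size n={n} already exceed the compute budget")
        break
```

Even under these generous assumptions the budget is about 2^50 operations, so a task needing 2^60 steps is out of reach no matter how the model is prompted.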

This matters for agentic AI because there is considerable interest of late in the technology's potential to automate tasks across a range of applications, from those that simply provide information to those with real-world effects, such as making financial transactions or controlling and managing industrial equipment.

Furthermore, the paper claims that deploying agents to verify the correctness of another agent's solution will fail for the same reasons, because verifying a task is often more complex than the task itself.

"We believe this case to be especially pertinent since one of the most prevalent applications of LLMs is to write and verify software," the authors state.
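
A simple way to see why verification can cost as much as the task: consider checking an agent's claimed answer to "how many primes are below N?". This sketch (ours, not from the paper) shows that the only general way to verify the claim is to redo the full computation.

```python
# Toy illustration: verifying a claimed answer to a counting problem
# requires recomputing it, so the verifier does at least as much work
# as the original solver.

def count_primes_below(n):
    """Sieve of Eratosthenes: count primes strictly below n."""
    sieve = [True] * n
    sieve[:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # mark all multiples of p starting at p*p as composite
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return sum(sieve)

def verify_claim(n, claimed_count):
    """A general verifier has no shortcut: it must recompute the count."""
    return count_primes_below(n) == claimed_count

print(verify_claim(100, 25))  # True: there are 25 primes below 100
```

For harder cases, such as checking that a program meets its specification, verification can be strictly more complex than producing a plausible candidate, which is the situation the authors highlight for code-writing agents.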

The paper, which was published last year but seems to have gone largely unnoticed until flagged by tech publication Wired, was written by Varin Sikka and Vishal Sikka. The latter was formerly CTO of SAP and CEO of Infosys, and is now the founder of AI company Vianai Systems.

The conclusion the paper reaches is that "despite their obvious power and applicability in various domains, extreme care must be used before applying LLMs to problems or use cases that require accuracy, or solving problems of non-trivial complexity."

[8]Palo Alto Networks security-intel boss calls AI agents 2026's biggest insider threat

[9]Sandia boffins let three AI agents loose in the lab. Science, not chaos, ensued

[10]How to answer the door when the AI agents come knocking

[11]Cursor used agents to write a browser, proving AI can write shoddy code at scale

In other words, this doesn't mean that AI agents will necessarily be a disaster, but anyone developing and deploying such solutions needs to be mindful of whether assigned tasks exceed the underlying model's effective complexity limits.

As The Register reported recently, for example, scientists at the US Department of Energy's Sandia National Labs [12]made use of AI assistants to develop a novel approach for steering LED light, showing that the technology has promise.

However, the risks posed by such agents are top of mind for many senior execs, even featuring in a [13]panel discussion on cyber threats at the WEF in Davos recently.

Last year, research firm Gartner even forecast that [14]more than 40 percent of agentic AI projects will be cancelled by the end of 2027, citing escalating costs, unclear business value, and insufficient risk controls.

Work on mitigating these unfortunate limitations of LLMs is ongoing, the paper notes, with approaches including composite systems and constraining the models. ®




[1] https://www.ibm.com/think/topics/agentic-ai

[2] https://arxiv.org/pdf/2507.07505

[8] https://www.theregister.com/2026/01/04/ai_agents_insider_threats_panw/

[9] https://www.theregister.com/2026/01/26/sandia_ai_agents_feature/

[10] https://www.theregister.com/2025/12/09/okta_agent_control/

[11] https://www.theregister.com/2026/01/22/cursor_ai_wrote_a_browser/

[12] https://www.theregister.com/2026/01/26/sandia_ai_agents_feature/

[13] https://www.theregister.com/2026/01/21/davos_ai_agents_security/

[14] https://www.theregister.com/2025/06/29/ai_agents_fail_a_lot/



Tea?

Eclectic Man

It is not just complexity that gets AI to fail

"And do you know why I would like a cup of tea?"*

(Re: Nutrimatic Drinks Machine)

I'll get my coat, it's a dressing gown with a towel in the pocket ...

*I'm an ignorant ape descendent who doesn't know any better.

Lovely.

nematoad

...making financial transactions or controlling and managing industrial equipment.

Who do you sue when things go tits up? As they will.

A clearer recipe for disaster would be difficult to find.
