News: 1766583732


One real reason AI isn't delivering: Meatbags in manglement

(2025/12/24)


Feature Every company today is doing AI. From boardrooms to marketing campaigns, companies proudly showcase new generative AI pilots and chatbot integrations. Enterprise investments in GenAI have grown to roughly [1]$30-40 billion, yet research indicates 95 percent of organizations report zero measurable returns on these efforts.

In fact, only about 5 percent of custom AI initiatives ever [2]make it from pilot into widespread production, according to a widely shared report over the summer from MIT. This is the paradox of the current AI boom. Adoption is high, hype is higher, but meaningful business impact remains elusive. AI is everywhere except on the bottom line.

Kaseya CEO: Why AI adoption is below industry expectations [3]READ MORE

Why this disconnect? It's not that AI technology suddenly hit a wall. The models are more powerful than ever. The problem is how companies are using AI, not what AI can or cannot do. Organizations have treated AI like just another software deployment, expecting a plug-and-play solution. But AI behaves less like software and more like a new form of labor, one that requires training, context, and workflow integration.

The GenAI divide is between companies that install AI tools and those that build the capability to use them. Many enterprises are on the wrong side of this divide, convinced that buying an AI tool is equivalent to having an AI solution.

At the same time, employees often get more value from [4]shadow AI than officially sanctioned AI projects. Businesses are deploying plenty of AI, but only a handful have figured out how to extract real value from it.

Why companies fail

Simply bolting AI onto old processes doesn't work. Yet that's what most companies do. They treat AI as a plug-in to existing workflows that were never designed for predictive or adaptive tools. The result is that pilot projects abound, but they die on the vine. In fact, companies on average run tens of AI experiments, but few ever make it past the proof-of-concept stage.

MIT research shows that the vast majority of the pilots were executed in isolation, without rethinking how the work itself should change.


An AI agent might generate accurate outputs in a demo, but in the real world, it breaks the moment it encounters an edge case or an outdated procedure. Enterprises need to note that if they don't redesign the workflow around the AI – for example, to catch its errors, use its predictions, and complement its strengths – the AI will remain a science experiment rather than a production tool.
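The workflow redesign the article describes can be sketched in a few lines. This is a minimal illustration with entirely hypothetical names and a stubbed-out model call, not any real deployment: the point is the guardrail around the AI step, which validates the model's output and routes low-confidence edge cases to a human instead of trusting the demo.

```python
# Minimal sketch (hypothetical names, stubbed model): wrap an AI step in
# workflow guardrails -- catch its errors rather than trusting it blindly.

def classify_invoice(text: str) -> dict:
    """Stand-in for a model call; returns a label and a confidence score."""
    # A real deployment would call a model here; this stub just pattern-matches.
    label = "utilities" if "electricity" in text.lower() else "unknown"
    confidence = 0.9 if label != "unknown" else 0.3
    return {"label": label, "confidence": confidence}

def process_invoice(text: str, threshold: float = 0.8) -> str:
    """Accept the AI's answer only when it clears a confidence bar;
    otherwise fall back to human review -- the 'catch its errors' step."""
    result = classify_invoice(text)
    if result["confidence"] >= threshold:
        return result["label"]
    return "human_review"  # edge case: don't let the pilot fail silently

print(process_invoice("Electricity bill, March"))  # confident -> label
print(process_invoice("??? illegible scan ???"))   # uncertain -> review queue
```

The design choice worth noting is that the fallback path is part of the workflow, not an afterthought: the process is built to absorb the model's failures.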



Another issue stems from the AI model data and context. When AI pilots fail, executives blame the technology. But research found a deeper problem: the AI tools didn't learn. They couldn't retain context or improve over time. In simple terms, the AI was intelligent but suffered from amnesia after every interaction. This is the illusion many firms fall into. They think they have a smart system, but what they really have is a stateless algorithm that never improves.

Companies keep focusing on better models or more training data, but what they need is AI that accumulates context like an employee: learning company terminology, remembering past decisions, and getting better with each task. Lacking this, even a state-of-the-art model will disappoint in practice.
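The stateless-versus-stateful distinction can be made concrete. This is a toy sketch with entirely hypothetical class names and a fake "model" that just echoes its input; the point is the memory layer that the article says most pilots lack.

```python
# Toy sketch (all names hypothetical): a stateless call vs one that
# accumulates context. The "model" is a stub that echoes its input; the
# interesting part is the memory layer carried across interactions.

class StatelessAssistant:
    def answer(self, question: str) -> str:
        return f"answer({question})"  # amnesia: nothing carries over

class ContextualAssistant:
    def __init__(self) -> None:
        self.memory: list[str] = []  # terminology, past decisions, ...

    def remember(self, fact: str) -> None:
        self.memory.append(fact)

    def answer(self, question: str) -> str:
        context = "; ".join(self.memory)  # prepend accumulated context
        return f"answer({question} | context: {context})"

bot = ContextualAssistant()
bot.remember("'GTM' means go-to-market in this company")
bot.remember("Q3 report uses EUR, not USD")
print(bot.answer("Draft the Q3 summary"))
```

In a real system the memory layer would be a retrieval store rather than a list, but the contrast is the same: the second assistant gets better-informed with each interaction, the first starts from zero every time.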


The standout successes did something different. They brought in people who understood processes, not just models, hiring or contracting process designers, workflow architects, and domain experts who could translate AI capabilities into day-to-day operations.

OpenAI's ChatGPT is so popular that almost no one will pay for it [9]READ MORE

The MIT study found that companies trying to build everything in-house had much lower success rates. Internal AI projects succeeded only about a third of the time, whereas collaborations with external partners, who often bring in domain-specific solutions, doubled the chances of success.

Another striking pattern with many successful deployments was a bottom-up approach. They often began with employees on the front lines tinkering with AI to solve real problems. When these experiments showed promise, management then supported them and scaled them up. This meant that AI was solving a felt need, rather than a top-down mandated solution in search of a problem.

[10]Top companies ground Microsoft Copilot over data governance concerns

[11]AI agents get office tasks wrong around 70% of the time, and a lot of them aren't AI at all

[12]McKinsey wonders how to sell AI apps with no measurable benefits

[13]Thousands of AI agents later, who even remembers what they do?

The bottom line is that the 5 percent focus on capability, not just tech. They align projects to real business goals, partner for domain expertise, and continuously adapt.

Where AI actually works

Another contrarian finding here is that the real ROI from AI isn't coming from the shiny, customer-facing projects everyone talks about. It's in the back office, in the "boring" stuff companies often overlook.

There is a massive investment bias that plagues many enterprises, with considerable AI budgets allocated to marketing and sales because those initiatives are visible and excite executives. Yet ironically, the biggest payoffs are being realized in corners like operations, finance, and supply chain.

In fact, some of the most significant cost savings come from automating back-office workflows, such as invoice processing, compliance monitoring, and report generation. One reason is that this is low-hanging fruit: many back-office processes involve manual drudgery or are outsourced to BPO firms, so an AI that can handle those tasks yields immediate savings.
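The report-generation case is the least glamorous of the three, which is rather the point. A hedged sketch, with made-up data and names: the kind of roll-up someone used to compile by hand, reduced to a few lines of automation.

```python
# Sketch (hypothetical data and names): automating a back-office roll-up --
# grouping raw expense records by department and totalling them.

from collections import defaultdict

def summarise_expenses(records: list[dict]) -> dict:
    """Total expense amounts per department."""
    totals: dict[str, float] = defaultdict(float)
    for r in records:
        totals[r["department"]] += r["amount"]
    return dict(totals)

records = [
    {"department": "ops", "amount": 120.0},
    {"department": "finance", "amount": 80.0},
    {"department": "ops", "amount": 30.0},
]
print(summarise_expenses(records))  # {'ops': 150.0, 'finance': 80.0}
```

Nothing here needs a frontier model; the savings come from removing the manual step, which is why this unglamorous work pays back faster than a customer-facing chatbot.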

UK government trial of M365 Copilot finds no clear productivity boost [14]READ MORE

So why do companies keep throwing money at AI for sales, marketing, and customer chatbots instead? It's a case of visibility over value. Front-office projects have easily observable metrics, which make for great headlines and happy board members. On the other hand, the back-office improvements often go unnoticed outside of CFO circles.

In the end, the story of AI in 2025 is a mirror to every major technology upheaval we've seen. Technology alone changes nothing unless organizations shift too. The grand irony is that we have powerful AI models at our fingertips, yet most businesses are stuck in pilot purgatory, scratching their heads at the lack of ROI.

The evidence is clear that this isn't a tech failure. It's a management failure. The divide between the AI winners and laggards is not driven by model quality or regulation, but by approach. AI won't transform business until the enterprise is willing to transform itself. That is the crux of the paradox, and the challenge that forward-looking leaders must answer. ®




[1] https://www.theregister.com/2025/08/18/generative_ai_zero_return_95_percent/

[2] https://www.theregister.com/2025/08/18/generative_ai_zero_return_95_percent/

[3] https://www.theregister.com/2025/06/26/kaseya_ceo_why_ai_adoption/

[4] https://www.theregister.com/2025/10/14/microsoft_warns_of_the_dangers/


[9] https://www.theregister.com/2025/10/15/openais_chatgpt_popular_few_pay/

[10] https://www.theregister.com/2024/08/21/microsoft_ai_copilots/

[11] https://www.theregister.com/2025/06/29/ai_agents_fail_a_lot/

[12] https://www.theregister.com/2025/10/09/mckinsey_ai_monetization/

[13] https://www.theregister.com/2024/11/21/gartner_agentic_ai/

[14] https://www.theregister.com/2025/09/04/m365_copilot_uk_government/

[15] https://whitepapers.theregister.com/



Doctor Syntax

"They think they have a smart system, but what they really have is a stateless algorithm that never improves."

And what were they sold as by the sales-snakes? A smart system that could learn.

What an idiotic article

VicMortimer

The best way to include AI in your company is to not use it.

You're setting yourself up for failure if you do. When this bubble pops, AI is going to get a LOT more expensive, and you'll have skyrocketing costs that may just bankrupt you if you can't rip out AI and replace it with people. Oh, you fired the people? Well, bye.

Marketing

elsergiovolador

The need for a marketing department is an indicator that your product is shite.

Re: Marketing

nematoad

I disagree. It's no use making something and then not being able to sell it.

I know because I found myself in that position.

I started a tiny company which designed and made etched-brass items for model soldier enthusiasts. A fair bit of work on my part and a small outlay to get the parts made. Then I hit a brick wall: the people I was hoping would sell the parts backed out and I had nothing to replace them with. So the company folded and I was left with inventory that just sat there. Eventually I went to eBay. It's not perfect but it at least helps me market what I have and get back some of the money invested, but not my time.

You may not like the tactics used by marketing departments but they are a distasteful necessity. Just like lawyers.

Re: Marketing

DecyrptedGeek

True, but I've used more than one shitty product bought on the strength of a fancy sales brochure, when all the tech-savvy staff pointed out it was shit and that another company made a superior product. Seems to me it's the higher-ups and sales staff that fuck up projects.

Interesting Etymology

mickaroo

You can’t spell “fail” without “AI”.

Since when are LLMs intelligent!?

StewartWhite

"In simple terms, the AI was intelligent..."

No, it wasn't intelligent - go to the back of the class and repeat 1,000 times "LLMs are stochastic parrots" and no cheating using ChatGPT because it will eventually degrade and return "Limbs are pork-fascistic carrots".

Decay

The sad reality is most users, business departments and management want a no-thought-required solution to a problem, and AI is sold as that solution. I want AI version x, y or z to do this work for me faster and cheaper, without putting any effort into thinking about how it will do it or what we are doing that could be improved. Just set it up, we will push the button, and it all just works. This mindset isn't limited to AI. We have all been victims of sales drones convincing the powers that be that product X is plug and play: just install it alongside your accounting system, CRM, etc. and it will be fantastic; just sign this contract and expect invoices every month.

The difference with AI is that the hype, and the decision makers' experience of AIs writing grade-A management waffle emails and memos, has convinced many that if it can write my emails so well, then just imagine what it could do building reports, performing complex analysis, etc. I have somewhat solved this problem by having the same people perform some basic analysis, stats or reports using a fairly simple dataset that they understand. Once the AI has mangled it a few times, made obvious errors or just plain made stuff up, they usually respond with shock, or occasionally with "it must be that particular AI", so we try a few other flavors and they soon realize how bad it can be. Doesn't stop the sales drones trotting out the "our product is trained specifically for this scenario" bull, but at least it gets the users or management asking questions and doing some critical thinking, which seems to be a lost art.

Try understanding the problem BEFORE solving it

wub

Great article - very insightful. I'm a bit disappointed in myself for needing to have this explained to me. I do have a small bit of relevant experience in a similar area.

I was tasked with obtaining a LIMS (laboratory information management system) for the lab I worked at in the dim and distant past. As part of my education, I attended a dog and pony show for one of the leading providers. In my mind, this was still early days for LIMS products, but the first thing the speaker did was ask for a show of hands: was this the first LIMS their companies had bought? It turned out that there were already folks looking for a fourth LIMS.

I am far from a management expert, but even I realized immediately that if you had already failed three times at trying to make one of these products benefit your company, there might well be a systematic failure in that organization that needed to be addressed first.

When it was time for actual implementation of a product for us, I was struck by the fact that no one wanted to do the work of figuring out IN DETAIL how our internal processes actually worked, and in the cases where I did get some cooperation, no one seemed to understand that the only things the system could do for us were the ones we told it about - if we only did something once in a while, we could never do it at all, unless we also explained how and when to do that to the new system.

I'm surprised any of these things work, at all. You're absolutely right - many if not most of the failures come down to management failing to understand the problem first.
