Some signs of AI model collapse begin to reveal themselves
- Reference: 1748341868
- News link: https://www.theregister.co.uk/2025/05/27/opinion_column_ai_model_collapse/
- Source link:
Ordinary search has [1]gone to the dogs. Maybe as [2]Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I've noticed that AI-enabled search, too, has been getting crappier.
Guide for the perplexed – Google is no longer the best search engine [3]READ MORE
In particular, I'm finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the US Securities and Exchange Commission's (SEC) mandated annual financial reports for public companies, I get numbers from sites purporting to be summaries of business reports. These bear some resemblance to reality, but they're never quite right. If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get… interesting.
This isn't just Perplexity. I've done the exact same searches on all the major AI search bots, and they all give me "questionable" results.
Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse. In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and "irreversible defects" in performance. The final result? A [4]Nature 2024 paper stated, "The model becomes poisoned with its own projection of reality."
Model collapse is the result of three different factors. The first is [6]error accumulation, in which each model generation inherits and amplifies flaws from previous versions, causing outputs to drift from original data patterns. Next, there is the loss of tail data: rare events are erased from training data, and eventually entire concepts are blurred. Finally, feedback loops reinforce narrow patterns, creating repetitive text or biased recommendations.
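Those first two factors are easy to see in a toy simulation: fit a model to data, then refit each generation to samples drawn from the previous generation's fit, and watch the tails vanish. A minimal sketch, using illustrative Gaussians rather than any real training pipeline:

```python
import random
import statistics

def collapse_demo(generations=400, n=10, seed=42):
    """Fit a Gaussian to data, then refit each generation to samples
    drawn from the previous generation's fit. With small samples, the
    estimated spread drifts toward zero: rare 'tail' events stop being
    generated, so later generations never see them."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # the "real" data distribution
    spreads = []
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(samples)   # model = fitted mean...
        sigma = statistics.stdev(samples)  # ...and fitted spread
        spreads.append(sigma)
    return spreads

spreads = collapse_demo()
print(spreads[0], spreads[-1])  # the spread collapses over generations
```

The mean wanders while the spread shrinks generation over generation, which is the "distorted data distributions" the Nature paper describes in miniature.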
I like how the AI company Aquant [9]puts it: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality."
I'm not the only one seeing AI results starting to go downhill. In a recent Bloomberg Research study of Retrieval-Augmented Generation (RAG), the financial media giant found that 11 leading LLMs, including GPT-4o, Claude-3.5-Sonnet, and Llama-3-8B, [10]would produce bad results when fed over 5,000 harmful prompts.
[12]RAG, for those of you who don't know, enables large language models (LLMs) to pull in information from external knowledge stores, such as databases, documents, and live in-house data stores, rather than relying just on the LLMs' pre-trained knowledge.
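The idea can be sketched in a few lines, with no assumptions about any particular vendor's stack: retrieve the most relevant documents, then stuff them into the prompt. The word-overlap scoring here is a crude stand-in for the embedding search a real pipeline would use, and the document strings are invented for illustration:

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query and return
    the top k -- a stand-in for real vector similarity search."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Prepend retrieved context so the LLM answers from the external
    knowledge store instead of its pre-trained weights."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge store: two filings and one fluff piece.
docs = [
    "Acme Corp 10-K: fiscal 2024 revenue was $1.2B.",
    "Acme Corp blog: we love our customers.",
    "Widget Inc 10-K: fiscal 2024 revenue was $0.9B.",
]
prompt = build_prompt("What was Acme Corp revenue in fiscal 2024?", docs)
print(prompt)
```

The catch, as the Bloomberg study shows, is that whatever lands in that context window goes straight into the answer, which is exactly how retrieved private data or a poisoned source leaks out the other end.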
You'd think RAG would produce better results, wouldn't you? And it does. For example, it tends to reduce AI hallucinations. But, simultaneously, it increases the chance that RAG-enabled LLMs will leak private client data, create misleading market analyses, and produce biased investment advice.
As Amanda Stent, Bloomberg's head of AI strategy & research in the office of the CTO, explained: "This counterintuitive finding has [13]far-reaching implications given how ubiquitously RAG is used in gen AI applications such as customer support agents and question-answering systems. The average internet user interacts with RAG-based systems daily. AI practitioners need to be thoughtful about how to use RAG responsibly."
That sounds good, but a "responsible AI user" is an oxymoron. For all the crap about how AI will encourage us to spend more time doing better work, the truth is AI users write fake papers with bullshit results. This ranges from your kid's high school report to [15]fake scientific research documents to the infamous Chicago Sun-Times best-of-summer feature, which included [16]forthcoming novels that don't exist.
What all this does is accelerate the day when AI becomes worthless. For example, when I asked ChatGPT, "What's the plot of Min Jin Lee's forthcoming novel 'Nightshade Market'?" (one of the fake novels), it confidently replied, "There is no publicly available information regarding the plot of Min Jin Lee's forthcoming novel, Nightshade Market. While the novel has been announced, details about its storyline have not been disclosed."
Once more, and with feeling, GIGO.
Some researchers argue that collapse can be [21]mitigated by mixing synthetic data with fresh human-generated content. What a cute idea. Where is that human-generated content going to come from?
Given a choice between good content that requires real work and study to produce and AI slop, I know what most people will do. It's not just some kid wanting a B on their book report of John Steinbeck's The Pearl; it's businesses eager, they claim, to gain operational efficiency, but really wanting to fire employees to increase profits.
Quality? Please. Get real.
We're going to invest more and more in AI, right up to the point that model collapse hits hard and AI answers are so bad even a brain-dead CEO can't ignore it.
How long will it take? I think it's already happening, but so far, I seem to be the only one calling it. Still, if we believe OpenAI's leader and cheerleader, Sam Altman, who tweeted in February 2024 that "[22]OpenAI now generates about 100 billion words per day," and we presume many of those words end up online, it won't take long. ®
[1] https://www.theregister.com/2025/05/14/openwebsearch_eu/
[2] https://www.theregister.com/2025/05/21/googles_ai_vision/
[3] https://www.theregister.com/2024/12/16/opinion_column_perplexity_vs_google/
[4] https://www.nature.com/articles/s41586-024-07566-y
[6] https://www.ibm.com/think/topics/model-collapse
[9] https://www.aquant.ai/blog/avoiding-ai-model-collapse-aquant-leading-way/
[10] https://arxiv.org/abs/2504.20086
[12] https://www.aquant.ai/blog/avoiding-ai-model-collapse-aquant-leading-way/
[13] https://www.bloomberg.com/company/stories/bloomberg-responsible-ai-research-mitigating-risky-rags-genai-in-finance/
[15] https://misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/
[16] https://www.npr.org/2025/05/20/nx-s1-5405022/fake-summer-reading-list-ai
[21] https://arxiv.org/abs/2404.01413
[22] https://x.com/sama/status/1756089361609981993?lang=en
Re: Human Nature
I can't really disagree with anything you said, but to be fair and play devil's advocate here...
> "As for search, I've been looking for an article I read on Litvenyenko for MONTHS now [..] Google is useless"
...is it possible that the article has been taken offline or made no longer publicly-accessible (e.g. shoved behind a paywall) at some stage and has been de-indexed by Google for that reason?
Or then again, it could well be Google just being shit as usual.
"an article I read on Litvenyenko"
Or indeed, Google now being so shit that it's not able to work around the OP's misspelling of "Litvinenko". (Assuming he is talking about the victim of Putin's polonium poisoning, anyway.) Which is still an enshittification, as that wouldn't have foiled them in the past.
Re: "an article I read on Litvenyenko"
Google *did* come up with a "did you mean" when I entered "Litvenyenko". Even so, I'd have assumed it would also have done the same when Cookiecutter did their original search, i.e. drawn their attention to the misspelling, so either it didn't do so in that case, or it did and there was another reason their search came up blank (including the possibility I already mentioned).
Re: "an article I read on Litvenyenko"
Given that Litvinyenko/ Litvenyenko is an Anglicized version of the original Литвиненко, I'm not sure there's any sort of argument to be had about spelling unless you are actually an expert in Russio-Cyrillic to English translation methodologies and semantics.
The Americans can't even spell English properly so what hope has a pseudo translation of a Russian name using a completely different alphabet got?
Re: "an article I read on Litvenyenko"
> "I'm not sure there's any sort of argument to be had about spelling unless you are actually an expert in Russio-Cyrillic to English translation"
I take your point, but I wasn't the one who was arguing whether or not it was the correct transliteration and/or spelling in the first place. And I'm sure even you would draw the line if it was rendered as "Throatwarbler Mangrove". :-)
Re: "an article I read on Litvenyenko"
+1 for the Monty Python reference
"Ignorance is strength"
A lot of stuff is deliberately taken down now because it would be "inconvenient" for people to find out that the author has completely reverse ferreted. For example, try searching for an article from a good few years back about how Netflix management said they'd never make swathes of people redundant for cost savings if the market went into recession; it's as if it was never there. It definitely was, and I read it, but I'm damned if I can find it.
Re: "Ignorance is strength"
1984 by George Orwell etc. Nice Big Brother avatar BTW.
Re: Human Nature
I am using the suggestion from an article on el Reg a couple of weeks ago and it makes a surprising difference when using Google search. Try this…
https://www.theregister.com/2025/05/14/openwebsearch_eu/
Google is still shite of course but this does improve it quite a lot
Re: Human Nature
Thanks for the heads up. I completely missed that article!
Re: Enshittification of Google Search
[1]The Man Who Killed Google Search
Spoiler alert:
It was Prabhakar Raghavan, who before coming to Google was the head of search for Yahoo from 2005 through 2012. It must be said he did it with the help of others.
[1] https://www.wheresyoured.at/the-men-who-killed-google/
Re: Human Nature
Every word truth.
Every consequence, coming.
Predictions
We all tend to hope for a big clear result - an undisputable sign. If things are going to go bad, can they please collapse in a big noisy bang rather than just... slowly... degrading into mush?
No-one really has a theory for a global information resource designed to route around censorship and allow anyone - just about anyone - to start publishing. Does it grow and get better? Does it become a civilizational resource? Does it fragment? Or does it degrade or collapse? We don't know if this virtual thing is like the Roman Empire, or the foundation of mathematics. Does it become unsustainable, or does it persist and evolve?
The observation is that our internet is currently an attention economy. Eyeballs are almost directly related to profit, and with that comes corporations, gatekeeping and manipulation. That's an environment absolutely ripe for the explosion of AI - but also one that will preserve and sustain it even if the results are demonstrably bad. There's no part of the attention economy that punishes "bad" attention.
So the current prediction is that we'll see a steady degradation of our public spaces and information sources. This may be exacerbated by the next generation of kids who are apathetic towards search and long form content. If Google's Veo3 can pump out a thirty second video 'explaining' how to tie their shoe-laces, they're fine with it (and indeed, most adults are). And the problem we have to face is not that we'll have to avoid AI in future, but that AI will be unavoidable - drowning out or corrupting verifiable facts and poisoning the entire public space. Not in a big bang with a single clear bad actor, but progressively as the whole food chain slowly absorbs the poison over time.
Whacked Out
"The model becomes poisoned with its own projection of reality."
This is the Next Big Thing in technology? Judging from what happens with humans when they start taking their fantasies seriously, I'm a bit skeptical. Do I really need a digital assistant that is stark raving mad and is trying to convert me to Scientology or some such?
Re: Whacked Out
"Judging from what happens with humans when they start taking their fantasies seriously..."
They become the President of the United States of America?
Re: Whacked Out
"Judging from what happens with humans when they start taking their fantasies seriously..."
They become the President of the United States of America?
That also immediately came to my mind reading:
"The model becomes poisoned with its own projection of reality."
"Model collapse" could be read not as just the collapse of a model but also as the paradigm of collapse.
Re: Whacked Out
If something is presented as fact then AI will likely use it as fact,
Documentation (Grimm et al.) dating back centuries describes how Gingerbread when made correctly can be used as an environmentally sustainable building material, the ideal bonding agent for this is a carbon neutral 50:50 wheetybix/lactose(plant derived long chain hydrocarbon) solution based cement which will produce higher shear force resistance than traditional concrete when sufficiently cured, this is the secret behind many medieval buildings longevity. Archaeological finds dating back several thousand years show the same wheetybix/lactose cement when mixed with shredded triticum will produce a non toxic lighter weight alternative to fibreglass with the added benefit of improved fire retardation properties.
I do hope this finds its way into some eejit's thesis :)
Re: Whacked Out
I'm prepared to contribute to that noble cause by replicating this amazing advice to other forums !
So, observations that are what, one? Two years out of date?
That's how long I've been asking myself a similar question and I've nowt to do with AI services.
"How long will it take? I think it's already happening, but so far, I seem to be the only one calling it."
Apparently AI avoids mentioning things that predict its own demise. The suggestion that you use AI to do search may be the reason why you have missed others saying the same stuff.
Enjoy your kool aid.
"Ordinary search has gone to the dogs."
Ordinary search began to go to the dogs when it stopped being ordinary search and began using AI. So it's not a matter of if increased use of AI in searching will make it worse; it's already worse, and the only question is how much more it can go downhill before it becomes completely useless for purpose.
Re: "Ordinary search has gone to the dogs."
For a non-AI search on Google+Firefox look for a plugin called UDM
Just sayin'
Re: "Ordinary search has gone to the dogs."
"..when it stopped being ordinary search and began using AI"
I hate to tell you this, but it started quite some time before then, when Google started the strategy of becoming an 'information source' rather than a gateway to other sources. Think Google Flights, Google Weather, Google Maps, Google Finance... it's a very long list that has been growing for years. Remember Knol?
Long before AI came along, Google Search attempted to answer the question for you - in a deliberate attempt to keep you on the search page up until it could hand you over to a paying website. They actively tried to kill Wikipedia. They bless a select few travel resources. Small and independent websites, blogs and resources are almost completely absent from search results, and have been for years.
And the result is very, very clear. Those small and independent resource sites are dying out. If you want information on a product that is no longer sold, just hope like hell that some incredibly stubborn holdout is still paying hosting fees for a site that was written years ago and can only be found tangentially from Wikipedia or an obscure Reddit post.
"forthcoming novels that don't exist."
I am certain many a publisher past has paid an advance to an author for exactly that commodity. ;)
"OpenAI now generates about 100 billion words per day,"
So by now it should have knocked out the Bard's entire corpus and a plausible Love's Labour's Won, and without a warehouse overflowing with monkey shit? No? Wrong on both counts?
What is it with scientists ?
Did they all take the "science without maths" courses that must be extant these days ?
You feed a system with a collection of "everything". If you have done any maths beyond GCSE, that will tell you that at the end of your input, your system will contain everything divided by the number of things. Which is an arithmetic average.
If the "thing" is human generated content, then your collection will be "average intelligence".
AHA! someone says. What about if you use a system we have trained on "the good stuff" to weight the content going into the system? Then its contents will be above average.
"So how do we get the good stuff?" asks someone who hasn't got the memo.
"Easy. We'll get people to decide what is good so the system has a head start".
Oh dear.
Re: What is it with scientists ?
Oh it's far worse.
El Reg has already noted that extensive, non-curated scraping of the entire web is taking place and, AND, AI is also being fed its own bollocks.
Global GIGO and, for funsies, recursive! You can easily surmise how this has to end.
Obvious
I learned in the 1960s to never try to make a copy from a tape recorder to another one. It very quickly devolves to noise. The same is true when making copies of some physical object - never make a copy of a copy. It will always be at least slightly wrong. So when I heard about what they were doing with AI I knew exactly what would happen.
Entropy
This shit will get shittier!
Re: Entropy
To the power of en-ification!
landmines of AI search
Perplexity is much better than your average search engine, but you do have to know where it goes off the rails, and it can be surprising.
For example, it cites its sources quite well with low (so far) hallucination, in very restricted domains (math, cs, bio), and yes, those sources I do check because I know the error rate is variable to put it mildly.
But then ask it to generate the bibtex entries for the publications it cites, and it will hallucinate 4/5 times.
When asked 'why do LLMs do poorly in retrieving bibtex entries', it will happily say it sucks at this because of the way current LLMs are designed, a dubious answer at the best of times.
'Why' questions are not a good use of LLMs; they're okay at 'what' or 'where' (and not great, but better than Google).
It's a cognitively dissonant experience, because here is a tool that cites its sources, then makes up the citation entries to the correctly cited and hyperlinked sources.
Especially annoying because bibtex should be a perfect use case for LLMs, because:
a) defined by a very simple context free grammar
b) supervised training, because you can use gscholar or any number of APIs to go from DOI to bibtex
c) if model certainty is low, fall back to API calls
As for RAG, just use the API and retrievals, don't feed it into the model. There's enough duct tape on the net as it is. If something works, delegate; if not, fix. Don't do both.
And yet, no luck, it just sucks. I'm glad it sucks at it, because it reveals exactly what it can do (rescue search from Google's downfall), and what it cannot (even though technically, it's an easy fix).
A comfortable feeling to know SkyNet isn't here yet.
So, in summary, what the above are saying is, A.I. is shit. I'm shocked. Shocked I tell you.
What does AI learn?
When will AI actually be relegated to what it is: a glorified search engine and data manipulation tool?
Take in data, manipulate, throw out data.
There is no intelligence, information awareness or analysis of either the accuracy or quality of data ingested. Quantity is king.
Classic GIGO with a positive feedback loop.
*Most* ordinary search has gone to the dogs
One that's still good, perhaps at the level Google was 10 or 15 years ago, is Kagi. It does use other indexes (Google, Bing, Brave, Yandex) but they're working on their own.
Pollutes everything else, and in the end it pollutes itself...
It's ironic here that LLM systems have to (but can't) be protected from consuming the same spew they're designed to output in the first place.
I'd have said that this was less a case of simple "garbage in, garbage out" (*) than what happens if a system inadvertently starts consuming the shit it's polluted the environment with in a circular manner. But even that misses one aspect- it's not being polluted by an unwanted by-product, but by what it was *meant* to produce.
In short, the situation, to make a horrible analogy, is something like a cross between those Human Centipede films (sorry) and a massive case of inbreeding.
(*) Although that would be the case elsewhere if they were trained on incorrect or badly-written human-originated articles and information.
Spark of creativity
The problem is that as more and more people write stuff helped by AI, AI training is still being fed AI slop even when something looks at first glance to be human-originated. The problem is that AI cannot inject a "spark" of creativity. It can only regurgitate from its training. Humans can create and plan. And we know when to look ahead, and when not to.
If we live in a world where there's no creativity, we'll just stay where we are. AI will make what we've got "sort of work" for a while, until it doesn't.
However, that same creative spark in humans will no doubt help create better AI that can train itself better.
No, I didn't get AI to help write this!
Ask any AI how to remove a specific app or feature
from windows / android and they will all roll off the usual
Go to apps and look here and choose uninstall
Go to settings, apps, Show all apps, and uninstall
Great for some apps, but for those buggers that are bloat from the supplier or AI shit, none of it applies, and when I check, I have specifically asked to completely remove / uninstall CoPilot/Gemini/other bloat.
Incest is bad
As Humans learned a few thousand years ago[1], reproducing on your own data is not a good idea.
[1] except for in some places
Photocopy of a photocopy of a photocopy of a photocopy of a photocopy..
Remember in the bad old days when you'd try to read a photocopy of a photocopy of a photocopy of a photocopy of a photocopy that someone had faxed you? That's where generative AI is going. Oh sure, there will be words you can actually read but no facts or analysis that you can actually trust.
Another analogy.. anyone remember AltaVista? AltaVista was great when it came out, you'd usually get the result you were looking for somewhere on the first page of results. In the pre-Google days, this thing was big traffic driver. But then people worked out how to game it through aggressive on-page SEO and then the output just became slop. It took Google to fix search that time around.
Tragedy of the commons
Classic tragedy of the commons. AI puts everything out of business that it needs to steal from. If only there was a way to compensate creators? Stealing hurts everything eventually.
GIGO sums up AI.
They trained it on obsolete material, websites, social media posts, fiction and other AIs. What do you expect?
quote: I seem to be the only one calling it.
You've not been reading the comments have you? We've been picking AI apart on here since it crawled out from under its rock as the Next Big Thing, after the Metaverse and NFTs failed.
With AI, GAFA joined forces to try to scam us, but it will all go TU eventually. Then they will move on to something else. Probably the Western version of China's unique log on code for everyone. Japan Post are rolling out a version as an address alternative. Makes it easier for spook agencies to monitor all of us.
Really? "I seem to be the only one calling it"
I see daily comments in some pretty respected blogs about the rickety framework that is peddled as "AI". See:
Ed Zitron: https://www.wheresyoured.at/author/edward/
Cory Doctorow: https://pluralistic.net/
"What all this does is accelerate the day when AI becomes worthless"
TBF, some of us have questioned the value of "AI" for a long time. Although it's not currently "worthless" in financial terms, it's been regarded as a South Sea Bubble, tulip frenzy etc. (pick your spectacularly imploding financial fad of choice) for quite a while, i.e. massively overvalued / hyped and due for a big readjustment.
I do have to say I quite enjoy some AI image generation, not because it's superb but because of the weird ways in which it gets things wrong (had some very strange prompt interpretations*) that give so many amusing WTF! moments
* probably the most bizarre was when asking it to produce something that should have been relatively fool proof - a small UK household vegetable garden image, had a bit of detail in the prompt (e.g. greenhouse) but it was generally quite a simple request
It managed some veg OK, e.g. lettuces growing in the ground, but tomatoes were not in the greenhouse** and not even on a plant of any sort, just bunches of tomatoes on bare soil. However, the pièce de résistance was the greenhouse: it produced one so immense it wouldn't have looked out of place at Kew Gardens.
** Tomatoes outside excusable (as plenty of warmer places where tomatoes fine to grow well outside, and you can grow them outside in UK (but results are quite poor!) so not "thinking" UK = likely tomatoes in greenhouse is no surprise as that's a leap beyond simple models) but not having the tomatoes on a plant of any sort (even with incorrect leaves / shape etc) was rather dismal (I would guess lots of "harvested" tomatoes in image training data, rather less of actual tomatoes still attached to plants).
Recent Google AI confusion
Converting a US recipe to UK (Cups to Grams) was more fraught with problems than I expected.
Search: "1/2 cup of planko breadcrumbs in grams"
AI: "5 cups of planko breadcrumbs is approximately 113 grams. A standard measurement for planko breadcrumbs is 50 grams per cup. Therefore 1.5 cups would be 75 grams"
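For the record, the arithmetic the AI fumbled is a one-liner. Taking the 50 grams per cup figure its own answer quoted at face value (panko density varies, so treat that constant as an assumption):

```python
GRAMS_PER_CUP_PANKO = 50  # approximate figure the AI's own answer quoted

def cups_to_grams(cups, grams_per_cup=GRAMS_PER_CUP_PANKO):
    """Convert a US cup measure to grams for a given ingredient density."""
    return cups * grams_per_cup

print(cups_to_grams(0.5))  # 25.0 grams for half a cup, not 113
```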
Human Nature
It's stunning to me how even after 30 years in the industry, getting ANYONE to even contemplate what pieces of shit humans are is impossible.
After extensively asking vendors at events and webinars "How does your all encompassing agent avoid hallucinations?" I literally get total silence or "Responsible AI, you have to make sure you have experienced people to go through the output and ensure it's right"....yeah because THAT'S how corporations work. Keep the expensive experienced people.
Ask someone "How do you get around the Computer Says No problem?"; if anyone ACTUALLY understands the question, which is depressingly doubtful, you get some weird circular bullshit about training your staff not to 100% trust what the AI says..... What? That thing that you're selling as 100% reliable and able to do the work of 1000s of human beings and will eventually take over the World? Someone on minimum wage making decisions on someone's healthcare or benefits is going to go "Yeah, this £multi-million thing is telling me this person with no legs can work in an Amazon Warehouse & I'm going to risk my boss's SLA by flagging up that I think it's bollocks"?
Utter madness. I genuinely can't wait until the whole house of cards comes crumbling down.
I'm just glad I lived in the 00s when the Internet was ACTUALLY useful for things, rather than the useless mess it is now. I could safely bury £million of gold somewhere, put the location on an internet website with "THE GOLD IS BURIED HERE" and know that no one will find it because the blog would be buried under SEO and AI generated nonsense.
I ran a Swift file through ChatGPT, Claude and even Codestral (I prefer to keep to the European techs) and none of them could tell me it wasn't working because I'd fat-fingered it.
As for search, I've been looking for an article I read on Litvenyenko for MONTHS now. I definitely read it, definitely remember some of the quotes on there, but want a link to it so I can save it. Google is useless, even if I put in ACTUAL quotes from the article that I do remember and go through that whole multi prompt bullshit, none of the chatbots can find it. Useless. 15 years ago, Google would have had it as the Number 1 result.