Struggling to put your AI aversion into words? Here's a handy glossary
- Reference: 1773905410
- News link: https://www.theregister.co.uk/2026/03/19/ai_skeptic_labels/
LLM bots are everywhere, from your phone to your browser, and increasingly so is their output, which is widely termed "slop" – a term [1]documented two years ago by AI writer and advocate Simon Willison. The Reg FOSS desk was horrified to receive, among his Christmas gifts, a set of drinks coasters printed to vaguely resemble vinyl LPs with illegible slop labels. As well as endless torrents of slop, there are also endless waves of bot promoters.
Some of us remain unconvinced. We thought that if you too are a skeptic who has not been persuaded that there's any form of intelligence in these text-prediction machines, there might be reassurance in knowing that you're not alone – and that there are some useful terms for how staunch your resistance is.
Sean Boots recently offered a new term: " [3]Generative AI vegetarianism ." He describes a moderate position:
I want to write my own emails. I want to write my own (mediocre) software code. I want to learn and think and ponder with other humans, not with a text-prediction system built by consuming all the text on the internet.
Even so, he continues:
If you're stoked about generative AI tools, that's cool... In my day job, I'm keen on helping people use AI tools to make government data more accessible. I'm not here to cut you down; I'm not a generative AI vegan, after all. (Sorry, vegans!)
His list of reasons is sound, and there is much to recommend in a non-confrontational approach. However, it is not this vulture's. Boots also links to several other, more strident statements, and we enjoyed all of them. Jenny Zhang wrote in " [4]choosing friction ":
The promise of AI is that it removes friction... In their ideal world, you don't have to think about anything because an AI will do your thinking for you, and so you can fire everyone whose job it was to think.
I quite like thinking, and I think humans should do more of it, and I think the less we do it the more our thinking muscles will atrophy. This seems bad for everyone except for authoritarians who wish we were easier to control. I also happen to think AI is quite bad at thinking and that what LLMs do is not thinking at all, but I could be wrong!
It's just over 2,000 words long, and we can't really fault one of them. We also liked Rusty Foster's recent " [5]A.I. Isn't People ," which carefully and methodically deconstructs the claims that large language models can "think" or "reason" or "learn" or are "intelligent" for any possible definition of the term.
The one we sympathized with most is from Anthony Moser. His statement is plain – " [6]I Am An AI Hater ":
This is considered rude, but I do not care, because I am a hater.
To speak politely about AI, you put disclaimers before criticism: of course I'm not against it entirely; perhaps in a few years when; maybe for other purposes, but. You are supposed to debate how and when it should be used. You are supposed to take for granted that it must be useful somewhere, to someone, for something, eventually. People who are rich and smart and respected are saying so, and it would be arrogant to disagree with such people.
But I am a hater, which is a kind of integrity. It means I am willing to disagree with anyone, even if it is rude. "But I only use it to–" "Actually if you just—" "The new models–" "I was making fun–" Stop. You're embarrassing yourself. I am embarrassed for you.
There is some mileage in the concept of being an [7]AI vegan , but the thing that many vegans will find is that if they travel far enough, they'll find themselves in a country where they don't speak the language and can't explain their restrictions.
The Reg FOSS desk is not vegan, but has been strictly vegetarian for about 45 years now. Some of the vegans that we know personally become vegetarians when traveling far from home, because it's more practical. One way that you can sometimes get through to people whose culture doesn't have the notion of being vegetarian is to explain that it is a religious prohibition.
Here, we feel that Cate Sawers has [12]nailed it . We suggest memorizing it, and as a fallback, print it out, laminate it, and keep it in your wallet:
I have a religious exemption from using all generative "A.I." I am not a member of the Silicon Valley sect. Their beliefs and practices are an affront to my sensibilities. The tech is trained on stolen intellectual property. The output is riddled with mistakes, and it is incapable of comprehending the weight of its errors. It is not even an "it." But sometimes, it is filtered and massaged by unaccountable human sweatshop workers and bad actors. I am not required to use "A.I." any more than I am required to join Amway, buy black market rhino horn, or attend the Fyre Festival. As a human, I have a duty and right to limit my carbon and water footprint and protect my fellow human. As a union worker, I have a duty and right to oppose tech that is used to threaten workers or cheapen our product. As a tech consumer, I have a duty and right to oppose scams that lower the quality of our tools. As a scholar, I have a duty and right to oppose anti-intellectualism. As a taxpayer, I have a duty and right to oppose the misallocation of public funds and data. As a grown-up, I have a duty and right to protect young people from predators. As a person with a conscience, I am appalled at the decadent disregard for user safety, the callous dismissal of responsibility for lives ruined and slaughtered. My religion is related to my identity, geography, and family, making it an irrelevant accident of birth. What matters is that I was born. I have an exemption from their digital rapture, their cultural austerity, their intellectual poverty. I practice wholesome hedonism and First Do No Harm.
Catherine Sawers 12/9/2025
Bravo! ®
[1] https://simonwillison.net/2024/May/8/slop/
[3] https://sboots.ca/2026/03/11/generative-ai-vegetarianism/
[4] https://phirephoenix.com/blog/2025-10-11/friction
[5] https://www.todayintabs.com/p/a-i-isn-t-people
[6] https://anthonymoser.github.io/writing/ai/haterdom/2025/08/26/i-am-an-ai-hater.html
[7] https://theconversation.com/ai-veganism-some-peoples-issues-with-ai-parallel-vegans-concerns-about-diet-260277
[12] https://bsky.app/profile/catebridget.bsky.social/post/3mcxosc7c322t
Re: Kudos to Mrs Sawers!
It was too long to read, so I got an AI to summarise it.
"AI good. Let it do all your thinking"
Just Stop Slop
I do wonder what someone blowing the whistle on the [1]1720 South Sea Bubble would have said, before they were hanged for sedition...
[1] https://www.historic-uk.com/HistoryUK/HistoryofEngland/South-Sea-Bubble/
Are you an AI hater, an AI vegan, or a slightly more moderate AI vegetarian?
I'm not really an AI hater. Hate implies mindless judgement. I'm sure it has some uses but I am yet to be convinced that the benefits outweigh the downsides. I am however a vegan, just not an AI vegan. Now there's a choice that attracts mindless judgement.
"mindless judgement"
"Hate implies mindless judgement" – sometimes, maybe even often, but not necessarily always. Sometimes, mindfulness might judge a specific manifestation of hate to be rightful. Like, for example, when it is directed at mindless hate which itself targets things, or, worse, people that absolutely do *not* deserve it.
Re: "mindless judgement"
"Hate implies mindless judgement"
Yoda was quite correct when he made his famous statement in The Phantom Menace. (It's also the only line in the entire movie that slightly redeems it, but that's beside the point.)
Re: Are you an AI hater, an AI vegan, or a slightly more moderate AI vegetarian?
How do I hate thee? Let me count the ways.
I hate thee to the depth and breadth and height
My soul can reach, when feeling out of sight
For the ends of being and ideal grace.
I hate thee to the level of every day’s
Most quiet need, by sun and candle-light.
I hate thee freely, as men strive for right.
I hate thee purely, as they turn from praise.
I hate thee with the passion put to use
In my old griefs, and with my childhood’s faith.
I hate thee with a hate I seemed to lose
With my lost saints. I hate thee with the breath,
Smiles, tears, of all my life; and, if God choose,
I shall but hate thee better after death.
Re: Are you an AI hater, an AI vegan, or a slightly more moderate AI vegetarian?
"Hate implies mindless judgement."
No it doesn't. I am very mindful of my judgements and the hatred that results.
One of the problems with the mainstream use of AI is the I part. Management and people with lower intellectual capacity seem to seize on this as a form of levelling up. There is no intelligence, it is just a smart search engine. Where is the rigour? The challenge? The critical thinking? The input based on experience and actual knowledge? No, AI does not make you smarter or even seem smarter - it just provides a cover which anyone with actual experience and skill can see right through.
Just because you are wearing an aviator's uniform and have the Raybans, does not mean you can actually fly an F16....
What's more concerning is that said fake aviators are often reporting into fake squadron leaders.....
What I see daily is eye watering... a manager very proud as he had produced a guidance document to be shipped out to thousands of users based on AI. I reviewed it and rewrote it based on my experience of what people actually want and in a style that they would understand. He went: "Oh...." and it then got dropped. Another who spent a long time feeding in prompts to create a massively overly-complex delivery plan which was totally impractical and undeliverable. A third who ran a training session on AI and asked it to summarise figures from a spreadsheet which it got completely wrong.
A colleague sent me an email to proof before sending to a customer. I was taken aback by references to data I didn't know we had, and frankly it was exciting to think we'd get access to it... Upon query, my colleague responded "haha oops, I got my email AI to draft that". I rewrote the email citing actual business facts; the data (as I'd suspected all along) didn't exist.
I'm not sure about the terms AI vegetarian or AI vegan - I would position more as AI intolerance - like food intolerances, many can stomach a small amount and beyond that threshold it creates undesirable suffering.....
Ai can crawl back to the primordial bit-slime that it came from
Not a fan.
Not even close.
Shame
This article comes across as a little tongue-in-cheek, drumming up tribalism, and that's unfortunate if this publication wants to have a mature discussion on the matter, but so be it.
However, in my view it doesn't serve anyone to make it harder to talk meaningfully about the intended and unintended consequences or benefits of this technology if we pre-emptively dismiss the notion that it can have merit in some form. For example, a hypothetical LLM trained on a non-copyrighted corpus, running on renewable energy, verified free from bias, and developed by a community steering group could address many of the criticisms levelled at the current implementations. So, here's my attempt.
Re: Shame
....Where the air is made of unicorn farts and fairies provide IT support?....
Re: Shame
....Where the air is made of unicorn farts and fairies provide IT support?....
Camp Mews ?
Wherever that might be, we are likely to soon discover how dysfunctional this AI Elysium will prove to be.
Re: pre-emptively dismiss
I do not pre-emptively dismiss LLMs. They have done a thorough job of earning my scorn.
I interact with the real world. My delusions are limited by reality whacking me with a clue bat when I go too far off the rails. People may have knowledge of their own subject but are usually ignorant and wrong outside it. Ignorance outnumbers knowledge, and posts far more on the internet too. This is the best of what LLMs are trained on. A huge amount of effort goes into spreading propaganda and scams that LLMs absorb along with the occasional nugget of truth. LLMs are owned by billionaires. When was the last time one of them got even a mild scolding from the SEC for telling ridiculous lies?
Ask your favourite LLM to talk about something you thoroughly understand already. It may get a high proportion right, but it will make popular mistakes. That is a fundamental part of how they work. It really shows in legal briefs, where opposition lawyers do a proper job of checking court filings. Imagine the efficiency of having AI-generated court filings checked by AI, then handed to an AI jury that gives a verdict to an AI judge that sentences you to 10 centuries in prison for the murder of Peter Rabbit.
There are limits to how quickly we can manufacture renewable energy equipment. There is a limit to the useful sites where it can be deployed. An island can get plenty of wind but needs an undersea cable to transport energy to where it is needed. Deserts get lots of sunshine, but the energy would have to travel hundreds of miles before it reaches a large collection of humans. LLM vendors want investment to build data centres that use more power than is available on the planet. They take it and we cannot have it. They want more chips than the combined output of all the chip foundries in the world. They take it and we cannot have it.
Re: Shame
Thank you.
I have two conflicting views about AI. As a human I can do that because, let's face it, I'm not binary but reassuringly (and sometimes wilfully annoyingly) analogue.
First the plus: I had to do something weird with Postfix (pipe a mailbox into a sanitiser and a script). The last time I did that was 20 years ago. I could spend a week digging through HOWTOs and testing, or I could get Claude to mine the available data (and here I 100% agree with the idea that it's mainly an advanced search engine) and give me a rundown with suggestions. Of course, I'm still checking everything, but it saved me a lot of time, so that I deem positive, and - and this is an important caveat - it can do so from publicly shared data. No IP theft required, and the analysis and suggestions were sane. I'm not sure if Google deliberately made its search engine rubbish so it could promote its own AI, but I wouldn't be surprised.
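For the curious, that kind of pipe is usually done with Postfix's pipe(8) delivery agent. A minimal sketch, with entirely hypothetical paths and service names rather than the commenter's actual setup:

```
# master.cf (hypothetical names): a pipe(8) transport that hands each
# delivered message to an external sanitiser script on stdin
sanitise  unix  -  n  n  -  -  pipe
  flags=Rq user=filter argv=/usr/local/bin/sanitise.sh ${sender} ${recipient}
```

The script reads the message on standard input, does its filtering, and can re-inject the result via sendmail(1); which mail gets routed through the transport is then decided by transport_maps entries in main.cf.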
The minus is what scares me, because I have a sigint background. The only way you can properly screen or analyse a conversation is by converting it into text so you can then easily search on keywords. Echelon has done this for literally decades with mobile phone conversations, but had less access to landlines. Now everybody I know uses Teams, and I have seen enough of its meeting reports to know that its seeming inability to get things right is mere camouflage: get the Premium package and you'll see it done properly, so it IS possible, and this is literally EVERYWHERE. Every Microsoft user uses this, and if there was ever a rich source of global intercept capabilities, this is it, and AI makes it a lot more efficient. Add to this that it is indeed a very capable search engine (sorely needed, because everything inside the Microsoft products is at best of questionable quality, with special low marks for Outlook), that everyone and their dog 'protects' their valuable data with Data Leak Protection (MS Purview et al), the trend to store everything in the Cloud, and US law, and I think intercept perfection has been achieved. That is not good news for your personal privacy or the protection of interesting Intellectual Property, also because some Orange Baboon really wants to know who isn't falling for the scams.
Is that actually deployed? I don't know, but hoping it is not is IMHO not really enough, because this makes global mass intercept analysis frighteningly easy. There's no control mechanism that I can identify, and it happens at a volume where information pollution techniques don't really work either.
Last but not least, AI is not intelligent. It's at best a 4-year-old with an already complete vocabulary: more Artificial Ignoring of any governance and law than intelligence. Applied statistics coupled with a massive power bill is nowhere near equivalent to organic thinking, so it worries me that there are already examples of autonomous weapons being developed.
I enjoyed the Terminator movies, but I don't want them to become documentaries...
ChatNPC
I assumed that was "Chat No Phucking Clue" but apparently it is actually another dismal AI application.
AI·phobia might be a thing, but that implies fear; for the most seriously AI-averse, profound enduring contempt for the technology and the grifters peddling it would be more accurate.
Generative Pre·trained Transformer - bollocks - Grifting Phucking Twat (you can match at least one face to that † .)
† the AI grifter who always looks like a sheep botherer caught in flagrante would be favourite - not that this narrows it down much.
Re: ChatNPC
> I assumed that was "Chat No Phucking Clue" but apparently it is actually another dismal AI application.
"NPC" means "non-player character".
In video games, these are the little people or whatever controlled by the computer, which wander around, may help or may get in the way, or may attack.
https://en.wikipedia.org/wiki/Non-player_character
Anthony Moser
Deserves one of these --->
But I am a hater, which is a kind of integrity. It means I am willing to disagree with anyone, even if it is rude. "But I only use it to–" "Actually if you just—" "The new models–" "I was making fun–" Stop. You're embarrassing yourself. I am embarrassed for you.
Exactly my view on all the other religions, too.
I am an AI dabbler
I use it personally and for work, but I am very aware of its inadequacies. It's good for turning over ideas and getting a perspective, but as indicated in the article, outsourcing thinking to an AI is a terrible idea.
It makes no difference to me if people use it or not, but I derive genuine value from it.
Arthur IDent> Well I quite liked it actually.
Not keen
Have to use "AI" at work (& get audited that we use it enough!)
A prompt needs to be very well crafted to get the results you are hoping for.
You really need to examine the output carefully, as often the errors are subtle and easily overlooked (that's why "AI" is so bad when used by people without expertise in the area they are prompting about).
The one thing I find it (the one I use, anyway) consistently OK at is SQL.
When prompted to produce queries it often gets them right, and when it's incorrect the SQL produced typically only needs a few minor tweaks to be acceptable. It's also good at examining actual query plans alongside the corresponding SQL and suggesting areas to be optimized.
So, I get my "AI" quota tickbox filling dealt with by firing those type of "tedious chore" SQL tasks at the "AI".
So, my take is that "AI" can be useful in certain scenarios, but the real downside is it often gets things subtly wrong, and it is then heavily reliant on the person looking at the output to examine it closely enough to spot the problem (e.g., especially in "front end" code, things that may "work" but hide subtle timing issues, locking/synchronisation issues, or performance problems that only show up under stress testing).
Have to treat "AI" (in code gen anyway) as a keen but very junior dev.
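The habit of checking generated SQL against the actual query plan can be sketched in a few lines. This is purely an illustration (SQLite via Python's sqlite3, with made-up table and index names, not the commenter's database), showing the kind of inspection that catches a query which "works" but quietly ignores an index:

```python
import sqlite3

# Toy schema: one indexed column, standing in for whatever the
# generated query is supposed to hit.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("CREATE INDEX idx_customer ON orders (customer)")

# A query as an AI assistant might produce it; before trusting it,
# ask the engine how it will actually be executed.
query = "SELECT total FROM orders WHERE customer = ?"
plan = conn.execute("EXPLAIN QUERY PLAN " + query, ("alice",)).fetchall()

# The last column of each plan row describes a step: a SEARCH using
# the index is what we want; a full table SCAN is the subtle failure.
for row in plan:
    print(row[-1])
```

Running the same check on a query that wraps the indexed column in a function (e.g. `WHERE lower(customer) = ?`) would show a SCAN instead, which is exactly the kind of subtle regression that survives a quick eyeball review.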
Re: Not keen
>>get audited that we use it enough<<
You need to get a new job, mate.
An interesting social phenomenon
In the early days of photography, only "real" photographers had the (rare and arcane) skill and the (bulky and expensive) equipment to take photos. To have one's portrait taken photographically was something special, an occasion. People dressed in their Sunday's best and assumed serious expressions on portrait photos in those days.
Then along came the Kodak Brownie camera, with the slogan "You press the button and we do the rest". Suddenly everyone with at least one working eye and one working finger could take portraits. Which they did, and inevitably the vast majority of the photos taken were terrible, whereas every portrait so painstakingly produced by a traditional, skilled photographer was immaculate.
Clearly consumer cameras were rubbish that only produced bad photos.
Except that they weren't.
AI is, in my view, much the same: it is a potentially useful tool, but not used skillfully. I myself have gotten great results from LLM-based chatbots like ChatGPT, from illustrations for technical writings (for which I previously had to hire someone with drafting skills) to insights that finally led to a life-long medical condition being diagnosed (by a properly qualified medical practitioner, needless to say). Whereas I previously spent countless hours on Google and various websites looking for bits of information, now an LLM AI chatbot usually gets me what I need in one or two queries. I do, however, verify the results. I have even used it in a pinch as a coding assistant. It saved time and so far my brain has not atrophied. But then, I'm also a hobby woodworker. I use whatever tool does the best job. I don't try to use my chisel as a hammer, screwdriver and crowbar all at the same time and expect that to work.
And that's it: AI is not the problem. The problem is peoples' ridiculous expectations from it. The problem is the hype. AI is promoted as the next best thing since sliced bread. It is not. It will, however, be able to help you work out a good recipe to bake a better bread, provided you use it with its strengths and limitations in mind. AI is a tool, nothing more. It has to be wielded with a degree of skill, or expect no decent results. The skill is not in the tool but in the wielder.
And yet many people hate AI with a passion. Not just because manglement forces it upon them like any other nebulous hype, as we have seen so often, or because it has very serious drawbacks and limitations, or because it threatens job security. Things like outsourcing and cloud computing have been misimplemented with terrible results, but never elicited the emotional reactions that AI brings.
AI affects people for some reason. There is a discomfort here that points at a deeper issue than being fed up with bloated hype, a market bubble and managerial stupidity. It's not just that AI threatens what we do. It raises questions of who we are. It makes us face, for the first time in history, something that is hard to tell from the real thing. AI can produce results of a quality and nature that previously were only available from humans, but without the humanity. AIs are undoubtedly machines, without real emotions (but good at faking them), without real morals (the guardrails have to be built around them) and ready to be misused by whatever bad actor knows how to get them to do the wrong things.
The emotional reaction against AI is, in my opinion, so intense because the threat presented by AI does not revolve around the results it produces. It revolves around existentialism. If people can be imitated so easily by machines (not yet quite convincingly, but closer all the time), then what are we? Who are we? Are we really so special? Are the faces we present to the world and when we interact with others more than just a mechanism? And if so, then can we be manipulated as easily as an AI can be? And what is the nature of our consciousness? Are we more than just very advanced and complex biological mechanisms?
Don't get me wrong: I'm not saying that the vast majority of us now suddenly turn to these deeply existential questions. I'm merely saying that the intensely emotional hate for AI (which goes far beyond any other over-hyped nonsense we've seen so far) has a lot to do with these questions. Intellectually we can use AI judiciously, as a tool, only where it serves a purpose. AI produces a lot of slop, yes. But so do most of the people I've ever worked with. In fact, I've worked with people whom I'd happily have replaced with ChatGPT; it would have been an improvement.
And perhaps that's at the heart of it: at present, AI sits smack in the middle of the [1]uncanny valley and we respond emotionally, not intellectually. The posts in these forums are a good example. (As, I'm sure, the number of downvotes this one will elicit!)
[1] https://en.wikipedia.org/wiki/Uncanny_valley
Another useful term
It's not common, but a few companies who are less drooling and craven than their contemporaries have at least added a hidden checkbox down on page 26 of their "Settings" menu that brave AI haters can use to flag that we don't want our work used to train AI.
I propose this be called a slop tout opt-out.
Compared with what?
> if you too are a skeptic who has not been persuaded that there's any form of intelligence in these text-prediction machines
I have a foot in both camps.
I can see that in some areas an AI vastly outperforms an average¹ human performing the same task. However, when compared to the acme of human achievement, there is still a way to go. But the real problem is that the average (or even highly educated) person cannot differentiate AI slop from world-class content.
¹ The "average" human does not create art, cannot identify a tumor, tell you what nine times eight is, nor answer any of the questions they regularly put into ChatGPT. And if you want to see what the average human is capable of, check out social media.
Kudos to Mrs Sawers!
I am going to copy that text, print it and have it embossed, then put it on my desk at the office.
As for " from your phone to your browser ", I have only one thing to say : not if I can help (meaning stop) it.