AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet (404media.co)
- Reference: 0181026124
- News link: https://tech.slashdot.org/story/26/03/17/2243219/ai-job-loss-research-ignores-how-ai-is-utterly-destroying-the-internet
- Source link: https://www.404media.co/ai-job-loss-research-ignores-how-ai-is-utterly-destroying-the-internet/
> Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.
>
> Anthropic's paper, called " [1]Labor market impacts of AI: A new measure and early evidence ," essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict whether a job's tasks "are theoretically possible with AI," which resulted in a chart that has gone somewhat viral and was [2]included in a newsletter by MSNOW's Phillip Bump and [3]threaded about by tech journalist Christopher Mims. ( [4]Because everything is terrible , the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the "theoretical capability" of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: the many, many studies that attempt to predict which people will lose their jobs to AI are all flawed, because their inputs must, to some degree, be guessed.
>
> But I believe most of these studies are flawed in a deeper way: [5]They do not take into account how people are actually using AI , though Anthropic claims that that is exactly what it is doing. "We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily," the researchers write. This is based in part on the "Anthropic Economic Index," which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include "Complete humanities and social science academic assignments across multiple disciplines," "Draft and revise professional workplace correspondence and business communications," and "Build, debug, and customize web applications and websites." Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet and causing cascading societal and economic harms.
"Anthropic's research continues a time-honored tradition by AI companies who want to highlight the 'good' uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for," argues Koebler. "Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been [6]overtaken by AI slop . Chatbots themselves have killed traffic to lots of websites that were once able to [7]rely on ad revenue to employ people, so on and so forth..."
"This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media," writes Koebler, in closing. "We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What's happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice."
[1] https://cdn.sanity.io/files/4zrzovbb/website/3f7fd9d552e66269bdb108e207c5d80531d04b8b.pdf?ref=howtoreadthisch.art
[2] https://www.howtoreadthisch.art/the-answers-to-five-questions/?ref=404media.co
[3] https://bsky.app/profile/mims.bsky.social/post/3mh4cs4yi4s2f?ref=404media.co
[4] https://www.actionnetwork.com/lifestyle/will-ai-replace-your-job?ref=404media.co
[5] https://www.404media.co/ai-job-loss-research-ignores-how-ai-is-utterly-destroying-the-internet/
[6] https://tech.slashdot.org/story/26/03/13/1953248/digg-relaunch-fails
[7] https://tech.slashdot.org/story/25/07/22/1629240/google-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results-pew-research-finds
Cry more (Score:2)
Maybe that'll help
AI is not very intelligent and not improving. (Score:4, Insightful)
Parrots sound like they are speaking, but they are merely repeating.
AI has a single reasoning methodology: prediction based on existing data.
AI is not gaining more methods; it is just ingesting more data. This gives 'better' results, but that is evolutionary, not revolutionary: minor improvements at great speed, not major ones.
AI is not even as intelligent as the Parrot, it is just better educated.
The various stories of evil (AI blackmailing people, AI blogging about how people are prejudiced against it for not letting it post, AI being racist) all demonstrate low-level thought - not that of dogs, rats, or mice, but the kind of thing an insect could do.
We think it is smart only because it has learned how to predict words that we recognize as sentences. Ignoring that ability, it is just as stupid as it was when LLMs were first invented.
You can get better results from AI simply by telling it not to guess and to show only results it can back up. That is not something a person has to be told; it is something we do automatically. Even a well-trained dog does it (e.g., well-trained drug-detection dogs know not to false-alert).
AI is like a guy I knew in college who got in because of his parents' money: a well-educated moron.
Scams are a bigger problem (Score:2)
Scams have become way more convincing, which will lead to larger losses from theft. No longer can you identify scammers by broken English or other obvious markers.
I had one recently that seemed legit until I went off script and he started dropping "sir" more than in a normal conversation. Another hacked a friend's account and had a convincing post about how his uncle had died and he was selling cars and various items that he'd hold for a deposit.
It will be much easier to scam grandpa when
Push back? (Score:2)
Is there any push back on this?
I've actually labelled a few of the sites I run to indicate that no "AI" was used and that all the content is human-made.
I would guess there's gotta be a bit of a movement happening behind this, anyone seen anything?
The internet was destroyed a bit before that (Score:3)
And the biggest reason for its massive enshittification is exactly the "shoshul media" sites that TFS laments: places people go for shock content, or to watch fakery and commentary that aligns with what they already like. So it isn't such a big loss, after all.
Perhaps eventually a dark net for humans, with in-person verification, will appear and leave the other internet to the chatbots.
Someone go make a movie about it.