Large Language Models, Small Labor Market Effects (nber.org)
- Reference: 0178014749
- News link: https://slashdot.org/story/25/06/12/0117247/large-language-models-small-labor-market-effects
- Source link: https://www.nber.org/papers/w33777
> We examine the labor market effects of AI chatbots using two large-scale adoption surveys (late 2023 and 2024) covering 11 exposed occupations (25,000 workers, 7,000 workplaces), linked to matched employer-employee data in Denmark.
>
> AI chatbots are now widespread -- most employers encourage their use, many deploy in-house models, and training initiatives are common. These firm-led investments boost adoption, narrow demographic gaps in take-up, enhance workplace utility, and create new job tasks. Yet, despite substantial investments, economic impacts remain minimal. Using difference-in-differences and employer policies as quasi-experimental variation, we estimate precise zeros: AI chatbots have had no significant impact on earnings or recorded hours in any occupation, with confidence intervals ruling out effects larger than 1%. Modest productivity gains (average time savings of 3%), combined with weak wage pass-through, help explain these limited labor market effects. Our findings challenge narratives of imminent labor market transformation due to Generative AI.
[1] https://www.nber.org/papers/w33777
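For readers unfamiliar with the paper's identification strategy, here is a minimal, hypothetical sketch of a difference-in-differences estimate of chatbot adoption on log earnings. The synthetic data, variable names, and two-period setup are my own illustration, not the authors' code or data; the actual study uses Danish matched employer-employee registers and employer policies as quasi-experimental variation.

```python
# Illustrative only: a toy 2x2 difference-in-differences on synthetic data,
# mimicking the paper's "precise zero" finding. Not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_workers = 2000
df = pd.DataFrame({
    "worker_id": np.repeat(np.arange(n_workers), 2),
    "workplace_id": np.repeat(rng.integers(0, 200, n_workers), 2),
    "post": np.tile([0, 1], n_workers),                      # before/after adoption wave
    "treated": np.repeat(rng.integers(0, 2, n_workers), 2),  # employer encourages chatbots
})
df["log_earnings"] = (
    10.0
    + 0.03 * df["post"]                        # common wage growth
    + 0.05 * df["treated"]                     # level difference between groups
    + 0.001 * df["treated"] * df["post"]       # essentially zero treatment effect
    + rng.normal(0, 0.10, len(df))
)

# Classic DiD regression; the coefficient on treated:post is the effect of interest.
fit = smf.ols("log_earnings ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["workplace_id"]}
)
print(fit.params["treated:post"])              # ~0
print(fit.conf_int().loc["treated:post"])      # narrow CI around zero
```

With a true effect this small relative to the noise, the interaction coefficient comes out indistinguishable from zero with a tight confidence interval, which is the shape of the paper's headline result.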
News Flash (Score:3)
AI is over-hyped and under-performing.
From a software development perspective, I've found it's good at troubleshooting problems, much like Google and StackOverflow, but completely shit at writing code that works.
Re: (Score:3, Insightful)
One thing that AI is fantastic at is parting investors from their cash.
Re: (Score:2)
My experience echoes yours. GitHub Copilot often writes reasonable-looking code, but then it does stupid stuff like repeating code I already wrote, in a way that won't even compile. I find the tool useful mostly for languages that aren't my strongest, and for arcane syntax such as manipulating JSON or XML inside SQL fields, or PowerShell and Terraform scripts.
The study completely misses the point (Score:2)
What AI has done is encourage every CEO and CTO to look top to bottom through their entire organization for opportunities to automate. There are tons and tons of things that were being done by manual processes because the head honchos didn't think they could be reliably automated.
Remember, CEOs and CTOs aren't smart; they are lucky and well connected. Nepo babies, mostly.
So there is tons of stuff out there that's being called AI but is really just run-of-the-mill automation. Remember that stupid shirt a
It's coming for your job... (Score:1)
Editor/Copywriter. "Good enough" is perfect when it'll only get a fraction of a second of brain time. LLMs are already laying waste to marketing departments around the world. It'll write "all" the product copy: just copy-paste the user manual with some additional info, like pricing, sales promotions, etc.
Anyone still saying the gains are negligible has their head firmly planted in the sand, imo. Jobs far below your pay grade aren't about to be decimated (1/10); no, more like hanged and quartered (4/10). Anything inv
Drawbacks of the study (Score:2)
This study uses survey results from one to two years ago. First, survey results are simply data that reflect personal perceptions. Second, the employment impacts of LLMs are arguably just emerging, so the older surveys are not that insightful. Third, the key result is a 3% average productivity gain, but it would be much more useful to see the top 10% of productivity gains versus the bottom 10%, and why there is a difference. The average number is not as useful without an understanding of the distribution.
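To illustrate the commenter's point with made-up numbers (purely hypothetical, not drawn from the paper): a 3% average time saving can coexist with very different experiences at the tails of the distribution.

```python
# Hypothetical example: the same 3% mean time saving can hide a wide spread.
import numpy as np

rng = np.random.default_rng(1)
# Made-up, right-skewed distribution of individual time savings
savings = rng.lognormal(mean=np.log(0.02), sigma=1.0, size=10_000)
savings = savings * (0.03 / savings.mean())   # rescale so the average is exactly 3%

print(f"mean:        {savings.mean():.1%}")
print(f"bottom 10%:  {np.quantile(savings, 0.10):.1%}")   # workers who barely benefit
print(f"top 10%:     {np.quantile(savings, 0.90):.1%}")   # workers with large gains
```

Whether the real distribution looks anything like this is exactly what the average alone doesn't tell us.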
Re: (Score:2)
> Second, the employment impacts of LLMs are arguably just emerging, so the old surveys are not so insightful
Also, the impacts may be quite volatile. LLMs might result in huge job displacement, right up until the discovery of some flaw that can't be corrected for renders them unsuitable.
Then there's the problem of models feeding off of each other's hallucinations. If that's as bad as some have suggested, retraining the models to keep them 'pure' could cost a lot. That would make their adoption less universal than AI's cheerleaders expect.