Life With AI Causing Human Brain 'Fry' (france24.com)
- Reference: 0181186246
- News link: https://it.slashdot.org/story/26/03/30/1824254/life-with-ai-causing-human-brain-fry
- Source link: https://www.france24.com/en/live-news/20260330-life-with-ai-causing-human-brain-fry
> Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments of hard-core AI adopters. Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon [2] "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits."
>
> The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves. "It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models." [...] "There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said.
>
> [Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day."
>
> BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, "That self-care piece is not really an American workplace value," Wigler said. "So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term."
Notably, the report says everyone interviewed for the article "expressed overall positive views of AI despite the downsides." In fact, a recent BCG study actually found a decline [3] in burnout rates when AI took over repetitive work tasks.
[2] https://www.france24.com/en/live-news/20260330-life-with-ai-causing-human-brain-fry
[3] https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry
I believe it (Score:3)
It is cognitively expensive to do this kind of task. Essentially you are expected to assume an alien thought process with no context or explanation. I don't think it's a unique or novel form of cognitive load, though. I think it's very similar to what a therapist or psychoanalyst does... you need a sort of machine empathy to try to arrive at the same solution and judge whether it's appropriate. These people were trained as programmers, but they're actually computer therapists now.
Clearly the solution is more AI.
25,000 lines of code (Score:3)
It might take one person one year to write 25k lines. How does a person get their head around that in 15 hours? One little "why on earth is this here" question can generate an hour or more of research: consulting product managers, asking developers, reading thousands of pages of documentation... If it is fintech quant code, good luck finding a quick explanation.
Re:25,000 lines of code (Score:4, Informative)
The article does mention that it was "fine-tuning" the code base. But it does remind me of a colleague who once bristled at Lines-of-Code metrics being used to measure developer output on an established codebase, since removing lines is often the mark of the superior software engineer.
Re: (Score:2)
As far as I've seen, the AI fanatic's answer is "don't care about the code".
They ask for something and whatever they get, they get. The bugs, the glitchiness, the "not what they were expecting" are just accepted as attempts to amend purely through prompting tend to just trade one set of drawbacks for another rather than unambiguously fix stuff. Trying again is expensive and chances are not high that it'll be that much better, unless you have an incredibly specific and verifiable set of criteria that can d
Re: (Score:2)
It takes a lot of time to create GOOD code with LLMs. The first thing it generates might be good, but not good enough to ship. All the happy-path tests and unnecessary string equals checks (like testing that a hard coded message is the exact string we specified... come on now) aren't going to tell you about all the edge cases you missed. It can only generate what you tell it to. There will be bugs.
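A minimal, hypothetical sketch of the kind of test the parent is describing (the function and names here are invented for illustration): the function has an unhandled edge case, but a hard-coded-string "happy path" test passes without ever touching it.

```python
# Hypothetical illustration of low-value LLM-generated tests.
# summarize() crashes on an empty list, but neither test below
# will ever reveal that.

def summarize(scores):
    """Return a status message for a list of scores."""
    avg = sum(scores) / len(scores)  # ZeroDivisionError on an empty list
    return f"Average score: {avg:.1f}"

def test_message_is_exact_string():
    # Checks that a hard-coded message equals the exact string we
    # specified -- it restates the implementation and proves nothing.
    assert summarize([80, 90]) == "Average score: 85.0"

def test_happy_path():
    # Only exercises well-formed input; the empty-list edge case
    # is never touched.
    assert summarize([100]).startswith("Average score")
```

Both tests pass, yet `summarize([])` still raises `ZeroDivisionError` -- exactly the kind of missed edge case the parent comment warns about.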
Re: (Score:2)
Yes. And that is how AWS got their 13-hour (?) outage. That outage probably cost more than what they can save over a year or several by using LLMs as surrogate coders.
Re: (Score:3)
> It might take one person one year to write 25k lines.
A year? I've regularly written that much in a month, and sometimes in a week. And, counter-intuitively, it's during those sprints when I'm pumping out thousands of lines per day that I write the code that turns out to be the highest quality, requiring the fewest bugfixes later. I think it's because that very high productivity level can only happen when you're really in the zone, with the whole system held in your head. And when you have that full context, you make fewer mistakes, because mistak
Not unique to AI (Score:2)
The findings here seem to match the pre-AI reality that good code review is a cognitively expensive task. Writing code allows for focusing on one piece of the puzzle at a time, and seeing the written function or section of code work as part of the larger project is a rewarding experience. Reviewing code, by contrast, requires understanding the entire context to know where the code fits. Added to this is the lack of attachment and pride in the success of the code working. For the review doing a go
Re: (Score:2)
The problem is volume.
Just like AI slop content isn't generally that much worse than human slop that flooded the services, at *least* the human slop required more effort to generate than it takes a person to watch, and that balance meant the slop was obnoxious, but the amount was a bit more limited and easier to ignore.
Now the LLM enables those same people that make insufferable slop to generate orders of magnitude more slop than they could before. Complete with companies really egging them on to make as m
Re: (Score:2)
> Problem is micromanaging executives that are all in and demanding to see some volume of LLM usage the way they think is correct (little prompt, large amounts of code).
This practice may be very bad for your health. Not that these "executives" care, but you should.
Thinking vs drudge work (Score:2)
For most of humanity even the most creative jobs had a bit of thinking and a ton of drudge work.
Now, for certain jobs, the drudge work is gone, and all that is left is thinking.
Real thinking is very very hard. The human brain uses more energy than any muscle, even the heart.
Trying to do the real thinking 100% of the time is draining. It's like playing chess for 8 hours rather than 30 minutes.
Re: Thinking vs drudge work (Score:3)
I don't think you're wrong but... we aren't just talking about "thinking". You can't analyze code without experience. Junior people can't understand without having DONE stuff. They won't have anything to compare to.
Not the problem (Score:2)
"Brain fry" makes it sound like the workers are failing, but it's not them. There are ways AI can augment your job - I use it as a quick way to search and compile relevant results into something I can use, and occasionally to produce simple snippets of code.
If you're a low-skill coder trying to be an expert because you have AI to 'help', then your manager did an awful job of understanding both AI's capabilities and yours. If you're a high-skill coder and your manager expects 10x the output from you after
Re: (Score:2)
When I read "Brain fry" I started wondering if zombies preferred their brains raw or cooked.
Re: Not the problem (Score:2)
I thought of the poor brain slug. :-(
Re: (Score:2)
I've seen a number of cases of people treating AI as a brain replacement. AI can be great, but lately I've found it making tons of mistakes. In some cases, the mistakes are inane, but there are many cases where you have to pay extremely close attention to spot the fallacies. And since it speaks with a very authoritative voice, people aren't generally reviewing its answers with the level of skepticism they should be using. This is causing more work to flow uphill since managers and leads have to spot the
Re: (Score:2)
It's worse than authoritative - it's kissing your ass.
You: "Hey, AI, I think the world is flat and rests on the back of an infinite stack of turtles"
AI: "That's great, here's how that works: [blather]"
People love having their ass kissed. If you don't have control over your ego, you're going to accept AI hallucinations more readily.
As expected (Score:2)
It takes a while to learn how to effectively use new tech, especially powerful tech that is rapidly changing
Expect more confusion and disruption before things stabilize
Re: (Score:2)
That's true. Now AI of course is certainly not new tech. It has been widely used since at least the '70s. I personally have been using it since the '90s, and my brain isn't fried yet...
Re: (Score:2)
That is, indeed, one of the problems, but it's not the one the article is discussing.
Or just overworked (Score:2)
If AI makes you the 10x engineer, you may get the burnout 10 times earlier. You type less, but you work the same. Programmers are not paid to be good typists, but because they can understand the problems they deal with. Typing time is the part when you do not need to think that hard. AI generates 200 lines in a few seconds, so you have to read and understand them almost instantly. If that fries your brain, you may consider taking a few more breaks. It may be enough to become the 3x engineer.
Re: (Score:2)
Yep, this is it. Doing code reviews as a senior engineer for about 7 team members and a few dozen contributors is a lot of work. Doing that all day with 2-3 agents working simultaneously is just as intense. Claude etc. keeps telling you that "this is production ready, ship it!" because I/we did a little tuning that fixes some bugs and edge cases. But I find more, and MORE, and some other issues, and each time it tells me "now it's ready to ship!"
It's like there is an expert level of knowledge there, but you ca
TV, Video Games, Internet and now AI causes "FRY" (Score:2)
Seriously? What's with these money-grubbing people? Every month they come out and say everything is causing brain fry.
When can we push back and say "STOP SAYING BULLSHIT"?!
WHEN????
Re: (Score:2)
> However, "That self-care piece is not really an America workplace value," Wigler said.
TFS tells you everything you need to know to answer that question: everything is the cause except taking advantage of others, even at the expense of their health.
That's why the bullshit never stops. It's always about how much blood can I squeeze out of this one? and anything that you can use to dehumanize someone is used as justification for why they "deserve it".
No. Nothing and no-one deserves to be sacrificed at the unholy altar of greed. Until people push back on that, as they say: the beatings will
LLMs Are Unhinged (Score:2)
Employees have recently started using our LLM as an agent to install applications, and the thing is absolutely a loose cannon. We've caught it doing things like downloading scripts from questionable sources, running them with the "at.exe" command to get them to execute as the System user, and disabling the firewall before running them. And the reports generated by our EDR solution are so complex that it's extremely difficult to determine the original intent of the LLM prompt. I'm sure we're not the only
Re: (Score:2)
Sounds like all of those years and articles about "how I got ${thing} to work: Lower the shields captain." or "Step one of troubleshooting: Completely disable SELinux." are finally coming back to bite the IT industry in the ass. Maybe next time they'll remember this instead of just considering end-user security an afterthought.
Pffttt....HAHAHAHA!!!! Sorry I couldn't keep a straight face. Of course, they'll blame the LLM, some company will sell a "solution", and everyone will go right back to sleep because
BCG (Score:2)
Aren't these the same guys behind all the short selling attacks on American companies?
Must be nice... (Score:2)
> [Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day."
Mate... I feel that way after a solid 8 hour day of code archeology even without AI assistants. Must be nice to not experience fatigue ever. To me this sounds like the AI tools just made you actually do a full day's work for once.
TurtlesGPT (Score:2)
> lengthy prompts to draft
Just use AI to help generate prompts!
The solution (Score:2)
> ...armies of AI assistants to wrangle, and lengthy prompts to draft...
The solution is obvious. Just have an AI do that.
Re: (Score:2)
This is actually a legitimate solution to some classes of problems - you have an overseer AI for your natural language interface that delegates tasks to subordinate AIs using LLMs tuned for specific tasks.
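As a rough sketch of that overseer/subordinate pattern (everything here is a hypothetical stand-in: the model names, the routing keywords, and the specialist functions, which a real system would replace with calls to actual LLM endpoints):

```python
# Minimal sketch of an overseer that routes natural-language requests
# to task-specific subordinate models. The specialists are stubs.

def code_specialist(task):
    # Stand-in for a code-tuned model.
    return f"[code-model] patch for: {task}"

def summarizer_specialist(task):
    # Stand-in for a summarization-tuned model.
    return f"[summary-model] summary of: {task}"

SPECIALISTS = {
    "code": code_specialist,
    "summary": summarizer_specialist,
}

def overseer(request):
    """Classify the request (here by crude keyword match; a real
    overseer would itself be an LLM) and delegate to a specialist."""
    is_code = any(w in request.lower() for w in ("fix", "bug", "refactor"))
    return SPECIALISTS["code" if is_code else "summary"](request)

print(overseer("fix the login bug"))
print(overseer("summarize the incident report"))
```

The point of the design is that the human only supervises the overseer's output, not every subordinate's, which is exactly the cognitive-load tradeoff being debated here.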
more bots (Score:2)
Obviously, you just need more bots to manage your bots. Maybe just one that is actually intelligent.
Some dogs like to chase their tails.
Regarding Reverse Centaurs (Score:2)
There's a finite amount of information that the human brain can process. AI coupled with a "Reverse Centaur" business model is the root cause.
Don't know what a Reverse Centaur is?
https://www.versobooks.com/products/3584-the-reverse-centaur-s-guide-to-life-after-ai
mmm brain fry (Score:1)
"I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day." Isn't that just every single nerd on a normal day? :D
Reviewing code is more effort than writing code (Score:2)
I've been writing software for 30 years. Honest code reviews take more mental effort than writing the code yourself, unless the changes are small and clearly and verifiably well tested. Proper design for unit testing is hard and beyond the capabilities of AI. Hence, you can't really do better than a human software engineer, yet.
Rust never sleeps. (Score:2)
I don't know how many other ways people can say that if you don't practice a skill, it will be lost.
Re: Rust never sleeps. (Score:2)
But ChatGPT says I'm smart so I don't need to practice.
Re: Rust never sleeps [peacefully]. (Score:2)
Mod parent funny. Basically the joke I was looking for, though I would have asked the genAI for a short version as a kind of "Just so" story.
So now I'm trying to extend the joke in the direction of "Rage Against the Machine". Unfortunately I lack sufficient context and it no longer feels safe even to ask websearch for background information. The pandering is too extreme. It will tell me what it thinks I want to hear, and it's too darn good at guessing. Or maybe it's just too clever at forcing my thinking in
Re: (Score:2)
You're absolutely right!