
Hey programmers – is AI making us dumber?

(2025/02/21)


Opinion I don't want to sound like an aging boomer, yet when I see junior programmers relying on AI tools like Copilot, Claude, or GPT for simple coding tasks, I wonder if they're doing themselves more harm than good.

AI has made doing simple jobs much easier, but it's only by learning the skills you need for minor tasks that you can master the abilities you'll need for major ones.

I'm not the only one who worries about that. Namanyay Goel, an independent developer, recently wrote a [1]blog post - with more than a million hits - that clearly struck a nerve. Goel wrote:

Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They're shipping code faster than ever. But when I dig deeper into their understanding of what they're shipping? That's where things get concerning.

Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares.

The foundational knowledge that used to come from struggling through problems is just… missing.

We're trading deep understanding for quick fixes, and while it feels great in the moment, we're going to pay for this later.

I agree.

I'm not saying you need to learn the skills I picked up in the '70s and '80s with IBM 360 Assembler and Job Control Language (JCL). That would be foolish. But, by working with such tools, I grokked how computers worked at a very low level, which, in turn, helped me pick up C and Bash. From there, I wrote some moderately complex programs. I can't say I was ever a great developer. I wasn't. But I knew enough to turn in good work. Will today's neophyte programmers be able to say the same?


I wonder. I really do.


As Goel said: "AI gives you answers, but the knowledge you gain is shallow. With StackOverflow, you had to read multiple expert discussions to get the full picture. It was slower, but you came out understanding not just what worked but why it worked."

Exactly so. In my day, it was Usenet and the comp newsgroups – yes, I'm old – but at its best, the experience was the same. The newsgroups were made up of people eager not just to address how to solve a particular problem but to understand the nature of the problem.


This isn't just two people spouting off. A recent Microsoft Research [6]study, The Impact of Generative AI on Critical Thinking, found that among knowledge workers, "higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking." Further, "used improperly, technologies can and do result in the deterioration of cognitive faculties."

Another study by Michael Gerlich at SBS Swiss Business School in Zurich, Switzerland, also found "a negative correlation between frequent AI use and critical thinking abilities." Grant Blashki, a professor at the University of Melbourne, agrees.

Blashki wrote: "It's a simple case of 'use it or lose it.' When we outsource a cognitive task to technology, our brains adapt by shifting resources elsewhere – or just going idle. Convenience comes with a cost. If AI takes over too much of our cognitive workload, we may find ourselves less capable of deep thinking when it really matters."


That's bad. It's especially bad when people are still learning how to think in their field. Sure, we get faster answers, but as Blashki noted: "It's the difference between climbing a mountain and taking a helicopter to the top. Sure, you get the view either way, but one experience builds strength, resilience, and pride – the other is just a free ride."

Besides, as much as you may want to turn over all your work to an AI so you can get back to watching Severance or The Night Agent, you still can't trust AI. AI chatbots have been getting better at not hallucinating, but even the best of them still do it. Even with programming, my ZDNet colleague David Gewirtz, who's been testing chatbots for their development skills for two years, observed: "AIs can't write entire apps or programs. But they excel at writing a few lines and are not bad at fixing code."


That's nice, but it won't help you when you need to write a complex application.

So, what should you do? Here's my list:

- Don't treat AI as a magic answer box. Trust, but verify its answers.

- Use AI results as a starting point. For programming, work out how it's solving your problem and consider whether there's a better way.

- Look for sites where the smart people are talking about your field of expertise. Ask questions there, answer questions there, and study how others are dealing with their problems. Get involved with your colleagues' professional conversations.

- When you do code reviews, don't stop when the code works. Look deeper into understanding the process (see the sketch below).

- Last but not least, try coding, writing, or whatever from scratch. Stretch your mental muscles.
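
To make that concrete, here is a minimal, hypothetical sketch in Python (the function and data are invented for illustration, not taken from any of the studies above) of the kind of AI-suggested snippet that works on the happy path but deserves the edge-case questions a reviewer should be asking:

    # Hypothetical AI-suggested helper that parses "key=value" config lines.
    # It handles the obvious case, but a reviewer who digs deeper should ask
    # about the edge cases the assistant never mentioned.
    def parse_config(lines):
        """Parse 'key=value' lines into a dict."""
        config = {}
        for line in lines:
            key, value = line.split("=")          # What if a value itself contains '='?
            config[key.strip()] = value.strip()   # What about blank lines or '# comment' lines?
        return config

    if __name__ == "__main__":
        print(parse_config(["host=example.com", "port=8080"]))   # Happy path: works.
        # parse_config(["greeting=a=b", "", "# note"])            # Edge cases: raises ValueError.

Working through why those commented-out inputs fail, and how you would handle them, is exactly the kind of struggle that builds the understanding Goel says is going missing.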

Blashki said it best: "The goal isn't to reject AI – it's to create a balanced relationship where AI enhances human intelligence rather than replacing it." ®




[1] https://nmn.gl/blog/ai-and-learning


[6] https://www.theregister.com/2025/02/11/microsoft_study_ai_critical_thinking/






ArrZarr

Current LLMs are like junior team members: they work quickly, but all their work needs checking over.

You can't rely on one to do a task you don't understand yourself, because you won't be able to check its work.

You also need to have done your reps without the LLM to get the fundamentals internalised - how else will you learn about the edge cases which are going to bite you in the arse?

Assuming you're using them appropriately, as above, they let you spend more time working on the knotty problems and on how the whole thing will fit together in the complicated system you're putting together - something I don't think current LLMs are up to internalising.

(My favourite thing about getting an LLM to do the legwork on a task for me is that it's then capable of banging out reasonable documentation for the process, which gives me something to work from. Writing documentation from scratch on a given process is incredibly painful for me, so getting something close generated that I can tweak is a huge time saver)

Anonymous Coward

Agree BUT not a fan of taking Documentation that is 'Almost what you want' then changing it !!!

I find writing good documentation helps to double check the thought process used to create the code in the 1st place.

As ever ... horses for courses !!!

:)

ArrZarr

Ah, but surely the act of taking a first draft of the documentation then editing issues with it is a very similar process to double-checking the thought process?

As mentioned above, writing documentation is one of my weakest areas. I need the whole concept to crystallise in my mind (along with how to explain it without the use of a conversation) before I can get the words to paper - and that crystallisation will not be rushed, much to my frustration. Using an LLM to give me that first draft then working through it and editing it to be more accurate will still lead me to think through each aspect of the process.

Ding Ding Ding !!!

Anonymous Coward

100% agree.

If you don't learn how to think through problems because 'AI' does it for you, then you are totally 'owned' by whoever provides the 'AI'.

This is NOT just to do with coding BUT applies to all areas at which 'AI' is being aimed.

A deliberate 'enfeeblement' of people's ability to think.

After it becomes the norm and is depended upon you cannot go backwards because the 'skills' have been lost.

Question:

What happens when 'AI' goes offline, as all these things will do, since 100% uptime is never reached for 'Global' systems?

[Think ... Hacks, ransomware attacks, Human error etc etc (as 'AI' will never be able to maintain 'AI' although they will try :)]

:)
