

AI Tool Usage 'Correlates Negatively' With Performance in CS Class, Estonian Study Finds (phys.org)

(Sunday September 07, 2025 @05:57PM (EditorDavid) from the survey-says dept.)


How do AI tools impact college students? 231 students in an object-oriented programming class [1]participated in a study at Estonia's University of Tartu (conducted by an associate professor of informatics and a recently graduated master's student).

> They were asked how frequently they used AI tools and for what purposes. The data were analyzed using descriptive statistics, and [2]Spearman's rank correlation analysis was performed to examine the strength of the relationships. The results showed that students mainly used AI assistance for solving programming tasks — for example, debugging code and understanding examples. A surprising finding, however, was that more frequent use of chatbots correlated with lower academic results. One possible explanation is that struggling students were more likely to turn to AI. Nevertheless, the finding suggests that unguided use of AI and over-reliance on it may in fact hinder learning.

The researchers say [3]their report provides "quantitative evidence that frequent AI use does not necessarily translate into better academic outcomes in programming courses."
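
For readers unfamiliar with the method, here is a minimal sketch of that kind of analysis in Python using SciPy's spearmanr. The numbers below are invented purely for illustration; they are not the study's data.

    from scipy.stats import spearmanr

    # Hypothetical illustration only -- these values are made up, not taken
    # from the Tartu study. Each position pairs a student's self-reported
    # AI-use frequency with their course score.
    ai_use_per_week = [0, 0, 1, 1, 2, 3, 3, 4, 5, 6]
    course_score    = [88, 91, 85, 90, 78, 75, 80, 70, 65, 60]

    rho, p_value = spearmanr(ai_use_per_week, course_score)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
    # A negative rho means higher AI use tends to rank alongside lower scores:
    # a monotonic association, which by itself says nothing about causation.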

Other results from the survey:

47 respondents (20.3%) never used AI assistants in this course.

Only 3.9% of the students reported using AI assistants weekly, "suggesting that reliance on such tools is still relatively low."

"Few students feared plagiarism, suggesting students don't link AI use to it — raising academic concerns."



[1] https://phys.org/news/2025-08-frequent-ai-hinder-students-academic.html

[2] https://en.wikipedia.org/wiki/Spearman's_rank_correlation_coefficient

[3] https://www.sciencedirect.com/science/article/pii/S2451958825000570?via%3Dihub



Surprising? (Score:2)

by liqu1d ( 4349325 )

Perhaps if you frequently offload your thinking to AI, you would also find it surprising…

Re: (Score:3)

by gweihir ( 88907 )

Well, as most people cannot fact-check, they are forever surprised by actual facts. And I have a nagging suspicion that AI, dumb and without insight as it is, may actually do better than many people at "insight" about common and well-understood topics. But a good encyclopedia basically does the same.

Functionally illiterate (Score:2)

by will4 ( 7250692 )

The surprising statistic is that the UK has 260 colleges and universities and 2,053,520 students (undergraduate and postgraduate), with 663,355 international students not from the UK and not from the EU. [1]https://www.universitiesuk.ac.... [universitiesuk.ac.uk]

When the top universities are excluded, the student enrollment from non-UK and non-EU countries is much higher than the 30% overall number.

A guess is that the lower tier universities/colleges will have a much higher AI use.

A second guess is that there are many colleges/univer

[1] https://www.universitiesuk.ac.uk/latest/insights-and-analysis/higher-education-numbers

Such a surprise (Score:5, Interesting)

by gweihir ( 88907 )

I have two personal data points:

1. My IT security students (several different classes and academic institutions) all view AI as a last resort, or as something to be used only after they have solved a task, to verify they got it all. This comes from negative experiences they have had. They say AI misses important aspects, prioritizes wrongly, hallucinates (apparently IT security is niche enough that this happens often), and that it generally takes more time to check its results than to come up with things directly. They also dislike that you often do not get references and sources for AI claims.

2. I taught a Python coding class in the 2nd semester for engineering students (they needed a lecturer and I had time). The students there told me that AI can at most be asked to explain one line of code; it routinely failed even at two connected lines. And for anything larger it was completely unusable. They also found that AI was often clueless and hallucinated some crap.

Hence I conclude that average-to-smarter students are well aware of the dangers and keep a safe distance. Below-average ones are struggling anyway and may just try whatever they can get their hands on. And at least here, 30-50% of the initial participants drop out of academic STEM courses because it is too much for them. AI may have the unfortunate effect of having them drop out later, but overall I do not think it will create incompetent graduates.

Oh, and I do my exams on paper whenever possible, or online-no-AI for coding exams (so they can use the compiler). The latter is getting more problematic because of integrated AI tools. I expect we will have to move coding exams to project work (on site) or something similar in the near future, and have them take a full day or the like, maybe with group work and pass/fail grading. As I do not teach coding anymore from this year on, I am not involved in any of the respective decisions, though.

Re: (Score:2)

by TurboStar ( 712836 )

> The students there told me that AI can at most be asked to explain one line of code; it routinely failed even at two connected lines.

The smart students are saying this because they can get through your busy work faster with AI and don't want you to know. The dumb students are saying this because they aren't smart enough to manage the AI. You are believing this because why?

Re: (Score:2)

by gweihir ( 88907 )

I believe this because these were discussions about non-graded exercises, they demonstrated both failed and successful AI results, and they can get as much help as they want from me on these exercises. Also, I have a good relationship with most of my students, and discussions are frank and open. On top of that, about 50% of my students (in the coding class, 100%) are working while studying and have professional experience, and this type of student is primarily interested in learning things.

Your level of mistrust, on the other hand, seems to indicate something completely different.

Re: Such a surprise (Score:2)

by paul_engr ( 6280294 )

IMO, it's not "cheating" if (1) it's sanctioned by the requestor, (2) the "cheater" discloses their methods and results, and (3) an honest evaluation and discussion transpires about the meta of the whole ordeal.

Re: (Score:2)

by TurboStar ( 712836 )

> Your level of mistrust, on the other hand, seems to indicate something completely different.

The word you're looking for is experience. Something you've demonstrated a lack of in regards to AI and CS competency.

What? (Score:2)

by SlashbotAgent ( 6477336 )

Having a machine do something for you doesn't make you as proficient when doing it yourself? Shocking!

Next you'll tell me that using a calculator doesn't help you remember your times tables.

Re: (Score:2)

by gweihir ( 88907 )

Nice failure to understand you have there. As it is about building or not building understanding, that is pretty ironic. You nicely demonstrated why _you_ are getting dumber using AI. And here is a hint (although I expect this will be flying right over your head): Information, like tables, you can carry around with you, learning them by heart is pretty worthless. Insight you cannot substitute with a book or a somewhat better search mechanism like an LLM. Insight you either have or you do not.

Re: (Score:2)

by Potor ( 658520 )

> Nice failure to understand you have there. As it is about building or not building understanding, that is pretty ironic. You nicely demonstrated why _you_ are getting dumber using AI. And here is a hint (although I expect this will be flying right over your head): Information, like tables, you can carry around with you, learning them by heart is pretty worthless. Insight you cannot substitute with a book or a somewhat better search mechanism like an LLM. Insight you either have or you do not.

Learning tables is learning patterns. Are you really saying that learning patterns does not extend your knowledge?

Re: (Score:2)

by SlashbotAgent ( 6477336 )

I think you may have failed to understand what times tables are.

> Information, like tables, you can carry around with you, learning them by heart is pretty worthless.

Good luck finding a single math teacher on the planet that agrees with you on this topic.

Re: What? (Score:2)

by paul_engr ( 6280294 )

I had a half dozen. Going back to the "can we use calculators?" question in K-12, most said no, learn to do the damn math first. Same goes for tables. What if your table is wrong? 9*9 != 87. If you can't think about it and independently draw the same conclusion yourself by other means, the table is useless. Basic multiplication is a skill one should have proficiency in.

Re: (Score:2)

by SlashbotAgent ( 6477336 )

Replied to wrong comment.

Correlation does not mean causation... (Score:4, Insightful)

by Fons_de_spons ( 1311177 )

Don't jump to conclusions... Maybe the students that struggle are driven to LLMs to get things done.

Re: (Score:2)

by znrt ( 2424692 )

it's really simple: if you want to get more shit done, use more ai. if you want to learn, use less.

Re: (Score:2)

by Ritz_Just_Ritz ( 883997 )

It's kinda like laying work off on bottom feeder outsourced labor consultancies. You'll absolutely generate a lot of "activity," but generating better outcomes???...not so much.

Re: (Score:2)

by gweihir ( 88907 )

The question is how much outsourcing actually helps. From my experience, as soon as you need to get actual work done, you need to use local, small outsourcers, because they will care about you and will actually try to do good work. Forget about any well-known names. They just want your money and want to string you along for as long as possible. Actually solving your problems is not part of their business model and would decrease their profits.

Re: (Score:2)

by StormReaver ( 59959 )

> it's really simple: if you want to get more shit done, use more ai. if you want to learn, use less.

I will modify that a bit to fit reality. If you want to get more shit done, but results don't matter, use AI. If you want to learn, or if you want a higher correctness ratio, forego AI.

Re: (Score:2)

by gweihir ( 88907 )

Indeed. There is nothing wrong with using AI as an additional plausibility check, but that is essentially it for any work that requires insight for good results. Note that a lot of big names get filthy rich by not caring about the quality of their work. These are clearly evil (doing massive damage to others for a comparatively much smaller gain to themselves), but that seems to be something the human race can well afford at this time. Or not.

Re: (Score:2)

by gweihir ( 88907 )

For the first case, that really depends on what you want to get done. For the second part, yes.

Re: Correlation does not mean causation... (Score:2)

by paul_engr ( 6280294 )

More shit, damn the accuracy

Re: (Score:2)

by Fons_de_spons ( 1311177 )

Allow me to clarify... I teach a simple basic programming course. I see the talented ones writing games in a matter of weeks. A little ChatGPT, some googling, usually when they are stuck. Then there are the ones who just do not like computers. Everything is a struggle. They want to spend as little time as possible on the course. ChatGPT is their best friend. So I am not surprised by the results of the study. They are kind of obvious to me. Of course, there should be no people in a CS course who do not like computers.

Re: (Score:2)

by Tony Isaac ( 1301187 )

But this is *negative* correlation, maybe that does mean causation???

Hinges Strongly on "HOW" They Use AI (Score:5, Informative)

by Slicker ( 102588 )

Initially, I found the same in myself -- a real degradation overall in my productivity. I am a software engineer. It has not been easy learning how to use generative AI to actually increase and improve productivity. At first, you think it can do almost anything for you, but gradually you realize it greatly over-promises.

Overall, the key is that you need to remain in charge of your work. It is an assistant that can, at best, be entrusted with small tasks under oversight. For example, frame out your project and clearly define your rules and preferences in fine detail. Then:

It's good at:

- Researching, summarizing, and giving examples of protocols, best practices, etc.

- Helping you identify considerations you might have overlooked.

- Writing bits of code where the inputs/outputs and constraints are clearly defined.

It's bad at:

- Full projects

- Writing anything novel (it knows common patterns and can't work much beyond them).

- Being honest and competent -- it cheats on writing code and on writing tests for that code; when you catch it red-handed, it will weasel its way out.

The bottom line: you are in charge. You need to review what it gives you. You need to work back and forth with it.
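
As a concrete (and entirely hypothetical) illustration of the "clearly defined inputs/outputs and constraints" point above, a task small enough to delegate with oversight might look like the sketch below, with the spec in the docstring and the reviewer's own checks kept separate. The function and its behavior are invented for this example, not taken from the post.

    # Hypothetical example: a task scoped tightly enough to hand to an AI
    # assistant, plus the human review step that stays with you.

    def chunk(items: list, size: int) -> list:
        """Split items into consecutive chunks of length size.

        Constraints worth spelling out before delegating:
          - size must be a positive integer, otherwise raise ValueError
          - the last chunk may be shorter than size
          - the input list is not modified
        """
        if size <= 0:
            raise ValueError("size must be positive")
        return [items[i:i + size] for i in range(0, len(items), size)]


    # Reviewer-run checks -- the "you need to review what it gives you" part.
    assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
    assert chunk([], 3) == []
    try:
        chunk([1], 0)
    except ValueError:
        pass
    else:
        raise AssertionError("size=0 should have raised")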

Also -- I am still learning.

--Matthew

Re: (Score:2)

by gweihir ( 88907 )

While I mostly agree, there is another aspect: Coding is a skill that needs to be practiced. If you stop writing the simple things yourself, you will get worse at writing the harder stuff.

Re: (Score:2)

by ModernGeek ( 601932 )

I feel like a lot of these same observations can be made about people who over-rely on libraries and frameworks and don't actually understand or have any control over what they're doing. At a certain point, the AI is just patching frameworks together and implementing libraries without understanding them, just like some programmers do.

In Other News (Score:2)

by John Allsup ( 987 )

It turns out that even though you can cover 5 miles quicker in a car, it negatively correlates with health outcomes compared to running or cycling the same distance. Using AI is like taking a taxi.

Re: (Score:3)

by gweihir ( 88907 )

That is actually a very good comparison. Skills and insights need to be used to maintain them and even more so to improve them.

This is the google calculator effect (Score:2)

by paul_engr ( 6280294 )

I have worked with a number of lazy engineers who use the Google search bar as a calculator. As such, they don't write shit down and can't remember their answer five minutes later. They tend to be smart, but lazy as fuck. I suppose the AI kids have the same ranks: there are folks who take notes and make an effort regardless of the tools they use, and then there are the folks who will copy from the ChatGPT window and not read or recall the "answer"...

Funny, Seems like a given to me! (Score:2)

by oldgraybeard ( 2939809 )

Students who don't do the work don't learn much!
