News: 0180652284

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

The Risks of AI in Schools Outweigh the Benefits, Report Says (npr.org)

(Sunday January 25, 2026 @11:34AM (EditorDavid) from the school-daze dept.)


This month saw results from a yearlong global study of " [1]potential negative risks that generative AI poses to students ". The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent risks and maximize benefits:

> After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits.

"At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," [2]reports NPR — "how they learn new skills and perceive and solve problems."

> The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically...

>

> Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic — it has been designed to reinforce users' beliefs... Winthrop offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes — this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem."

AI did have some advantages, the article points out:

> The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" — and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week and about six weeks over the course of a full school year...

>

> AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, [warns Rebecca Winthrop, one of the report's authors and a senior fellow at Brookings]. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."
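The two time-savings figures quoted above are roughly consistent with each other. A quick back-of-the-envelope check, assuming a 36-week school year and a 40-hour work week (both assumptions; neither figure comes from the report itself):

```python
# Sanity-check the reported teacher time savings.
# Assumed figures (not from the report): 36-week school year, 40-hour work week.
hours_saved_per_week = 6
school_year_weeks = 36

total_hours_saved = hours_saved_per_week * school_year_weeks  # 216 hours
work_weeks_saved = total_hours_saved / 40                     # 5.4 work weeks

print(total_hours_saved, round(work_weeks_saved, 1))  # → 216 5.4
```

About 5.4 forty-hour weeks, which matches the report's "about six weeks over the course of a full school year" within rounding.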

The report calls for more research and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell"). But this may be its most important recommendation: "Provide a clear vision for ethical AI use that centers human agency..."

"We find that AI has the potential to benefit or hinder students, depending on how it is used."



[1] https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/

[2] https://www.npr.org/2026/01/14/nx-s1-5674741/ai-schools-education



Inevitable (Score:2, Insightful)

by Anonymous Coward

As long as schools are narrowly focused on just results, I believe this will become the norm for human thinking. When these children graduate, they'll continue to have access to the tools that allow them to get the results they need for life. Their way of operating in the world will become ubiquitous and, sadly, most people won't care. It all seems so dystopian to me.

Re: (Score:1)

by test321 ( 8891681 )

It's inevitable that it will be used in adult life, but schools still teach subjects whose skills have long since been taken over by digital tools. We have had calculators for over 50 years, but we still teach hand and mental calculation. Same with handwriting, pencil/painting art, foreign languages.

We must ensure that AI can be used in some classes, but not all. I see that schools still rely on paper and pencil, and that phones are being banned from classrooms, so many countries are on the right trajectory.

Idiocracy (Score:3)

by LazLong ( 757 )

Maybe AI is how Idiocracy truly comes about?

Re: (Score:1)

by Anonymous Coward

Probably just sleep mode.

Re: (Score:3)

by gweihir ( 88907 )

That would be very unsurprising.

Such a surprise (Score:1)

by gweihir ( 88907 )

Wanna bet that most schools will do it wrong and just use it to do more crappy teaching cheaper?

Re: (Score:2)

by noshellswill ( 598066 )

You're pimping-the-ride for huge *.ai industrial data companies. A free person/culture ALWAYS has the "if" option. It's fellow-travelering pervos like you and of-course greedy "big data" that want to remove that personal power "if" from the discussion -- damned straight. I would remove *.ai from all K-12 school experience as I would remove all smartphones & "computer instruction". Stay human ... face-2-face ... Socratic method all day every day. And of-course 2-hours of vigorous physical play.

AI/LLM is the equivalent of an Intern (Score:2)

by gurps_npc ( 621217 )

A lot of scut work can definitely be done by AI, but you should not consider it more trustworthy than an intern. It should not be trusted with:

Any legal work not reviewed thoroughly by a real human lawyer (criminal or civil)

Any child care

Anyone's money

Human health

The life of any animal you care about

Kids should be learning about how it's bad (Score:2)

by drinkypoo ( 153816 )

If you want to protect children from AI, you're going to have to educate them about how AI fails. Not accidentally, by letting them use it and experience those failures by themselves because they might not run into them or might not understand them, but with directed age-appropriate education about how it works. Kids need to understand that getting into the van with the candy also has other consequences.

Millennial teachers should know better (Score:2)

by xack ( 5304745 )

They were at the point where pen and paper was just being complemented with computers and the early internet, but they need to teach gen beta (that's what we are up to now) the ways of the pen, paper, and book, and not just some chatbot talking out its clanker ass. When moving on to computers, they should be using content properly peer-reviewed by expert humans, not whatever Silicon Valley dreams up. We had the same issue with people making things up on Wikipedia and Tumblr; now we just have another source of idiocy.

Now for... (Score:2)

by joshuark ( 6549270 )

Now for something entirely different (allusion to Monty Python)... NVidia, Anthropic, OpenAI, etc. will now fund a study to release a report about how AI is a great asset in schools, etc. The benefits of AI/ML are reminiscent of the studies about coffee in the 1980s and 1990s. First coffee is bad for you, drink de-caf. Then de-caf is bad for you, drink coffee, and the band plays on...

--JoshK.

"Mach was the greatest intellectual fraud in the last ten years."
"What about X?"
"I said `intellectual'."
;login, 9/1990