Why One Computer Science Professor is 'Feeling Cranky About AI' in Education (acm.org)
- Reference: 0179445682
- News link: https://news.slashdot.org/story/25/09/21/2331240/why-one-computer-science-professor-is-feeling-cranky-about-ai-in-education
- Source link: https://cacm.acm.org/blogcacm/feeling-cranky-about-ai-and-cs-education/
> Over at the Communications of the ACM, Bard College CS Prof Valerie Barr explains why she's [2]Feeling Cranky About AI and CS Education. Having seen CS education go through a number of we-have-to-teach-this moments over the decades — introductory programming languages, the Web, Data Science, etc. — Barr turns her attention to the next hand-wringing "what will we do" CS education moment with AI.
>
> "We're jumping through hoops without stopping first to question the run-away train," Barr writes...
>
> Barr calls for stepping back from "the industry assertion that the ship has sailed, every student needs to use AI early and often, and there is no future application that isn't going to use AI in some way" and instead thoughtfully "articulate what sort of future problem solvers and software developers we want to graduate from our programs, and determine ways in which the incorporation of AI can help us get there."
From the article:
> In much discussion about CS education:
>
> a.) There's little interest in interrogating the downsides of generative AI, such as the environmental impact, the data theft impact, the treatment and exploitation of data workers.
>
> b.) There's little interest in considering the extent to which, by incorporating generative AI into our teaching, we end up supporting a handful of companies that are burning billions in a vain attempt to each achieve performance that is a scintilla better than everyone else's.
>
> c.) There's little interest in thinking about what's going to happen when the LLM companies decide that they have plateaued, that there's no more money to burn/spend, and a bunch of them fold—but we've perturbed education to such an extent that our students can no longer function without their AI helpers.
[1] https://slashdot.org/~theodp
[2] https://cacm.acm.org/blogcacm/feeling-cranky-about-ai-and-cs-education/
Same old song (Score:2)
> There's little interest in interrogating the downsides of generative AI, such as the environmental impact, the data theft impact, the treatment and exploitation of data workers
Since when has anyone (not counting the people downstream, the theft victims, or the workers themselves) worried or cared about that?
> There's little interest in considering the extent to which, by incorporating generative AI into our teaching, we end up supporting a handful of companies that are burning billions in a vain attempt to each achieve performance that is a scintilla better than everyone else's.
I fail to see how this is any different from now, or from any other point in CS education since at least the 1980s, and possibly before.
> There's little interest in thinking about what's going to happen when the LLM companies decide that they have plateaued, that there's no more money to burn/spend, and a bunch of them fold
Same thing that happened the last dozen times. Everyone will be chasing after the next wave to ride instead of caring about AI. Does anyone even remember, much less actively think about, all of the blockchain companies?
Re:Same old song - but much louder. (Score:2)
Yes, it's the same old song, but the AI bubble plays it on an unprecedented scale: the energy use, the environmental and societal impact, the enormous amounts of money involved, and the concentration of the industry in the hands of a few giant players.
Re: (Score:1)
For a very long time, one of the biggest problems we have had is FOMO -- Fear Of Missing Out.
If something becomes popular for more than 7 minutes, everyone immediately rushes to jump on board. $Billions are spent and wasted, a few people might get rich from it, and then it all collapses. Lather, Rinse, Repeat.
Vibe coding is the new self-driving (Score:2)
The promises started long before the technology could fulfill them. Who is going to do the vibe-cleanup coding if it takes a decade or three for the tech to catch up to the hype?
People who understand how to write reliable maintainable code, of course... But the world seems poorly positioned to produce more of those.
The tricky thing is that LLMs are actually pretty good at implementing homework assignments. It's when you need code beyond that scope that the illusion of competence starts to fall apart.
Re: (Score:2)
> Who is going to do the vibe-cleanup coding if it takes a decade or three for the tech to catch up to the hype?
There are [1]consultants [donado.co] for that.
[1] https://donado.co/en/articles/2025-09-16-vibe-coding-cleanup-as-a-service/
AI is a huge opportunity (Score:2)
If I were in charge of a Computer Science curriculum at a university, I would address the LLM problem like this:
I would offer a third- or fourth-year class that starts from the basics of neural networks, and by the end of which the students have built their own LLM. By building their own LLM, they deepen their understanding, gain a solid foundation, and avoid a lot of the nonsense that gets propagated about LLMs. The amount of code involved is not huge; it's actually quite doable.
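To make concrete just how little code this takes: below is a minimal sketch of a character-level transformer language model, the kind of thing such a course project might build toward (in the spirit of nanoGPT-style exercises). It assumes PyTorch; the class names (TinyLM, etc.) and every hyperparameter here are illustrative, not taken from any specific curriculum.

```python
# Minimal character-level transformer LM sketch. All sizes are toy-scale;
# a real course project would scale these up and train on an actual corpus.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    def __init__(self, n_embd, n_head, block_size):
        super().__init__()
        assert n_embd % n_head == 0
        self.n_head = n_head
        self.qkv = nn.Linear(n_embd, 3 * n_embd)  # fused query/key/value projection
        self.proj = nn.Linear(n_embd, n_embd)
        # lower-triangular mask enforces "no peeking at future tokens"
        mask = torch.tril(torch.ones(block_size, block_size))
        self.register_buffer("mask", mask.view(1, 1, block_size, block_size))

    def forward(self, x):
        B, T, C = x.shape
        q, k, v = self.qkv(x).split(C, dim=2)
        # reshape into (batch, heads, time, head_dim)
        q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        att = (q @ k.transpose(-2, -1)) / (k.size(-1) ** 0.5)
        att = att.masked_fill(self.mask[:, :, :T, :T] == 0, float("-inf"))
        att = F.softmax(att, dim=-1)
        y = (att @ v).transpose(1, 2).contiguous().view(B, T, C)
        return self.proj(y)

class Block(nn.Module):
    def __init__(self, n_embd, n_head, block_size):
        super().__init__()
        self.ln1 = nn.LayerNorm(n_embd)
        self.attn = CausalSelfAttention(n_embd, n_head, block_size)
        self.ln2 = nn.LayerNorm(n_embd)
        self.mlp = nn.Sequential(
            nn.Linear(n_embd, 4 * n_embd), nn.GELU(), nn.Linear(4 * n_embd, n_embd)
        )

    def forward(self, x):
        x = x + self.attn(self.ln1(x))  # residual connections keep gradients flowing
        return x + self.mlp(self.ln2(x))

class TinyLM(nn.Module):
    def __init__(self, vocab_size=128, n_embd=64, n_head=4, n_layer=2, block_size=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, n_embd)
        self.pos_emb = nn.Embedding(block_size, n_embd)
        self.blocks = nn.Sequential(*[Block(n_embd, n_head, block_size) for _ in range(n_layer)])
        self.ln_f = nn.LayerNorm(n_embd)
        self.head = nn.Linear(n_embd, vocab_size)  # next-token logits over the vocabulary

    def forward(self, idx, targets=None):
        B, T = idx.shape
        pos = torch.arange(T, device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)
        logits = self.head(self.ln_f(self.blocks(x)))
        loss = None
        if targets is not None:
            loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
        return logits, loss

# Smoke test: one forward/backward pass on random token IDs
# (using the input as dummy targets just to exercise the loss).
model = TinyLM()
idx = torch.randint(0, 128, (4, 64))  # batch of 4 sequences, 64 tokens each
logits, loss = model(idx, targets=idx)
loss.backward()
print(logits.shape, loss.item())
```

A real course would add a tokenizer, a training loop over a corpus, and sampling code, but the core model genuinely fits in well under a hundred lines.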
I’ve seen this before (Score:5, Insightful)
And every time I mention it, some angry soul mods me down.
This is exactly why universities can seem aloof to industry needs. Industry decides they need everything to be focused on *insert most recent shiny thing*, and suddenly every company is demanding that every new employee aBsOlUTeLY MuST have at least 10 years experience in something that basically didn’t exist 4 years ago. And they blame universities for a “disconnect between the ivory tower and the real world”. The universities roll their eyes, and start making noises about catering to current needs, knowing full well that the bubble will pop and the focus will be replaced by the next shiny thing.
Universities need to be prepping people to use AI, for sure, but it’ll be one skill among many. You know what else employers like to see in their new hires? Decent speaking skills, decent writing skills, an understanding of basic professionalism, the ability to work on a team, basic mastery of the standard CS topics developed over the last 25 years, and an ability to work with non-CS types. And, yes, some skill with upcoming AI/ML/LLM tools. Oh, and all this has to be taught in 4 years. Any university that actually listens to the industry screaming about AI will dump all those other skills, implement 4 full years of AI-centric content, and their CS program will crater like the Tunguska event when the AI bubble pops.
No, LLM models are NOT the start of the singularity. Sam Altman and all the other AI-tech-bros want you to believe that because they want investors to cough up all-the-dollars so they can play in the big leagues of the most recent computing fad.
Maybe I’m wrong and the world will blast past me while I grumble about people on my lawn. I acknowledge that AI will have a significant impact. But, 30 years from now, workplaces will look a lot like they do right now. The main difference is that people will have one more useful tool in their belt.
Re: (Score:2)
I can't make much sense of that rant. Here we have a university professor highlighting that we need to take this AI craze a bit slower, and you seem to agree, but you still start out by complaining about the universities rather than the kool-aid pushers.
Whatever (Score:2)
No real CS student (aka hacker, in the vernacular sense) is ever concerned with (a), let alone (b) or (c). They are only interested in exploring technology and making it do things that the designers and gatekeepers never intended. Read Steven Levy's book Hackers.
Downsides of AI from a technical standpoint (Score:1)
The downsides of AI as discussed by the professor appear to be mostly about social or economic impact. While these are valid points, they read like a discussion by a social science professor rather than a computer science professor. "Exploitation of (data) workers" is exactly the kind of phrase commonly used by social science professors. It would help CS education if there were more discussion of the pitfalls of AI from a computer science standpoint. I can think of two main problems after
When I got my CS degree (Score:2)
I got my CS degree in 1988, just as the personal computer revolution was washing over the world. There was a lot of hand-wringing back then too, about how the computer would take away people's ability to think and do things on their own.
But I didn't make something of my career by wringing my hands about the downsides of the new computer technologies. I was *excited* about the possibilities I could see, and dove headlong into it. The result was a very fun, long, and exciting, not to mention well-paying, career.
Great article (Score:3)
That was an excellent, well thought-out article. Everyone should read it and not just rely on the summary on Slashdot.
Re: (Score:2)
I honestly gained nothing from reading the full article compared to the slashdot summary. What did I miss??
Re: (Score:2)
Look at this guy trying to brag about reading the summary. Real /.ers don't even read past the third word in the title before commenting. After all, why have one computer when you can have two? That's just basic math.
Missing the hidden part (Score:2)
The whole AI question, from the use of copyrighted material for training, to ethical use, to usability, is way, way behind the main hidden topic:
Governments will push AI as far as it can go, because it is going to be integrated into national defense, the military, and military strategy.
Expect a vigorous debate, with only narrowly accepted and approved opinions on both sides, that keeps spinning while the military part progresses.