Will Some Programmers Become 'AI Babysitters'? (linkedin.com)
- Reference: 0181651770
- News link: https://developers.slashdot.org/story/26/04/12/2225245/will-some-programmers-become-ai-babysitters
- Source link: https://www.linkedin.com/pulse/computer-science-education-ai-era-maggie-johnson-o4ygc/
> "AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post [2]. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert.
>
> "While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs."
>
> The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code [3].
[1] https://www.slashdot.org/~theodp
[2] https://www.linkedin.com/pulse/computer-science-education-ai-era-maggie-johnson-o4ygc/
[3] https://www.nytimes.com/2026/04/06/technology/ai-code-overload.html?unlocked_article_code=1.ZlA.3dGh.OTvxRV0J6ZVG&smid=url-share
Re: Maybe I'm missing something (Score:2)
Given the small-scale plans that AI is already able to make, I really don't think it'll be long before it can choose an architecture that works, make a document describing that architecture, then follow step-by-step instructions to build it. At the moment it breaks down small problems into immediately actionable steps, and does them. It won't be long before it's able to do that recursively and then iterate on what the best design is. It also won't be long before
Re: (Score:3)
> Given the small-scale plans that AI is already able to make, I really don't think it'll be long before it can choose an architecture that works, make a document describing that architecture, then follow step-by-step instructions to build it. At the moment it breaks down small problems into immediately actionable steps, and does them. It won't be long before it's able to do that recursively and then iterate on what the best design is. It also won't be long before it's better at it than software engineers. We typically focus on one area, thinking about the general effects on the rest of the system only. An AI will be able to make detailed plans that consider all the interactions with the rest of the system.
Someday in the distant future it may even be able to bring full Unicode support to slashdot!
Re: (Score:2, Funny)
Some day soon people will use a real computing device instead of a kids toy to post to Slashdot.
Re: (Score:2)
" An AI will be able to make detailed plans that consider all the interactions with the rest of the system."
As I understand it: if your target system is complex enough to require multiplication (convolution), then "detailed plans" addressing everything is exactly what cannot be done.
Re: (Score:2)
It demos well, and for some scenarios demo == the application, however it's pretty bad at meeting specific requirements. If the context allows the requester to be flexible about their requirements and the scenario is pretty well trodden, then codegen has a shot at working from a relatively normal 'manager' level prompt. However, a key issue remains that when in doubt, it generates something that doesn't actually do things correctly but superficially resembles things being done correctly. If you are blindly
Re: (Score:2)
You're falling for the AI sales methodology hook, line and sinker. It's being sold not on what it does, but what it might be able to do in the future.
This may have seemed plausible in the ChatGPT 3 era, when it still seemed like doubling model complexity led to real gains. But now we're all stuck on 4, because that has slammed into the wall of diminishing returns. It's also mathematically impossible to eliminate hallucinations. This technology is a dead-end path. Don't be so credulous.
Re: (Score:2)
It's also mathematically impossible to eliminate hallucinations.
They're not "hallucinations." The LLM cannot "lie" to you. It's simply trying to predict the next word (or part of a word, a token). That's it. There's no intent. There's no reasoning. There's a massive lossy compression across a corpus of insane amounts of human text, combined with some human and some automated reinforcement feedback training. People cannot seem to understand that, no matter how generic the text gets or how the chatbot keeps
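The "predict the next word" mechanism being described can be illustrated with a deliberately tiny sketch: a toy bigram frequency table over a made-up corpus. This is nothing like a real transformer (no learned weights, no tokenizer, no sampling temperature), but it shows the same basic character of the process: a statistical guess at what usually comes next, with no intent behind it.

```python
# Toy "next word predictor": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()

follows: dict[str, list[str]] = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def next_word(word: str) -> str:
    # Pick the most frequent continuation seen in the "training data".
    # No reasoning, no truth-checking: just frequency.
    candidates = follows.get(word, [])
    return max(set(candidates), key=candidates.count) if candidates else ""

print(next_word("the"))  # "cat" followed "the" twice, "mat" once, so: cat
```

If the corpus happens to contain false statements, the model will happily continue them just as fluently; "hallucination" is simply this mechanism operating as designed.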
Re: (Score:2)
LLMs will get better, and LLMs will eventually be replaced/augmented with better architectures and algorithms. Eventually we'll get to human level AGI, capable of continual on-the-job learning, able to pick up senior developer skills if you let it progressively learn on simpler projects, the same way humans progress from junior to senior.
The timeline for this is whatever you guess the timeline to be for development and deployment (no longer just "pretrain and ship") of human-level AGI.
In the meantime there
Re: (Score:2)
I think babysitting is a good analogy because the first step is making the space unviable for full adult functioning by removing anything it could possibly interact with that is dangerous or has actual repercussions because it’s clueless and lacks basic reasoning and has no “common sense” whatsoever. It’s also a demanding task that has no real world application beyond attempting to grow something at great expense with no payoff until many years later and even that’s not guaran
Re: (Score:3)
40 years ago the best chess playing computers could beat almost everybody except good club players
30 years ago few other than top GMs could beat them
20 years ago even GMs struggled to beat them
10 years ago a GM was doing well if they could draw.
I see "AI" programming going the same way. Claude is really good at writing code given good constraints but some things are completely beyond it. It's written code in seconds that would have taken me hours, and it's taken a day to fail to solve a single repeatable cr
Re: I'm already a manager babysitter (Score:2)
Easy: he will be replaced within the next 6 months.
It's all about accuracy (Score:2)
Will you board a plane if you learn that the controllers use AI generated code? We still board a plane because we trust the accuracy of humans. So it's just a matter of time when AI will surpass humans in this benchmark as well. Until then happy babysitting.
Re: (Score:1)
> Will you board a plane if you learn that the controllers use AI generated code? We still board a plane because we trust the accuracy of humans. So it's just a matter of time when AI will surpass humans in this benchmark as well. Until then happy babysitting.
That's an interesting analogy. If we really trusted the accuracy (and performance) of humans, we wouldn't need a co-pilot. We trust groups of humans (i.e. redundancy). And in just about all of the AI implementations I've seen, including science fiction, there is always One computer/AI driving the ship or otherwise running things. I wonder if we'll end up expecting redundancy when it comes to AI, not just hardware, but having one AI check on, monitor and possibly take over from the other.
Re: (Score:2)
Do you remember when commercial jets required three people? We didn't change it to 2 because humans became better or more trustworthy. It was never about "redundancy": it's workload. And much of it has been automated out, but the workload of actually operating a modern jet is still higher than what one person can safely and meaningfully handle.
Don't try to reframe things you don't understand to make your points incorrectly. It's a bad look.
AI is a huge problem for programmers (Score:5, Insightful)
There's basically two options. Either it works or it doesn't.
If it works, it's basically going to be doing grunt work. It's all well and good to say it frees you up for the hard work, but that means you now have a 24/7 job doing the hard work. You no longer get an hour or two of downtime resting your brain every day. You are expected as an employee to be on 24/7, producing high-quality novel code.
And if it doesn't work then yeah you are an AI babysitter. But you're still going to be treated as if the code tool works so your productivity is expected to go up.
There is absolutely no winning this.
Re: (Score:2)
If I get a strange error code from my app (an error which I am not familiar with, usually caused by some 3rd part library we use), I feed it to the AI and AI will usually about 80% of the time guess correctly what is wrong, I check if that was the case and then I fix the error. Traditionally that was long hours of googling and reading manuals trying to figure out what is wrong. I did not enjoy it, nor did I rest when doing it. Using AI like that feels pretty relaxing to me.
Re: (Score:3)
No doubt every ill-conceived idea that can be tried is being tried, but the math doesn't really work on that one. How can the same person be 10x more productive generating code that they are then personally expected to review?
The "solution" to this is either you just don't review the code, since you didn't 10x your manpower to review the 10x more code, or you just issue some impossible mandate like Amazon just did (when some junior dev's AI slop took down one of their production systems) and insist that the
Re: (Score:2)
I used to agree with you that AI basically did grunt work. But in recent months, the tools, like GitHub Copilot, have gotten significantly better doing things that went beyond grunt work.
For example, I wanted to add Lucene (the engine behind ElasticSearch) to my application. Not knowing how Lucene worked, I prompted AI to add it, and told it what kinds of queries I wanted to support. It generated the code to my specs and made it work. Then, Lucene being a complicated beast, some searches come back with scor
Too much typework (Score:5, Interesting)
I let chatgpt write a little gui for a hobby project I made. A few prompts later and I had a working GUI for my python program that automatically generates excel sheets for my colleagues.
Then the babysitting started. My God... I had to think of everything that could go wrong and tell it what to do in that case; meanwhile it lost track of previous requirements more than once and wiped them out. Simple example? The user has to type in a number; the user should not be able to type in a letter, or a negative number... I got sick of all the explaining I had to do at some point. I was typing in lengthy paragraphs and gave up.
The GUI was good enough for my purposes; it was OK if you followed the steps one after the other. I got further than I would have gotten if I had written it myself, and the program became a lot more usable. It was able to save settings in a JSON file and reload them. You could set up the program and hit generate, as long as you did not deviate too much from the intended workflow. The good news? I got a working GUI very fast. The bad news? No way I would use this in a professional environment. I'd do it all manually. It probably would have been less typework. I would have gotten fewer features, but it would not misbehave if you typed in something wrong or hit the buttons in the wrong order.
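For what it's worth, the input checking described above (accept only a non-negative number, reject letters) is only a few lines of Python once you write it by hand. This is a minimal sketch with a hypothetical function name, not the poster's actual code; in a Tkinter GUI it could be wired to an Entry widget via the validate="key"/validatecommand options.

```python
def is_valid_quantity(text: str) -> bool:
    """Accept only non-negative numbers; reject letters and negatives."""
    if text == "":
        return True  # allow an empty field while the user is still typing
    try:
        return float(text) >= 0
    except ValueError:
        return False  # letters, stray symbols, etc.

print(is_valid_quantity("42"), is_valid_quantity("-3"), is_valid_quantity("abc"))
# prints: True False False
```

Explaining that requirement to a chatbot in prose can easily take more typing than the function itself, which is roughly the poster's point.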
Is that a good summary of using AI in programming? It makes nitwits think they can do anything in a few prompts; the sky is the limit! Meanwhile, the people on the work floor know that its output still needs a ton of revising before you could even consider releasing it.
Re: (Score:2)
Don't worry man, if you're using Python you're not a real programmer.
Re: (Score:2)
Honey, talking about real programmers is something only the fresh ones do. I hate python.
Re: (Score:2)
> meanwhile it lost track of previous requirements more than once and wiped that out
It's best to start off by telling the AI to write an implementation proposal for your project and get it to put all your little requirement details in that. Then you can tell it to implement the plan in phases. Revise the plan later if necessary. That way everything is documented and the AI knows exactly what to do.
Re: (Score:2)
Your experiences aren't surprising. However, one issue is that you seem to be using ChatGPT (web?) to do this. If you use an IDE integrated with AI, such as Cursor or Visual Studio Code + GitHub Copilot, you will likely get much better results. This is because every time you give it a prompt, it uses the existing code as context, even if it "forgets" what you prompted it earlier.
And now they've finally seen... (Score:2)
The problem of replacing programmers with generated crap.
It's like watching a car crash in slow motion.
That CEO vibe coding hasn't got the competence to reboot his PC, never mind work out complex interactions inside software.
Who'd have seen that coming?
Every coder.
No They're Not (Score:3)
> The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.
No, they're struggling to find engineers who accept the pittance they're offering. Pay them, and they'll do it.
not saying I want this but... (Score:2)
It's pretty obvious that the Agents will be trained for different functions. Red Team AI vs Blue Team AI.
One Agent writes it, another Agent reviews it. Just like old times, huh?
Re: (Score:2)
That's better than nothing if you're using "LLM as judge" to try to catch the errors in your RAG outputs, but if you're talking about "code review" (or whatever we should call critiquing voluminous AI slop generated by junior developers), then the problem is that AI isn't yet at the level to do that (and likely won't be until we develop human-level AGI).
If AI was good enough to do meaningful code reviews, then it wouldn't be writing crap code in the first place.
Bad ideas all around (Score:3)
People are avoiding CS like the plague because they don't see a future. Those who don't avoid it are getting fucked over by the AI rug pull and can't get jobs. Those still in it are constantly being harassed by human-dinosaur rhetoric and expectations of becoming reverse centaurs. Unless you run a code mill, babysitting an LLM is ultimately more difficult than just coding it yourself. The lack of net productivity gain once you figure in lifecycle costs speaks for itself.
Few are likely to be willing to invest the time and effort to become proficient in intermediate skills such as prompt engineering, agent wrangling, etc., when the lifetime of acquired skills is measured in weeks and months and may not even translate across models or systems.
There seem to be two possibilities for the medium term future. Either AGI renders humans obsolete or the obliteration of CS pipeline due to magical thinking results in significant supply shortage.
Re: (Score:2)
> People are avoiding CS like the plague because they don't see a future. Those who don't avoid it are getting fucked over by the AI rug pull and can't get jobs.
Is that all AI's fault? I honestly don't know how bad the job market for beginning coders is.
I graduated from undergrad a bit more than 20 years ago with a computer science degree. At the time there were less than 100 majors per year. This was roughly 4-6% of the student body. Comp sci was well behind economics, public policy, biology, political science, and maybe some others in terms of popularity.
Starting in the 2010s, the number of computer science majors started to grow very rapidly. In 2024 there were almo
New job (Score:2)
Hiring for AI security officer. Job description: sit by the host server's hardware in 8 hour shifts, right next to the purple Ethernet, with a machete in hand at all times.
Consider it career acceleration (Score:2)
It is the fate of many senior programmers to become babysitters of junior programmers. Now that the juniors are AI, that kind of moves everybody up a rung. At this rate, we might see new programmers turn into middle managers by year 5!
Re: (Score:2)
Some companies have just stopped hiring juniors and interns, so there is no-one at the bottom to career accelerate. Doesn't seem like a very well thought out "plan" ("we'll just hire more seniors unfamiliar with our business when the current ones leave"?), but they are doing it nonetheless.
What happens when AI can do the senior work too? (Score:2)
What happens when the AIs gain the capability for the "contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system ... to recognize when a piece of code is sub-optimal or dangerous in a production environment"?
We've already seen Anthropic et al's report on Mythos for security assessment.
Just saying, we (and I) have been wrong in the past by saying "AI can't.." and mistaking that to mean "AI will never..."
Charlie Chaplin (Score:2)
> While large language models can generate functional code in milliseconds,
But the babysitters will be expected to keep up with the LLM's output. It'll be like the assembly-line scene in Modern Times.
Great timing (Score:2)
I retired from SW development in April 2023. Looks like perfect timing; everyone I know who is still working in the field hates it.
"shifts from author to technical auditor or expert (Score:2)
As a software developer myself, I see nothing wrong with that. I've spent a lot of years tediously grinding out code that does some essential but pretty boring stuff. Now I just get the AI to do the grinding. These days it does an amazingly good job with little effort on my part, and I'm getting better results with fewer prompts than just a few months ago. The improvements over the past year have been incredible.
I do understand people's concerns about the skill pipeline though. I know what components I want
How do you develop that skill (Score:4, Insightful)
If you replace junior programmers with AI and rely on senior ones with the knowledge to review its output, how do you develop the next generation of senior programmers?
Re:How do you develop that skill (Score:4, Interesting)
I think they hope that in theory, by the time the senior programmers retire, you'll be replacing them with the AI as well.
In practice, none of the people involved seem capable of thinking in terms of a timespan longer than their next round of bonuses.
Re: (Score:2)
This is pretty much it when you're dealing with VPs, the C-suite and above. Every action taken is usually for next quarter's results; at most it spans out to end-of-year results. Almost NEVER longer.
Re: (Score:2)
Real talk - If you work at a public company and you don't have a seat in the boardroom complete with name plaque, you might well be laid off at any moment, for just about any reason. There is no corporate loyalty any more.
So should anyone in that sort of organization ever think past the time-span involving their next bonus? People have long accused C[X]Os of not looking past the quarterly earnings report, or past their next bonus, but maybe the rest of the workforce just needs to get the memo that you either
Re: How do you develop that skill (Score:2, Insightful)
Don't worry, it won't be long before AI can make these more architectural decisions. Senior programmers and architects seem to be living in a weird fantasy that the AI is not coming for their jobs too. No, software engineers won't become AI babysitters. Managers will. Software engineers will become jobless.
Re: How do you develop that skill (Score:4, Insightful)
That's the issue - it's all or nothing, just with weird caveats. Either:
1. The AI can do everything an engineer can do, in which case some business management person might come back and tell it that it was wrong with some assumptions on this or that (just like they would with a human), but it's otherwise fully autonomous, acting entirely on its own, or:
2. It can't.
The problem with #2 is that we'll spend so much time and money in thinking we're just a little ways away from #1 that no one is in the pipeline. There's also the risk of treating #2 like it's #1, where we let it make decisions, with no repercussions, and we just watch things burn.
I suppose there's a third option - it can do everything, *plus* mentoring a junior so that a human is still learning things just in case.
Re: (Score:2)
4. We use AI to do tasks that it is good at and humans do tasks that they are good at.
I don't understand why everyone is trying so hard to make AI do things that are pretty impossible for it. Do they hate programming so much?
Re: (Score:2)
Broadly speaking, a lot of AI advocates believe AI can do every single job *except* their own.
In terms of hating programming, yes, actually a lot of the staunchest supporters hate programming. Because they can't do software development themselves but have somehow latched onto the business of software development. Business folks that carry a great deal of resentment that there are employees that have sufficient leverage over them to extract significant salaries and there's not a lot the business side can d
Re: How do you develop that skill (Score:3)
It's cute you think they'll keep the managers to do that.
The owners will hire cheap interns with AI experience to replace them.
And yes, eventually the owners will be jobless when the whole software as a service/product model falls apart. People will just ask their phones to do a thing, no app required.
Re: How do you develop that skill (Score:1)
"The managers laughed at the senior developers, thinking that their jobs could never be automated..."
Re: How do you develop that skill (Score:2)
I mean, the lower-level managers won't have a job to do once they have no one to manage. It'll be reduced to just the managers who are needed to figure out what the product is.
No need for managers for that... (Score:2)
You do not need managers to figure out what the product is.
Usually it is a work of senior programmers...
Managers are just politicians - buddies of higher managers...
Re: (Score:3)
> I mean, the lower-level managers won't have a job to do once they have no one to manage. It'll be reduced to just the managers who are needed to figure out what the product is.
Our management team are currently busy having AI write their single sentence reports into giant sprawling messes that the next up the chain uses another AI to summarize as a single sentence that may or may not resemble the original single sentence. They're already automating the main functions of their jobs, and they aren't bright enough to realize that's what they're doing. And all you need to decide what the product is is a sales manager and a marketing manager. The people that know the technical details
Re: (Score:2)
[1]Facebook is automating manager roles [wsj.com]. [2]Amazon is just getting rid of them [fortune.com].
[1] https://www.wsj.com/tech/ai/meta-to-create-new-applied-ai-engineering-organization-in-reality-labs-division-d41c4a69
[2] https://fortune.com/2025/03/04/amazon-ceo-andy-jassy-middle-managers-rto-gen-z/
Re: How do you develop that skill (Score:2)
Probably, but the fundamental issue is still "what is the right architecture?" That is defined by current and future requirements, something that is not easily reduced to a prompt. There are decisions and trade-offs that need to be made, which AI may continue to struggle with.
Re: (Score:2)
If it did work that well, then it would be similar to math education. You start by forbidding calculators, then allow only basic arithmetic calculators, then graphing calculators, then full computer aided math.
I think there are flaws in general, but to the extent it can work, the burden shifts more to education than to the workplace.
Re: (Score:3)
There seem to be at least four "AI strategies" (if throwing spaghetti at the wall can be called a strategy) that different companies are currently trying.
1) Get rid of, and stop hiring, juniors and interns, and give AI tools to your senior developers. At least you've now got capable people doing your design and guiding the AI, but indeed where does the next generation of seniors come from, especially if you want seniors that actually know your business and IT systems. Taken to its logical conclusion,
Re: (Score:2)
Perfectly said. AI is really a massive negative for programming. AI is great if you need boilerplate, or if you're stuck and want a suggestion, but outside of that, you should avoid it. If you don't know what the generated code means, or how it works, you can't debug it, you can't support it, and you can't claim it's safe. That's the danger of AI: it can generate a lot of code, but in my experience, that code needs to be carefully checked and commented.
I've said, in one form or another, that code should
Re: (Score:2)
I used to agree with you about "boilerplate only." But over the past few months, AI (my choice is GitHub Copilot) has gotten significantly better at non-boilerplate kinds of code. It used to be that AI spit out uncompilable code half the time. Now it almost always works right on the first try. I do still have to carefully inspect what it generates to make sure it's doing what I actually wanted. But most of the time, it does.
The biggest shortcoming I see now with AI, is that it doesn't know when it knows eno
Re: (Score:2)
This worry is not new with AI. Companies that produce software have long wanted experienced developers (for an entry level price, of course). This is also not new with programming. Trades like plumbing and electrical work, also want experienced workers. Doctors too. I mean, who in their right mind wants to be the very first patient to undergo surgery at the hands of a physician who just graduated from college?
Each of these professions has found ways to bring in and train new talent. Programming will also fi