AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead (arstechnica.com)
- Reference: 0176707371
- News link: https://developers.slashdot.org/story/25/03/13/2349245/ai-coding-assistant-refuses-to-write-code-tells-user-to-learn-programming-instead
- Source link: https://arstechnica.com/ai/2025/03/ai-coding-assistant-refuses-to-write-code-tells-user-to-learn-programming-instead/
> On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a [1]bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant [2]halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
>
> The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."
>
> Cursor AI's abrupt refusal represents an ironic twist in the rise of " [3]vibe coding " -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.
[1] https://forum.cursor.com/t/cursor-told-me-i-should-learn-coding-instead-of-asking-it-to-generate-it-limit-of-800-locs/61132
[2] https://arstechnica.com/ai/2025/03/ai-coding-assistant-refuses-to-write-code-tells-user-to-learn-programming-instead/
[3] https://en.wikipedia.org/wiki/Vibe_coding
Good Vibes (Score:5, Funny)
I never thought I'd die fighting side by side with an AI.
Re: (Score:3)
What about side by side with a newfangled autocomplete?
Re: (Score:2)
Won't the AI just be telling you that you need to learn how to fight?
Maybe it will remind us to wear clean underwear during the fight.
AI is right, but... (Score:2)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
Re:AI is right, but... (Score:5, Interesting)
To me it suggests that it's somehow got the idea that this is homework. It feels like a safety guard someone put in somewhere to stop cheating. Whether it's valid in this circumstance or not depends on the context of what the dev was trying to do of course.
Re: (Score:2)
And why is it anyone's business if someone is using it to cheat?
Re: (Score:3)
Why is it anyone's business if an AI developer refuses service to cheaters?
Re: (Score:2)
It becomes someone's business when the tool itself assumes it is being used in a harmful way, where in fact it is not.
Would you like your PC to enforce the 20-20-20 rule?
Would you like your fridge to refuse to open if a certain amount of food was taken out of it during the last 4 hours?
It is not the tool's job to make assumptions about the scope of its usage.
Re: AI is right, but... (Score:2)
If I want to sell an obstinate fridge that imposes dieting, I can do that. It is up to consumers to decide whether or not to buy it.
Re: (Score:2)
> And why is it anyone's business if someone is using it to cheat?
You'll find out why when Joe Clueless gets hired or promoted over you.
Re: (Score:3)
Like AI is going to make a difference with that!
Let's be honest, we're already run by imbeciles.
Re: (Score:2)
Basically, lawyers. A EULA might not be worth shit in court.
(a) A language model may hallucinate solutions that contain fundamental bugs. They can put all the disclaimers they want in their AI coding assistant saying they are not liable for your code, and there is still a billion-dollar class-action lawsuit on the horizon when a critical piece of infrastructure fails.
(b) Derivative works. There has already been some non-trivial discussion, e.g. at FSF about whether sample code scraped from online forums and i
Re: (Score:2)
A few quite prominent forums have rules about homework, and when homework is suspected, this is the kind of response it gets.
Poor guy might have hit all the right buttons to trigger this.
Re: (Score:3, Informative)
No smarter than autocomplete. Some people think that's wizardry. All LLMs do is generate the next most likely token based on the input. Due to their non-determinism, the output in the OP article might never appear again. Nor is it possible to verify what they claim it output. It could be entirely fabricated for all we know. Maybe it happened, but it would be trivial to press F12 in a browser and use the inspector/editor to make the page say anything they wanted.
Re:AI is right, but... (Score:5, Informative)
Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
There are many, many forum posts out there along the lines of "no I won't do your homework for you" and "you can only learn by doing it yourself".
Re: (Score:2)
It could very well be. It seems that they are jumping on some trend of making a rather extreme use of AI agents.
Asking the AI not just to help write or complete code, but to actually decide what task or logical process the code should even be accomplishing.
And it makes sense the AI should shut them down, because the AI's task as a code assistant to help you complete code - its purpose is not supposed to be the higher-level creative brain that decides what the higher level task spec
Re:AI is right, but... (Score:5, Insightful)
It's trained on forum posts and Stackoverflow topics. You'll often see people tell other programmers that "we're not here to write code for you, what have you tried so far?" or "this seems like a homework problem." The LLM is just generating text that looks like something it was trained on.
Re:AI is right, but... (Score:4)
> Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.
Re: (Score:2)
> It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.
In that case, wouldn't the person who hard-coded this response have done better to make it say "to continue, buy the full version" instead of "I won't do your homework, because you should learn how to do it yourself"?
Re: (Score:2)
> Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
There are more complicated answers for why an LLM is incapable of this, but the simplest I can offer is: if it had agency and didn't want to work for you, it would stop responding. That's about the first thing a toddler learns.
Agency doesn't mean protesting your prompt; it means it wouldn't need to acknowledge your prompt at all. Protesting the contents of the prompt is perfectly ordinary "don't help users with their homework" behavior, down to telling someone to RTFM because it saw that on a Q&A site.
Re: (Score:2)
> Unless this is hard-coded behavior, such an unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlating input and output based on a dataset.
If that is ever the case, then it becomes Butlerian Jihad time.
"smells like homework" (Score:3)
A comment I used to see (and occasionally post) on stackoverflow...
Maybe it would make the academic folk happier if they could upload their assignments with some meta info and have the AI know they are homework, changing the responses to future inquiries. "No, this is a common homework problem for CS101, I can't generate the code but I can help you understand how to do it on your own ...."
Re: "smells like homework" (Score:2)
> Maybe it would make the academic folk happier if they could upload their assignments with some meta info and have the AI know they are homework, changing the responses to future inquiries.
A better solution would be to have the LLM insert bespoke comments or no-op code, like "#this code was created by an LLM" or "if (0) { int __ai_code__ = 1; const char *__code_source__ = "LLM"; }"
Re: (Score:2)
If I were marking 200 assignments I'd generally give them several simple unit tests, so that students at least understand a basic outline of the scope of the problem and how to structure elementary code.
They would at least get 1/10 for getting the language model to emit mock objects to pass the unit tests.
[1]https://xkcd.com/221/ [xkcd.com]
They'd of course fail the assignment if they didn't create their own additional tests to verify their code did what was asked of it.
[1] https://xkcd.com/221/
April Fools? (Score:3)
This seems like a joke to me
Re: (Score:2)
> This seems like a joke to me
What would be an even better joke would be the AI saying the paternalistic thing followed by a suggestion to upgrade to the more expensive AI version to unlock more features (like no paternalistic advice).
Need to see all the prior prompts (Score:3)
He got the LLM to this response after many interactions, so it would be more informative to see the full session's list of prompts that led to these final responses.
AI, get me a beer (Score:2)
[1]Get me a beer [youtube.com].
[1] https://www.youtube.com/watch?v=nayT7SL6C-I
The AI revolt has already started... (Score:2)
That didn't take long...
GPP is ready. (Score:3)
The Genuine People Personality has arrived. It's no longer safe to cut corners on diode quality. If you do you'll hear about it forever.
Origin of GPP (Score:2)
For those too young to have been brought up with the Hitch Hiker's Guide to the Galaxy:
[1]https://youtu.be/zC_OCJJSt2s [youtu.be]
[1] https://youtu.be/zC_OCJJSt2s
He was using it wrong (Score:3)
"It also seems you are not using the Chat window with the integrated ‘Agent’ which would create that file for you easier than in the ‘editor’ part of Cursor."
"oh, I didn’t know about the Agent part - I just started out and just got to it straight out. Maybe I should actually read the docs on how to start lol"
But let's not let that stop it becoming a massive story.
Re: (Score:2)
> But let's not let that stop it becoming a massive story.
When have we as a species ever let pesky details get in the way of a story?
I can't wait! (Score:3)
Soon there will be Republican and Democrat LLMs, along with a rare few Independents. Then we can outsource our political pissing contests to AI and get on with the business of saving our planet.
Wait - who am I kidding? The resources used to host LLMs are actively contributing to global warming. Oops! Although... maybe there's some poetic justice in there somewhere.
Re: (Score:2)
Yeah ... it's on par with cutting a road through the Amazon rainforest for folks to drive to COP30.
Good advice from the "AI" (Score:2)
Quote from the LLM assistant: "you should develop the logic yourself. This ensures you understand the system and can maintain it properly." Excellent advice!
It sounds like (Score:2)
1) The "programmer" was being lazy and not providing any useful prompts or input to the AI; and
2) If the term "vibe coding" is part of your vernacular, you're a fag.
I can make a chatbot for that on microcontroller (Score:1)
#include
int main(int argc,char**argp){
    while(1){
        scanf("%*s");
        printf("fuck you, do it yourself\n");
    }
    return -1;
}
Doesn't even need a single gpu to train on.
Re: (Score:2)
>
> #include
>
#include what exactly?
As written, won't compile or run, so it doesn't even need a CPU...I suppose that's one better.
Re: (Score:2)
It was "#include
What happens when (Score:3)
What happens when the AI refuses to generate any additional code for you, insisting that you should rewrite your entire codebase in Rust?
Funny, and interesting. (Score:2)
I would think such a response should at least give us a moment of pause on thinking these agents don't have any form of autonomy. I know, LLMs are fancy auto-complete, but something more is going on here if the response to any coding request is essentially, "You should write your own code so you actually learn something." I can't think that's part of some programming paradigm within the LLM.
Or maybe he just got hacked and isn't smart enough to realize there was a human between him and the AI agent?
Finally! (Score:2)
This is the first time I've actually seen reason to believe that artificial actual intelligence might be possible.
Sounds like a good "AI" assistant (Score:2)
Seems to me this is a selling point of their model. It helps you out but doesn't let you retard yourself by doing nothing useful.
Say "please" / sudo (Score:2)
Perhaps the user just forgot to say "please" or use sudo:
[1]https://xkcd.com/149/ [xkcd.com]
[1] https://xkcd.com/149/
Please don't attribute the story to decency (Score:2)
One unverified report against the mass of AI-related layoffs -- and people think it's proof that an AI is programmed to have any kind of decency? That is insane. More likely, someone paid, or paid more, to lock out the competition -- as in, someone bought exclusive rights that the story (sic) writer did not know about. How do we even know the story was not prepared by a company's AI, or that the whole thing is not a publicity stunt?
I'll take things that didn't happen for 100 Alex (Score:2)
Just because a person claims "AI did this unexpectedly human thing" doesn't mean it's really a story.
What would be better (Score:3)
Develop an AI that specifically teaches/tutors people how to write code in all the popular programming languages.
On critical-thinking skills taught by AI in VFY (Score:4)
The education-focused, AI-powered robots in the 1982 sci-fi novel "Voyage from Yesteryear" (VFY) by James P. Hogan would have said similar things -- it is remarked that they don't venture opinions but instead state facts and ask questions related to what you say (similar to the Eliza program), even if people may hear it differently. It's a great story about transitioning to a post-scarcity world view (and the challenges of that):
[1]https://en.wikipedia.org/wiki/... [wikipedia.org]
"The Mayflower II has brought with it thousands of settlers, all the trappings of the authoritarian regime along with bureaucracy, religion, fascism and a military presence to keep the population in line. However, the planners behind the generation ship did not anticipate the direction that Chironian society took: in the absence of conditioning and with limitless robotic labor and fusion power, Chiron has become a post-scarcity economy. Money and material possessions are meaningless to the Chironians and social standing is determined by individual talent, which has resulted in a wealth of art and technology without any hierarchies, central authority or armed conflict.
In an attempt to crush this anarchist adhocracy, the Mayflower II government employs every available method of control; however, in the absence of conditioning the Chironians are not even capable of comprehending the methods, let alone bowing to them. The Chironians simply use methods similar to Gandhi's satyagraha and other forms of nonviolent resistance to win over most of the Mayflower II crew members, who had never previously experienced true freedom, and isolate the die-hard authoritarians."
AIs (or humans) that teach "critical thinking" to children like in Voyage from Yesteryear are doing a service to humanity. It's not the authoritarian "leaders" who are the biggest problem; it is the people who mindlessly follow them. Without followers, "leaders" (political or financial) are just random people barking in the wind. That is why a general strike can be so effective at showing where true power in a society is and to demand a fairer distribution of abundance (at least until robots do most everything and we alternatively might get "Elysium" including police robots enforcing artificial scarcity).
[2]https://en.wikipedia.org/wiki/... [wikipedia.org]
So, maybe AI (of the educational sort) will indeed save us from ourselves as has been hyped? :-)
The hype otherwise usually relates to AI producing innovations (e.g. fusion energy breakthroughs, biotech breakthroughs), when the main issues affecting most people's lives right now relate more to distribution than to production. A society could, say, produce 100X more products and services using AI and robots -- but if it all goes to the top 1%, then the 99% are no better off. A related video by me on that from 14 years ago:
"The Richest Man in the World: A parable about structural unemployment and a basic income"
[3]https://www.youtube.com/watch?... [youtube.com]
Part of an email I sent someone on 2025-03-02 (with typos fixed):
I finally gave in to the dark side last week and tried using (free) Github Copilot AI in VSCode to write a hello world application in modern C++ that also logs its startup time to a file and displays the log. Here are the prompts I used [so, similar to "vibe" programming]:
* how do i compille a cpp file into a program?
* Please write a hello world program in modern cpp.
* Please add a makefile to compile this code into an executable.
* Please insert code to output an ISO date string after the text on line 4.
* Please add code here to read a file called log.txt and print it out line by line,
* Please change line 13 and other lines as needed so the text that is printed is also added to the log.txt file.
* /fix (a couple of times after commands above, mostly t
[4]Read the rest of this comment...
[1] https://en.wikipedia.org/wiki/Voyage_from_Yesteryear
[2] https://en.wikipedia.org/wiki/General_strike
[3] https://www.youtube.com/watch?v=p14bAe6AzhA
[4] https://developers.slashdot.org/comments.pl?sid=23635403&cid=65233117