News: 1753819355

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Devs are frustrated with AI coding tools that deliver nearly-right solutions

(2025/07/29)


According to a new survey of worldwide software developers released on Tuesday, nearly all respondents are incorporating AI tools into their coding practices — but they're not necessarily all that happy about it.

The [1]report, part of an annual study conducted by developer help site [2]Stack Overflow, reveals that 78.5 percent of respondents were already using AI developer tools at least "monthly or infrequently." Another 5.3 percent of this year's respondents planned to start using AI soon.

However, how these same respondents felt about the tools was considerably more divided. While around 60 percent said their view of the tools was either "favorable" or "very favorable," another 20 percent said they felt either "indifferent" or "unsure" about them, and the remaining 20 percent viewed the tools either unfavorably or very unfavorably.

The survey was based on 49,009 responses from across 160 countries, although the largest portion (20 percent) was based in the US. Answers were collected over a period from May 29 to June 23 of this year.

Respondents ranged in age from 18 to over 65 and included every type of programmer, from experienced professionals to those just learning how to code. Interestingly, though, use of AI tools was roughly equal across all levels of experience, averaging about 80 percent.

So why the seeming ambivalence toward these same tools? Because a great many developers feel the tools just don't work well.

Across the board, just 3.1 percent of respondents said that they "highly trust" results from AI tools, with the figure dropping to 2.5 percent among experienced developers. Those who were only learning to code had the most faith in AI, with a still-paltry 6.1 percent indicating high trust.

Overall, though, approaching AI with caution appeared to be the norm. Around 44 percent of respondents said that they were either "somewhat" or "highly" distrustful of AI, and even the 31 percent who said they were "somewhat trustful" of the tools weren't exactly exuding confidence.

Humans still necessary

"Complex tasks" were reportedly AI's worst weakness (although exactly what those tasks included was left to survey respondents' interpretation). AI was either "bad" or "very poor" at handling complex tasks according to 40 percent of respondents. A mere 4.4 percent said the tools handled complex tasks "very well," while 17 percent said they don't use AI for complex tasks at all.

Popular sentiment says that companies are increasingly using AI to generate code that human programmers would ordinarily write. Microsoft CEO Satya Nadella has been widely quoted as saying 30 percent of Redmond's code is already attributable to AI. But Stack Overflow's survey seems to indicate this isn't typical of the broader software industry. Only 17 percent of survey respondents said that they were "currently mostly" using AI to write code, while 29 percent said that they don't plan to use it for that purpose at all.

And " [7]vibe coding ," the fully AI-centric programming method that's [8]made headlines , is right out, with 76 percent of those surveyed responding either "no" or "no, emphatically."

What they do use it for is perhaps more enlightening. Replacing or supplementing traditional search engines was one popular choice, with 87 percent saying they used AI either for "searching for answers" or "learning new concepts or technologies" (or both).

Many developers' reservations about AI seem to stem from what the survey defined as "frustrations" with the tech. Chief among these was 66 percent of respondents' belief that AI produced "solutions that were almost right, but not quite." What's more, 16 percent lamented that "it's hard to understand how or why the code works."
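
As a purely hypothetical illustration of the "almost right" pattern (this example is mine, not the survey's): the happy path works on a quick read, and the miss only shows up on an edge case.

    # Hypothetical sketch only: a generated helper that looks correct at a glance...
    def average(values):
        return sum(values) / len(values)  # ...but raises ZeroDivisionError on an empty list

    # The "not quite" part is typically an unhandled edge case like this one:
    def average_fixed(values):
        return sum(values) / len(values) if values else 0.0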

This, in turn, generates further problems, with 45 percent of respondents griping that debugging AI-generated code is "more time-consuming" than for human-written code.

Then there's the use of AI agents – the new buzzword du jour – in software development, but these seem to be either poorly understood or not yet widely adopted. Fully 69 percent of respondents said that they don't currently use agents in their workflows, with 38 percent of those adding that they don't plan to. Furthermore, 41 percent said that agents have had very little positive effect on their productivity.

And it seems that there are still important uses for humans in the software development supply chain after all. Stack Overflow's survey reveals that 75 percent of developers would still seek a person for help in cases "when I don't trust AI answers." Ethical or security concerns about code called for human intervention according to 62 percent of respondents, and 58 percent would call upon a human "when I want to fully understand something." Similar majorities would prefer to work with people when learning best practices or simply, "When I'm stuck."

This should all come as welcome news to anyone hoping to enter — or merely survive — the modern software industry. While some companies are even suggesting that new applicants should be prepared to use AI [5]during the application process, Stack Overflow's survey indicates that the reality for most is somewhat different.

The survey showed just 4.3 percent of respondents claiming, "I don't think I'll need help from people anymore," thanks to AI. It seems, then, that for all the AI hype, anthropocentric workplaces are still here for a while. ®



[1] https://survey.stackoverflow.co/2025

[2] https://stackoverflow.com/

[3] https://www.theregister.com/2025/07/25/opinion_column_vibe_coding/

[4] https://www.theregister.com/2025/07/22/replit_saastr_response/

[5] https://www.theregister.com/2025/06/11/canva_coding_assistant_job_interviews/



Anonymous Coward

I needed a simple parser for something (just a few lines of regex and some manual character checking, really) and was too lazy to do it myself so I thought I'd give vibe coding a chance. I don't know if I just got unlucky with the task I'd chosen or what, but it took ages to get Copilot to generate working code, I ended up having to read and understand the code it did output just to formulate my own theories of what wasn't working so I could guide it, and in the end there was a very easy problem it simply couldn't fix (but pretended to by generating the exact same code that was already in the file, without changes), which I ended up having to implement manually (literally just adding an extra `c == ","` in an existing check. Yes, that simple.) Overall it was an arduous, grueling task and I felt like I was playing in a three-legged race. I don't know where people get the patience to vibe code regularly, I was miserable the entire time. Ultimately I'm glad the option exists and I'm glad it's something I can reach for when performing tasks I don't want to do (like mucking with regex or SQL), but it really is a last resort for me for most tasks.
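
A minimal sketch of the kind of one-line fix described above; the parser itself isn't shown in the comment, so the names and structure here are invented for illustration.

    # Hypothetical reconstruction: a hand-rolled tokenizer whose character check
    # needed one extra case. All names are made up; only the `c == ","` idea
    # comes from the comment above.
    def is_separator(c):
        # The fix amounts to also treating a comma as a separator,
        # i.e. adding the extra `c == ","` to an existing check.
        return c in (" ", "\t", ";", ",")

    def tokenize(text):
        tokens, current = [], []
        for c in text:
            if is_separator(c):
                if current:
                    tokens.append("".join(current))
                    current = []
            else:
                current.append(c)
        if current:
            tokens.append("".join(current))
        return tokens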

Anonymous Coward

It would be interesting to run this survey from a C-Level perspective. I'm sure the answers would be very different.

Where I am there is a 'Use AI or prepare to be shown the door' style policy. The problem is, no-one has shown us how to use AI effectively, so we're kinda stumbling around in the dark. I use Copilot as a fancy autocomplete mostly, but when I have tried it for serious tasks it's either led me down the wrong path, or taken me so long to prompt it properly that I may as well have written the bloody thing from scratch!

I'm on the older end of the team age range. Watching my younger contemporaries using AI reminds me of how my grandparents used to watch me program the VHS....

Code vs Codebase

vogon00

Full disclosure: I am NOT a fan of AI, and haven't even bothered to try it for code generation. That said:

I can see the attraction of having AI write your code for you, but what most people appear to have forgotten is that you still have to debug it. IMO, debugging real human code (aka 'Actual Intelligence' code) rather than the artificially generated stuff must be easier, as there will be fewer errors or 'hallucinations' to deal with [1].

I can see that AI generation may be a way to accelerate production of a codebase, but not of a trusted codebase. Getting a codebase to trusted and mature status requires commitment and effort (AKA testing and/or debugging). This may be an 'old-skool' attitude, but AI ain't anywhere near ready for use until it can write the code AND reasonable unit tests. Why do I say this? Because you have to understand the code, or the intentions behind it, before you can write good unit tests.
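
As a small, hypothetical sketch of that last point (nothing to do with the poster's actual code): a good unit test encodes intent, such as whether range bounds are meant to be inclusive, and that intent can't be recovered from the implementation alone.

    # Hypothetical example: the bounds-inclusivity expectation below comes from
    # the spec/intent, not from reading the implementation.
    import unittest

    def clamp(value, lo, hi):
        """Clamp value into the inclusive range [lo, hi]."""
        return max(lo, min(hi, value))

    class TestClamp(unittest.TestCase):
        def test_inside_range(self):
            self.assertEqual(clamp(5, 0, 10), 5)

        def test_bounds_are_inclusive(self):
            # Intent: lo and hi are themselves allowed values.
            self.assertEqual(clamp(0, 0, 10), 0)
            self.assertEqual(clamp(10, 0, 10), 10)

    if __name__ == "__main__":
        unittest.main()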

I'm not even going to look into the repeatability of AI generated code. I bet there will be subtle or not-so-subtle differences every time you ask for the same result with an identically-phrased request.

[1] Where does this come from? I'm porting some legacy code (cpp, bash) over to Python and it's hard to keep the code working when you're inexperienced with Python like wot I is.

Haste makes waste.
-- John Heywood