A Coder Considers the Waning Days of the Craft (newyorker.com)
- Reference: 0172244377
- News link: https://developers.slashdot.org/story/23/11/15/1411231/a-coder-considers-the-waning-days-of-the-craft
- Source link: https://www.newyorker.com/magazine/2023/11/20/a-coder-considers-the-waning-days-of-the-craft
> Yes, our jobs as programmers involve many things besides literally writing code, such as coaching junior hires and designing systems at a high level. But coding has always been the root of it. Throughout my career, I have been interviewed and selected precisely for my ability to solve fiddly little programming puzzles. Suddenly, this ability was less important.
>
> I had gathered as much from Ben (friend of the author), who kept telling me about the spectacular successes he'd been having with GPT-4. It turned out that it was not only good at the fiddly stuff but also had the qualities of a senior engineer: from a deep well of knowledge, it could suggest ways of approaching a problem. For one project, Ben had wired a small speaker and a red L.E.D. light bulb into the frame of a portrait of King Charles, the light standing in for the gem in his crown; the idea was that when you entered a message on an accompanying Web site the speaker would play a tune and the light would flash out the message in Morse code. (This was a gift for an eccentric British expat.) Programming the device to fetch new messages eluded Ben; it seemed to require specialized knowledge not just of the microcontroller he was using but of Firebase, the back-end server technology that stored the messages. Ben asked me for advice, and I mumbled a few possibilities; in truth, I wasn't sure that what he wanted would be possible. Then he asked GPT-4. It told Ben that Firebase had a capability that would make the project much simpler. Here it was -- and here was some code to use that would be compatible with the microcontroller.
>
> Afraid to use GPT-4 myself -- and feeling somewhat unclean about the prospect of paying OpenAI twenty dollars a month for it -- I nonetheless started probing its capabilities, via Ben. We'd sit down to work on our crossword project, and I'd say, "Why don't you try prompting it this way?" He'd offer me the keyboard. "No, you drive," I'd say. Together, we developed a sense of what the A.I. could do. Ben, who had more experience with it than I did, seemed able to get more out of it in a stroke. As he later put it, his own neural network had begun to align with GPT-4's. I would have said that he had achieved mechanical sympathy. Once, in a feat I found particularly astonishing, he had the A.I. build him a Snake game, like the one on old Nokia phones. But then, after a brief exchange with GPT-4, he got it to modify the game so that when you lost it would show you how far you strayed from the most efficient route. It took the bot about ten seconds to achieve this. It was a task that, frankly, I was not sure I could do myself.
>
> In chess, which for decades now has been dominated by A.I., a player's only hope is pairing up with a bot. Such half-human, half-A.I. teams, known as centaurs, might still be able to beat the best humans and the best A.I. engines working alone. Programming has not yet gone the way of chess. But the centaurs have arrived. GPT-4 on its own is, for the moment, a worse programmer than I am. Ben is much worse. But Ben plus GPT-4 is a dangerous thing.
[1] https://www.newyorker.com/magazine/2023/11/20/a-coder-considers-the-waning-days-of-the-craft
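The article doesn't reproduce the code GPT-4 suggested, nor say exactly which Firebase capability it pointed to, but the general shape of the project (poll a Firebase Realtime Database for the latest message, then flash it in Morse) can be sketched roughly as below. The database URL, the data layout, and the set_led stub are placeholders, not the project's actual details.

```python
import time
import requests

# Hypothetical Firebase Realtime Database endpoint; the real project's
# database name and data layout are not described in the article.
DB_URL = "https://example-project.firebaseio.com/messages.json"

MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".", "F": "..-.",
    "G": "--.", "H": "....", "I": "..", "J": ".---", "K": "-.-", "L": ".-..",
    "M": "--", "N": "-.", "O": "---", "P": ".--.", "Q": "--.-", "R": ".-.",
    "S": "...", "T": "-", "U": "..-", "V": "...-", "W": ".--", "X": "-..-",
    "Y": "-.--", "Z": "--..",
}

UNIT = 0.2  # seconds per Morse "dit"

def set_led(on):
    """Stand-in for driving the actual GPIO pin on the microcontroller."""
    print("LED ON" if on else "LED OFF")

def flash(message):
    """Blink a message in Morse: dit = 1 unit, dah = 3, standard gaps."""
    for word in message.upper().split():
        for letter in word:
            for symbol in MORSE.get(letter, ""):
                set_led(True)
                time.sleep(UNIT if symbol == "." else 3 * UNIT)
                set_led(False)
                time.sleep(UNIT)      # 1-unit gap between symbols
            time.sleep(2 * UNIT)      # extra gap between letters (3 total)
        time.sleep(4 * UNIT)          # extra gap between words (7 total)

def poll_forever():
    """Poll the database and flash any message we haven't seen yet."""
    last_seen = None
    while True:
        latest = requests.get(DB_URL).json()  # e.g. {"text": "hello"}; shape is hypothetical
        if latest and latest.get("text") != last_seen:
            last_seen = latest["text"]
            flash(last_seen)
        time.sleep(10)

if __name__ == "__main__":
    poll_forever()
```

On the real microcontroller the HTTP call would go through something like MicroPython's urequests, set_led would toggle a GPIO pin, and Firebase's streaming listeners would avoid polling altogether, which may well be the simplification GPT-4 suggested.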
Ah, those were the days... (Score:2)
We spent days writing subroutines to work out when Easter would fall (the kind of computus sketched below) or to shave a little time off a sorting routine.
Some of us still had slide rules.
Good times but I'm glad they're gone.
It's called 'progress'.
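For the curious, that Easter computation is the classic computus exercise. A minimal sketch of the widely published anonymous Gregorian (Meeus/Jones/Butcher) algorithm, in Python rather than whatever we wrote it in back then:

```python
def gregorian_easter(year):
    """Return (month, day) of Easter Sunday for a Gregorian year,
    per the anonymous Gregorian (Meeus/Jones/Butcher) algorithm."""
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # intermediate computus quantities
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month = (h + l - 7 * m + 114) // 31
    day = (h + l - 7 * m + 114) % 31 + 1
    return month, day

# e.g. gregorian_easter(2024) -> (3, 31), i.e. March 31st
# e.g. gregorian_easter(2025) -> (4, 20), i.e. April 20th
```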
Re: (Score:3)
The problem with progress is that some skills are lost. You can see it in engineering now. You might think that knowing how to choose and write something like a sorting routine will soon be redundant, but at some point someone has to have those skills, if only to check whether the AI-of-the-week is talking nonsense when it does the job for you.
Re: (Score:2)
I guess this is where documentation comes in.
I know that in my own case, I rely heavily on my own documentation if I ever need to troubleshoot something that I did a year or more ago...
Re: (Score:2)
Documentation is necessary, but it's not a substitute for a real, visceral understanding of a problem.
Replace GPT-4 with Compiler (Score:5, Insightful)
Most of his uses of GPT-4 could be replaced with the concept of the compiler 70 years ago. Coding today is so much different than it was before the first compilers, and coding tomorrow will be much different after these recent advances in generative AI. But I really don't see programmers going away. Or even the true core skills of programmers going away (and I don't consider knowing language syntax one of those core skills).
I have spent too much time watching non-developers muck around with low-code platforms to believe non-developers are going to be successful using generative AI to write software applications. Instead I think developers will use generative AI to be 10x more productive than they are today, just like we are 10x more productive than our predecessors 70 years ago.
Re: (Score:2)
> Most of his uses of GPT-4 could be replaced with the concept of the compiler 70 years ago. Coding today is so much different than it was before the first compilers, and coding tomorrow will be much different after these recent advances in generative AI. But I really don't see programmers going away. Or even the true core skills of programmers going away (and I don't consider knowing language syntax one of those core skills).
> I have spent too much time watching non-developers muck around with low-code platforms to believe non-developers are going to be successful using generative AI to write software applications. Instead I think developers will use generative AI to be 10x more productive than they are today, just like we are 10x more productive than our predecessors 70 years ago.
I concur: LLMs are basically the next stage of compiler. They can generate good code, probably one day better than almost any human programmer can, but they still need humans to drive them.
Re: (Score:2)
> Most of his uses of GPT-4 could be replaced with the concept of the compiler 70 years ago. Coding today is so much different than it was before the first compilers, and coding tomorrow will be much different after these recent advances in generative AI. But I really don't see programmers going away. Or even the true core skills of programmers going away (and I don't consider knowing language syntax one of those core skills).
> I have spent too much time watching non-developers muck around with low-code platforms to believe non-developers are going to be successful using generative AI to write software applications. Instead I think developers will use generative AI to be 10x more productive than they are today, just like we are 10x more productive than our predecessors 70 years ago.
And I'm certain that marketing will be able to soak up that 10x productivity increase with ever escalating fantasy scenarios they expect us to make real ten seconds before they had the thought. Sigh.
Re: (Score:2)
> And I'm certain that marketing will be able to soak up that 10x productivity increase with ever escalating fantasy scenarios they expect us to make real ten seconds before they had the thought. Sigh.
Well, this is why I don't think there will be a 90% reduction in developer head count. I believe the world could benefit from 10x more software development than happens today, and once developers (and BAs, QA, etc.) are 10x more efficient we will finally be able to write all that software which isn't cost-effective to write today.
We could reach a day when a corporate ERP/CRM system written two years ago is considered legacy software, because that's how often software gets rewritten and refactored.
We used to pay people to feed punch cards (Score:3)
into a machine. Those are jobs that don't exist anymore.
I'm old enough to remember when Bill and Hillary Clinton talked about the transition to a service-sector economy.
I'm smart enough to know that service sector jobs pay like crap.
Years later I came across [1]this article here [businessinsider.com] that linked a study showing 70% of middle class job losses were due to automation, not outsourcing. That's your "service sector economy" right there. McJobs as far as the eye can see.
Clinton's advisors (both Clintons) saw
[1] https://www.businessinsider.com/automation-labor-market-wage-inequality-middle-class-jobs-study-2021-6
Re: (Score:2)
> Years later I came across this article here [businessinsider.com] that linked a study showing 70% of middle class job losses were due to automation, not outsourcing. That's your "service sector economy" right there. McJobs as far as the eye can see.
While it most likely is true that this next wave of automation will push more middle-class individuals into the working class or into poverty, let's not forget that most people leaving the middle class have moved up, into the upper middle class and upper class. [1]Pew Research [pewresearch.org] found that while the middle class shrank from 61% of the population in 1971 to 50% in 2021, 64% of those who left moved to a higher income level while 36% moved to a lower one. Overall it was a net positive.
As long as we do a better
[1] https://www.pewresearch.org/short-reads/2022/04/20/how-the-american-middle-class-has-changed-in-the-past-five-decades/
Re: (Score:2)
Er, the transition to a service-sector economy was being discussed in the early 1970s, if not before.
Next Level Search (Score:4, Insightful)
If you read this, the AI still required someone to identify the problem and propose the solution. All the AI did was take much of the tedium out of developing that solution: work that would otherwise take hours of searches and filtering out irrelevant content for one reason or another.
If your job is "here, code this up", then you should be worried.
If your job is to come up with "this" then you are probably OK.
Re:Next Level Search (Score:4, Insightful)
Ultimately, though, LLMs are really just a Fancy Dictionary plus a Next Level Search Engine, and unfortunately both are highly error-prone and suffer from a huge problem of subtle lies. Unlike the OP, I find it makes a LOT of rookie mistakes and is nowhere near "senior dev" level. More like a newbie with good Google skills.
Re: (Score:1)
That explains why Google search is getting worse and worse. They are crippling it in order to sell a working search engine in the near future.
When you can ask it... (Score:3)
... to write the code for an improved version of itself, and it works, you'll know the game is up, and not just for us devs.
writer, not programmer (Score:3)
This guy is a writer, not a programmer. If he thinks AI is coming for programming, wait til he realizes what it has planned for writing. AI can certainly do self-promotion as well as he does.
And I don't know what a centaur is in this context, but while they may well have arrived for him, they haven't for programmers.
Re: (Score:2)
He's a coder, too. His blog has some programming topics on it and links to some code, but there doesn't seem to be anything showing he does more than dabble.
Super-turbo (Score:2)
He says Ben plus GPT-4 is dangerous. The real dangerous combo is the author, a seasoned developer, using GPT-4.
He is designing the architecture, the workflow, the integration points. Translating the requirements into designs. Knowing when the advice is wrong and how to fix it. Having it do the repetitive, boring bits ("write the unit tests and documentation for this function").
A seasoned developer plus tools like this gets you closer to your 10x or 100x engineer.
A huge step forward for hackers (Score:2)
The peculiar thing about AI is that we never demand that it prove the correctness of its own answers. We'll let AI write the code and the tests, then accept on faith that it didn't completely hallucinate the whole process of software development. It doesn't do this maliciously; AI is just incredibly incompetent and needs someone or something looking over its shoulder if we use it for anything important.
We'll probably use AI during the design and testing phases of software, on the assumption that if there are mistakes we'll catch them in the human side of development.
Re: (Score:2)
> The peculiar thing about AI is that we never demand that it prove correctness of its own answers.
Well, we've definitely demanded this of theorem provers, which are themselves firmly in the AI domain. So the "never" part seems not quite right there.
Re: (Score:1)
> So the "never" part seems not quite right there.
But you think Elon Musk is a literal Tony Stark, so I'm not too worried.
Re: (Score:2)
Have you considered seeking professional psychiatric help?
Re: (Score:2)
> We'll probably use AI during the design and testing phases of software, on the assumption that if there are mistakes we'll catch them in the human side of development.
I doubt that is true, because we will also be using these generative AIs to build our testing frameworks. And attackers will use them to create their attacks. How much more secure will our code be when a penetration-testing red team is running a battery of tests after every commit?
I don't think we will depend on human developers to catch most generative AI mistakes. I think we will need to rely on generative AI QA and pen test applications to do most of that. Because if there ever is a massive increase i
Re: (Score:2)
I think in business, we'll assume that human engineers will catch the mistakes that AI makes. Not that it will actually work. Maybe I'm just very cynical.
Don't get it (Score:2)
If you Google "Firebase DB Documentation" and read it, you should be able to figure out your problem and, in the process, learn the bigger picture of what you can and can't do with Firebase DB. If you can't do that, you might not be a very good coder. Replaced by AI? Then good riddance!
If the AI has access to proprietary "Firebase DB" documentation or proprietary "Firebase DB" source code that isn't available via Google and uses it to give you a solution, you have a different problem.
Good coders tend to be
Re: (Score:2)
Pretty much sums up most "welp, coders are done" posts.
The author asserts himself to be a senior programmer, and thus if it's hard for him, it's hard for anyone. He didn't know the documentation for some third-party component off the top of his head, and the fact that GPT could get him the same information as the readily available documentation seems like magic. Then, to reinforce the point, he types a few cliché "programming 101" challenges that are all over the internet and is amazed that it works. Assert that it can
Statistical analysis (Score:2)
Large language models require training on data sets, which automatically makes them great at solving problem types that have been solved a lot in the past, even if the specifics have not. A microcontroller driving lights (be it a desk lamp, a lava lamp whose brightness is proportional to system load, or all the lights on the side of a building for playing 80s video games) is a common problem type.
The "ideal" would be to have an online library from which you could pull coding libraries which GPT-4 could
Re: (Score:2)
And even if they can get over the hurdle of the copyright office saying AI-generated content can't be copyrighted, there is still the issue of GPT being trained on data that can't be re-licensed arbitrarily.
Re: (Score:2)
This is the main issue, I would think. If you run the AI on the subset of your own corporate code base that has no external license issues, then having it help out a new project would be great. If you are relying on some search engine's returns, it is almost certain that the code it examines to come up with the answer will be GPL'd or, if you're lucky, BSD-licensed. So even if it's correct, it still isn't usable unless you know which license it was derived from and are willing to let the terms of that license apply.
He's not just a "coder", but also an "author" (Score:2)
And if his code is anything like his writing, I can understand why he thinks his time is over.
Re: (Score:3)
You do understand he's writing this for the New Yorker, right? The article isn't aimed at a technically astute audience, nor at an uneducated one. His writing is fine for the context in which he's writing.
Re: (Score:2)
I don't know in what circles long-winded whining, peppered with inappropriate analogies and a confession of his inability to "master" the new operator in C++, passes for "fine", but it is hardly among the "educated".
That person isn't a programmer, he's a layman. He isn't a professional writer, he's a blogger.
Take him away, he's got nothing to say, get out, king of the Jews.
Re: (Score:2)
You've never read the New Yorker. :)
Re: (Score:2)
However, he asserts credibility and makes statements that are intended to influence that less technically astute audience. That audience is likely to include decision makers that will take him at his word and make questionable decisions.
The entire premise of the article is useless if the author's claimed technical acumen is misrepresented. And if that reality doesn't matter to the audience, then why would they care about the article at all?
"Ben plus GPT-4 is a dangerous thing" (Score:3)
Running code spit out by an AI without fully understanding it is like following your GPS and driving into a lake.
AI can be a great tool but you still have to know what you are doing.
Even if AI gets to the point where it can code as well as or better than most humans, you still need to be able to understand the code, not trust it blindly. We don't even trust each other to write code; that's why we have code reviews. The craft isn't going anywhere any time soon.
Re: (Score:1)
If GPT does the work 90% of the time, then people will use it blindly.
Again, the examples... (Score:3)
Are in the domain of likely having a verbatim result in a Google search...
The Snake game example comes up with dozens and dozens of ready-to-go samples, and adding 'optimal path' produces dozens more. That "wow" result isn't that wow, because it's a done-to-death tutorial example in virtually every programming course ever.
This is consistent with my experience: if Google can turn up a result, the GPT has a decent shot at representing it in its answer. If Google results come up empty, so too does GPT (except it will often come up with *something* that isn't really relevant but might *look* credible). It might do a nicer job of synthesizing two requests without a human having to think about it, but the human thought involved is generally pretty trivial, and it is balanced out by the human thought needed to audit the GPT result (which will often look equally confident for accurate and wildly inaccurate results).
Now, there are a lot of coders who I don't think can manage even to copy/paste Google search results, and maybe this is a boon to such folks, but it's hardly a waning craft just yet, at least for things that aren't copy/paste.
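To be concrete about how standard that "most efficient route" feature is: the strayed-from-optimal report is essentially a grid breadth-first search from the snake's head to the food, something along these lines (my own rough sketch, not the article's or GPT-4's code; the snake's body is ignored unless passed in as blocked cells):

```python
from collections import deque

def shortest_path_length(grid_w, grid_h, start, goal, blocked=frozenset()):
    """Breadth-first search on a grid_w x grid_h board.
    Returns the minimum number of moves from start to goal, or None if unreachable.
    `blocked` can hold cells occupied by the snake's body."""
    if start == goal:
        return 0
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (x, y), dist = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) == goal:
                return dist + 1
            if (0 <= nx < grid_w and 0 <= ny < grid_h
                    and (nx, ny) not in seen and (nx, ny) not in blocked):
                seen.add((nx, ny))
                frontier.append(((nx, ny), dist + 1))
    return None

# At game over, the "inefficiency" report is just the moves actually taken
# minus the sum of the shortest-path lengths to each piece of food as it appeared.
```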
I don't give my senior devs specs to code (Score:2)
I give my senior devs problems to solve; the "how" is never spelled out for them. I agree with the article in a lot of ways.
I am encountering a limitation in my own ability to prompt-engineer. I've been trying for a while to get a snippet of Java that does the following.
Given a 1 dimensional list of objects, generate a two dimensional grid of a specified height and width, populated either left to right, then top to bottom, or top to bottom, left to right, depending on a provided flag. The final populated row or column sho
Re: (Score:2)
In no way did I say that. This is my own specific example - not related to my work or my teams. I would never give this to a senior dev. It's absolutely a junior problem.
I suggested it as a separate thing, my own little attempt at prompt engineering. I thought maybe somebody might jump on it to help me engineer the prompt.
The solution is easy. The prompt is turning out to be hard.
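For what it's worth, the non-truncated part of that spec pins down to a few lines. Here is a sketch in Python rather than Java, just to nail the behavior before fighting the prompt; the requirement about the final row or column is cut off above, so it isn't handled:

```python
def to_grid(items, width, height, row_major=True, fill=None):
    """Lay a flat list out on a width x height grid.
    row_major=True  -> fill left to right, then top to bottom.
    row_major=False -> fill top to bottom, then left to right.
    Unused cells are padded with `fill`."""
    grid = [[fill] * width for _ in range(height)]
    for index, item in enumerate(items[: width * height]):
        if row_major:
            row, col = divmod(index, width)
        else:
            col, row = divmod(index, height)
        grid[row][col] = item
    return grid

# to_grid(list("abcdef"), 3, 2)                   -> [['a','b','c'], ['d','e','f']]
# to_grid(list("abcdef"), 3, 2, row_major=False)  -> [['a','c','e'], ['b','d','f']]
```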
It's industrialization... (Score:1)
Consider how much manpower (slaves?) it once took to create one brick or one iron kettle. With current technology, hundreds of the same item can be mass-produced, using a hundredth of the manpower, in the time it once took to create a single unit. An artisan who knows the history of the craft will oversee and refine the process: your functional "coder". Most other crafters involved even a generation ago are simply redundant to this process...
Waning days of the craft.... (Score:2)
And robots took all the warehouse and burger-flipping jobs ten years ago.
And PCs died off twenty years ago.
And we would all have flying cars by now.
Like every fad/trend/yada where it seems like a new tool will solve everything... it will find its niche. Like agile did. Like Bluetooth did. And so on.
IMHO, AI is not ready for prime time yet anyway, despite everybody jumping on the bandwagon.
Programming is not coding (Score:1)
All this "coding will be replaced by AI" FUD stems from a fundamental misunderstanding of what programming is, and typically comes with an unstated assumption that "code" (i.e. programming languages) is an elitist nerd invention designed to keep normal people out of a lucrative job.
That is, of course, utterly absurd. Software development is not about writing code. It's about converting a vaguely-stated real-world problem into an exact specification of a program solving that problem. The act of writing down
Re: (Score:1)
AI doesn't care how you define programming. It is coming for your programming/coding/writing jobs. Hope you are close to retirement.
Half Developer, Half AI, Fully Confused (Score:3)
Looks like we've finally hit the era where 'Have you tried turning it off and on again?' is replaced with 'Have you asked GPT-4 yet?' The nostalgic part of me misses the days when debugging was more about coffee and less about cloud-based AIs. Remember when our biggest worry was a misplaced semicolon, not whether our AI co-pilot might replace us? Sure, Ben and his GPT-4 might be the new dream team, but let's not forget the unsung heroes: Stack Overflow and coffee. Maybe the real 'centaurs' of programming are just devs with a decent Wi-Fi connection and a strong espresso. And who knows, maybe one day our AIs will write nostalgic articles about the good old days of human coders.
Re: (Score:2)
I miss the days when an entire programming language could be reasonably well described in a book of 80 or 90 pages, like TRS-80 BASIC. I literally learned how to code from the BASIC instruction book for my shitty little TRS-80 MC-10 with a whopping 20K of RAM (4K + 16K expansion pack). Now I wager there's not a program I toiled over for hours in 1982 and 1983 that GPT couldn't recreate in a few seconds. But that's progress.
Re: (Score:2)
Well, Scheme still has your back there.
Re: (Score:1)
Scheme was def my favorite programming language. For a class assignment I added complex number math to a spreadsheet. Good times!
Re: (Score:2)
> I miss the days when an entire programming language could be reasonably well described in a book of 80 or 90 pages, like TRS-80 BASIC
Sometimes that still crops up on the deep embedded end. The description is 2 pages for the summary, with a further 6 pages of excruciating detail (page 69, heh heh heh, onwards):
[1]https://ww1.microchip.com/down... [microchip.com]
I haven't done pic asm in a few years but it's very refreshing, somehow.
[1] https://ww1.microchip.com/downloads/en/devicedoc/41190c.pdf
Re: (Score:3)
> Looks like we've finally hit the era where 'Have you tried turning it off and on again?' is replaced with 'Have you asked GPT-4 yet?' The nostalgic part of me misses the days when debugging was more about coffee and less about cloud-based AIs. Remember when our biggest worry was a misplaced semicolon, not whether our AI co-pilot might replace us? Sure, Ben and his GPT-4 might be the new dream team, but let's not forget the unsung heroes: Stack Overflow and coffee. Maybe the real 'centaurs' of programming are just devs with a decent Wi-Fi connection and a strong espresso. And who knows, maybe one day our AIs will write nostalgic articles about the good old days of human coders.
While I do think the day will come when AI will be able to program at least on par with most developers, right now it's more of a souped-up search engine. Whereas we used to Google for other developers' solutions to the problems we faced, now we can go to GPT-4, tell it the sitch, and rather than getting page after page of results we have to sort through manually to find our solution, it tries its best to do that sorting for us and return what it considers the optimal result. And as it learns optimal re
Bad programmers have always existed (Score:3)
and they're always in high demand, because they're cheap and good enough is good enough. If you're a skilled programmer, you're either really good at math and not really a programmer (you're a mathematician using your tools), or you're going to get out of programming quickly and move into some kind of management role.
The alternative is to either start your own company (and hope you get bought out, because with zero anti-trust law you'll either be bought or run out of business if you're successful) or wait un
get a union and strike to get AI protections (Score:2)
Get a union and strike to get AI protections.
Re: (Score:1)
That's the fastest way to have AI replace them.
Re: get a union and strike to get AI protections (Score:2)
Yes, let's retard progress instead of instituting UBI. This is why we can't have nice things. We NEED progress, but we humans also NEED progressive social systems if we are going to survive it.