News: 0181150148

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Linux Maintainer Greg Kroah-Hartman Says AI Tools Now Useful, Finding Real Bugs (theregister.com)

(Saturday March 28, 2026 @06:34PM (EditorDavid) from the better-bugs dept.)


Linux kernel maintainer Greg Kroah-Hartman tells The Register that [1]AI-driven code review has "really jumped" for Linux. "There must have been some inflection point somewhere with the tools..."

> "Something happened a month ago, and the world switched. Now we have real reports." It's not just Linux, he continued. "All open source projects have real reports that are made with AI, but they're good, and they're real." Security teams across major open source projects talk informally and frequently, he noted, and everyone is seeing the same shift. "All open source security teams are hitting this right now...."

>

> For now, AI is showing up more as a reviewer and assistant than as a full author of Linux kernel code, but that line is starting to blur. Kroah-Hartman has already done his own experiments with AI-generated patches. "I did a really stupid prompt," he recounted. "I said, 'Give me this,' and it spit out 60: 'Here's 60 problems I found, and here's the fixes for them.' About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right." Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. "The tools are good," he said. "We can't ignore this stuff. It's coming up, and it's getting better...." [H]e said that for "simple little error conditions, properly detecting error conditions," AI could already generate dozens of usable patches today.

>

> The sudden increase in AI-generated reports and AI-assisted work has also spurred a parallel push to build AI into the kernel's own review infrastructure. A key piece of that is Sashiko, a tool originally developed at Google and [2]now donated to the Linux Foundation.

Kroah-Hartman said some patches are being generated with AI now. "You have a little co-develop tag for that now. We're seeing some things for some new features, but we're seeing AI mostly being used in the review."
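The "simple little error conditions" Kroah-Hartman describes are typically patterns like unchecked allocations. A minimal sketch of that bug class, with invented names (this is an illustration, not code from the article or the kernel):

```c
/* A hypothetical example of the "simple little error condition" class of
 * bug that AI review tools reportedly catch well: an allocation whose
 * failure path is never checked. All names here are invented.
 */
#include <stdlib.h>
#include <string.h>

struct widget { char name[32]; };

/* Before: dereferences w even if malloc() failed. */
struct widget *widget_create_buggy(const char *name)
{
    struct widget *w = malloc(sizeof(*w));
    strncpy(w->name, name, sizeof(w->name) - 1); /* crash if w == NULL */
    w->name[sizeof(w->name) - 1] = '\0';
    return w;
}

/* After: the kind of one-line fix such a report would propose. */
struct widget *widget_create_fixed(const char *name)
{
    struct widget *w = malloc(sizeof(*w));
    if (!w)
        return NULL;                 /* propagate the error condition */
    strncpy(w->name, name, sizeof(w->name) - 1);
    w->name[sizeof(w->name) - 1] = '\0';
    return w;
}
```

The fix is mechanical, which is exactly why this class of patch is amenable to automated generation, even if a human still has to write the changelog and verify the error is propagated sensibly by callers.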



[1] https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/

[2] https://www.theregister.com/2026/03/20/sashiko_code_review_linux/



For me, it is last few months... (Score:5, Informative)

by dragisha ( 788 )

since AI agents became usable and started to bring results.

Of course, you must have skills not usually associated with the manager caste: ask precise questions, be realistic in expectations, and be ready to jump in and fix something yourself in ten minutes instead of spending that time on five more prompts. Among others.

So it is not a question about AI being usable or not; it is a question about it being useful enough to cover its expenses and ensure ROI.

An improbable thing to happen.

Re: (Score:2)

by HiThere ( 15173 )

Compare this to what you would have said last year.

Re: (Score:3)

by dragisha ( 788 )

> Compare this to what you would have said last year.

I remember it well enough.

Re-read the part above about "its expenses". All of this is extremely costly and needs the skills I started to enumerate. It is cheap for users today, but it will not remain so.

If not for the Chinese factor, prices would have skyrocketed already. Real competition there is what keeps prices in check. And this, while good for us, is not so good for the (especially US) AI industry: no real prospect of ROI, and we have yet to see what happens when the bubble bursts.

Just the other day, I compared AI agent use to the

Re: For me, it is last few months... (Score:2)

by dknj ( 441802 )

The ROI is at the nation-state level. The ones who benefit are the rich and powerful in control right now. The reason it is being gatekept is to give the elite the power now that will be harder to wrestle from later.

Re:For me, it is last few months... (Score:4, Interesting)

by Kisai ( 213879 )

The answer to that is "absolutely not"

If you can't code worth a damn, then of course the AI is going to find a lot of "bugs", and many of those "bugs" aren't even bugs: they generate warnings in the compiler; otherwise the program would not compile in the first place. The first thing you do when you want to eliminate bugs is treat all warnings as errors.

You don't need AI for that.

I'm sure AI is useful for finding errors that don't show up as warnings first, but I can tell you first- and second-hand that your average open source project has thousands of bugs in it, and they're ignored because the compiler is allowed to ignore warnings, especially those about truncation and incorrect casts.

Do not let the AI recommend solutions unless the code going into it is already 100% correct, otherwise you may simply be "unplugging the oil pressure light" rather than servicing the vehicle.

One thing that would be interesting (Score:2)

by rsilvergun ( 571051 )

If AI ever gets to the point where it can outperform human beings at finding defects then there's going to be a major issue with world powers.

That's because right now, if you really want to hack somebody's data, you can. There is a company out of Israel that will sell you software, if you have enough money and enough connections, and that software can break into just about any phone in existence. If they can break into the phones, they can get past most encryption mechanisms.

So the question is what

Code review is not what AI is being sold as (Score:2)

by dfghjk ( 711126 )

There's nothing wrong with using AI tools to review code and identify issues, real humans will review those issues and solutions after all. It's a far cry from what the AI industry claims AI tools will be useful for, specifically writing all the code in the first place.

Writing good code requires creativity, hard work, and accountability; reviewing code is all over the map: it doesn't require creativity and does not come with accountability. Sounds like something AI might be suited for.

Re:Code review is not what AI is being sold as (Score:4, Insightful)

by LainTouko ( 926420 )

In general, the principal problem with LLMs is that they're completely unreliable, due to their basic design. But in cases where they're just saying "look at this, maybe this is a problem", reliability is not required, because if it makes no sense, someone can just say "no". The problem comes when people begin to trust them, despite them being completely untrustworthy. Applications where trust is not required are fine.

Re: (Score:3)

by bwoodring ( 101515 )

The principal problem with humans is that they're completely unreliable, due to basic design.

Re: (Score:2)

by bloodhawk ( 813939 )

AI absolutely is being sold for code review. It is also assisting with code writing. The vast majority of good code requires precisely ZERO creativity; it requires accuracy and following strict business rules.

This is a good approach (Score:2)

by MpVpRb ( 1423381 )

Instead of using AI to "increase productivity" by quickly generating bloated, inefficient, bug-ridden, insecure slop, the better use of the tools is to find bugs, security weaknesses, and unhandled edge conditions. AI research should focus on creating better code: bug-free, efficient, and secure, with all edge cases handled.

He was one of the best of us (Score:1)

by Iamthecheese ( 1264298 )

Rest in peace MJ Rathbun

In No Way Worth the Cost (Score:2)

by BrendaEM ( 871664 )

From the get-go, on most of the sites you visit you will be checked for AI, because AI has scraped everything from the internet, copyrighted or not. Add to that problem: we are already seeing rising unemployment, oil-eating, planet-warming data centers, and billionaires becoming even richer so they can meddle with your government. And for what?

Re: (Score:2)

by real_nickname ( 6922224 )

They said it was to cure cancer, but brain rot and fake young women is what we got.

Re: (Score:2)

by backslashdot ( 95548 )

It will cure cancer. In combination with robotics, it will make personalized cancer treatments based on each person's tumor genome. Basically, once you have a few biopsies of a person's cancer, you can determine which proteins, DNA, and RNA are aberrant and design a treatment against that.

Yep (Score:1)

by cascadingstylesheet ( 140919 )

A tool. A very useful tool, if you know what to use it for and how to use it.

Unfort. e'ryone picked an opinion/side two yrs ago (Score:2)

by Hadlock ( 143607 )

Unfortunately, everyone picked an opinion two years ago, when AI was genuinely garbage beyond some basic bash scripts or the top 1000 bugs/questions on Stack Exchange (which mostly overlap). AI started getting really good in Dec '24, particularly in spring '25, and by August 2025 even the $20/mo tier of ChatGPT was starting to get legit, as OpenAI started to try catching up with (now market leader) Anthropic and their blessed Claude Code. The 4.5/4.6 models released this year are nothing short of incredible, and the

To teach is to learn.