30 percent of some Microsoft code now written by AI - especially the new stuff
- News link: https://www.theregister.co.uk/2025/04/30/microsoft_meta_autocoding/
Nadella revealed that number during an interview with Meta boss Mark Zuckerberg at Meta’s LlamaCon AI event.
A few minutes into their chat Zuck asked Nadella “Do you have a sense of how much of the code, like what percent of the code that's being written inside Microsoft at this point is written by AI as opposed to by the engineers?”
Nadella responded by saying Microsoft tracks accept rates, which he said are “sort of whatever 30-40 percent it's going up monotonically.”
The Microsoft CEO said plenty of the company’s code is still C++, which he rated “not that great”. Microsoft maintains a lot of C# too, and Nadella said it’s in “pretty good” condition, while more recent Python is “fantastic”.
Overall, Microsoft finds AI is best at writing entirely new code rather than reworking older material, but the company is finding ways to use AI often across its codebase.
“I'd say maybe 20 to 30 percent of the code that is inside of our repos today in some of our projects are probably all written by software.”
Nadella then asked Zuckerberg the same question, but the social networking nabob said he couldn’t recall a statistic and said data points about AI coding sometimes reflect use of auto-completion tools and therefore don’t accurately describe software written entirely by other software.
Zuck said Meta has teams working on auto-coding in domains where it can see its own history of changes.
The Meta man said “the big one that we're focused on is building an AI and a machine learning engineer to advance Llama development itself.”
Zuck said that “in the next year probably … half the development is going to be done by AI as opposed to people and then that will just kind of increase from there.”
Nadella riffed on that by pondering whether development tools and compute infrastructure should be rebuilt so they can be driven by AI agents.
The mutant offspring of Word, PowerPoint, and Excel
The Microsoft boss also thought-bubbled about the blurring line between documents and applications.
He explained that he now researches topics by using an AI chatbot and saving the results, and said auto-coding means that process can result in creation of software.
“This idea that you can start with a high level intent and end up with … a living artifact that you would have called in the past an application is going to have profound implications on workflows,” he said.
It may also dissolve what he described as the “artificial category boundaries” between documents and apps – a problem he revealed Microsoft has tried to address in the past.
“We used to always think about why Word, Excel, PowerPoint isn't it one thing, and we've tried multiple attempts of it. But now you can conceive of it … you can start in Word and you can sort of visualize things like Excel and present it, and they can all be persisted as one data structure or what have you. So to me that malleability that was not as robust before is now there.”
Which sounds like the OpenDoc vs. OLE wars of the 1990s – during which Microsoft and Apple fought over how to share data across apps – brought into the AI age.
One more thing: Neither billionaire commented on whether their autocoding efforts are costing jobs, or if the code their companies generate without human input has proven problematic. Zuckerberg said he thinks AI coding presents an opportunity to improve security. ®
Re: Only 30%?
This explains a lot of things...
... and should serve as a warning example.
Re: Only 30%?
Based on personal experience I would guess that software engineers are using AI in development 30% of the time.
Re: Only 30%?
It'll be something like WorXPoint and everyone will go "what the hell".
But actually it's more likely something like SuperDuperMerger and people will have no idea what it's supposed to be.
Re: Only 30%?
It's supposed to be LibreOffice-killer.
MS: "Hey, open-sourcers ... implement this! "
Re: Only 30%?
Microsoft 123.
Lotus had WYSIWYG text in their spreadsheet and tried to present it as a word processor before they had Ami Pro (I forget the details).
Look how well it worked for them!
Re: Only 30%?
You might be thinking of the (original) Lotus Symphony: https://en.wikipedia.org/wiki/Lotus_Symphony_(MS-DOS)
Re: Only 30%?
No, it was definitely Lotus 123. I remember a young trainee being sent on a training course for Lotus. When she returned, she insisted on using Lotus as a word processor.
It was DOS though.
Re: Only 30%?
an amalgam of word excel and PowerPoint.
World Exception Power
Re: Only 30%?
an anagram of word excel PowerPoint is excretion plow powder
Re: Only 30%?
Buggy McBugface obviously :)
Re: Only 30%?
well played sir/madam/AI
Re: Only 30%?
It depends on what's meant by that. Individual LibreOffice applications are calls to a common executable with a flag to tell it which functionality the user wanted. He might simply be catching up with that, but it would likely be a complete rewrite. Something for Office users to look forward to.
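For the curious, that single-launcher design is visible from the command line. A minimal sketch (the `soffice` binary is LibreOffice's common launcher; older single-dash spellings of these flags, as in the `libreoffice24.8 -writer` example elsewhere in this thread, are also accepted):

```shell
# One executable, soffice; a module flag selects which "application" appears
soffice --writer   # open the word processor
soffice --calc     # open the spreadsheet
soffice --impress  # open the presentation tool
```

In other words, the separate apps are mostly separate entry points into one program, which is the point the commenter is making.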
Re: Only 30%?
Surely they'll call it "Office App"?
Code Merge
So Microsoft is, by merging its three primary end-user apps, attempting to create a proprietary version of Emacs.
Re: Only 30%?
It's not a "hell child", it's "a living artifact."
So the Antichrist then?
writing code is only part of the equation
And do they use AI to write the tests that test the generated code? Isn't that marking your own homework? It cannot see the defects because it didn't see them in the first place.
Re: writing code is only part of the equation
Yes. Exactly this. Thumbs up. I don't think I have a problem with some code being written with the help of an AI. That's just code monkeying. But tests? The formal declaration of the specification? The "do you really understand what the hell you're supposed to be developing?" part? Seems like a bad idea to me.
Re: writing code is only part of the equation
This was my thought. It's been many decades since I wrote any software (first as a schoolboy and then as a skilled amateur, but in those days most of us were). The hard part wasn't writing the code. It was working out what I needed the code to do, and who for, and what they'd do with it, and why. That sort of human stuff.
Re: writing code is only part of the equation
"do they use AI to write the tests to test the generated code?"
Testing??? This is Microsoft we're talking about - testing is done by the plebs, er, customers!
Number game bullshit
I call complete and utter bullshit on these numbers. If they were true, then the whole software industry would already be in complete uproar and upside down.
Re: Number game bullshit
30% of some projects
(My emphasis)
Two is "some". Maybe just the one, rounded up.
I suspect this is probably the internal test project for Copilot, and it represents the absolute best they could get to compile.
Run? Nah, it doesn't need to run!
Re: Number game bullshit
It compiles, let's put it in production...
Re: Number game bullshit
The software industry is already upside down. Why do they have to keep patching the rubbish product? I am glad I don't have to patch my car every year or more.
Re: Number game bullshit
If you had a Tesla you would have to do exactly that...
Re: Number game bullshit
He says 30% of the code in the repos. He doesn't say how those repos relate to the product release.
It will be interesting to see how buggy the next release is.
He also doesn't indicate how that code gets in the repo, but implies the statistic is from committer status. So if he's got 30% of his repos due to uncurated AI commits the next release is going to be .. interesting. I'd hope for his sake that all the commits were made by humans, after testing, and 30% is the proportion that used AI assistance on the changes.
Re: Number game bullshit
These are true. They asked their LLM and it immediately returned with these numbers.
Re: Number game bullshit
Uhm, it's already upside down and has been for a very long time.
It's easier than ever to believe 10 impossible things every morning.
LLM training is by its very nature lossy, and as a result LLM-generated code quality is significantly poorer than the average quality of the code used to train it.
This would normally be a concern. However, for Microsoft, below average probably represents a huge increase in code quality, going by the number of critical flaws patched every single month. I can see why they are so keen on it.
It's not having the LLM write code that's the problem.
The problem is how the hell are you testing or debugging that? Because tests can't ALSO be written by AI, and debugging something that another human has written is bad enough, or even something that YOU WROTE YOURSELF. Trying to debug some spam-churn out of an LLM? No thanks, I'd rather be out of work.
Testing? This is MICROSOFT!! Testing is for losers....
That's what end-users are for.
To be fair (and I'm not), it could be done by a DIFFERENT AI, preferably separately trained with different data.
Only if you're prepared to tell all your customers: "Sorry, no human was ever involved, our AI just did this, not our fault".
Good luck with that.
Given the spin that's put out about AI half the people would believe that and the other half would be saying "It can't be worse".
This would normally be a concern. However, for Microsoft, below average probably represents a huge increase in code quality, going by the number of critical flaws patched every single month. I can see why they are so keen on it.
I'm not so sure.
Given the, IMHO, rather dramatic decline in the quality of their latest releases, I'd argue they actually started this a while back, in which case Microsoft's promises that this would somehow improve things have about the same value and persistence as one from Trump.
For instance, the new Outlook is so feature-deficient that it's almost unusable for users who have come to rely on those features. Personally, I loathe Microsoft deciding that urgent messages cannot simply be merged into the new date sort and will ALWAYS appear first. I can see why that is important for some, but not for everyone, and in my opinion it is becoming very clear that Microsoft has simply stopped listening to its users because, let's face it, it's not like a majority of them even has a choice.
Well that certainly explains Windows 11 and Copilot
Windows 11 has completely gone to shite and Copilot has never been anything other than complete shite.
If they were 30% written by 'AI' that would explain a lot.
24H2
AI increases explain this abomination
Explains everything
It all makes sense now. No real people coding, garbage churned out based on historic questionable code. No-one understanding it, or testing it internally.
"Ship it to the users, let them find the bugs"
As for an uber-app combining Word, Excel and electronic crayons: the users I've witnessed can barely operate the standalone applications. I dread to think of the support calls that will come from someone who can't format their table into columns like Word while making it fly in like PowerPoint.
I suppose eventually they'll shoe-horn Outlook into this abomination too. Programs aren't complete until they can email.
The curse of recursive Clippy
It seems like you are .. It seems like you are .. It seems like you are ..
the blurring line between documents and applications
One ring to rule them all? No thanks...
I make data. I choose a program to generate it; I choose a program to process it. Those two programs may or may not be the same as each other; there may be more than one choice in either field. But _I_ choose...
Re: the blurring line between documents and applications
The end point they're aiming at is you write a description of what you want done with the data, and it's done for you. You don't have to worry about the software. It may pick the package for you. Or it may even code it on the spot.
Whether we get there or not, I don't know. But that's the vision. My guess is it will work for low hanging fruit. Where the boundary between low and high is, I don't know.
Re: the blurring line between documents and applications
That's how programming already works. We write a functional description (we call it the spec), then we write a procedural description (we call it the code) then machines write a register-level description (we call it the executable).
In general, the functional description is incomplete and the programmer makes a bunch of judgement calls to turn it into a procedural description. This results in bugs. You can significantly reduce the number of bugs by having a more thorough functional description, but the people who write those aren't generally capable of doing better, which is why they don't.
Having AI do the procedural 'compilation' puts the onus on the functional spec writer to write a good spec. That's not going to happen.
Yikes
See title
"Zuckerberg said he thinks AI coding presents an opportunity to improve security"
He's underselling. The more AI-written code you release, the more opportunities you'll have to improve security.
Symphony / Framework III
Nadella needs to look at the past - kind of
Couldn't do graphics, but everything else in one
That approach works
libreoffice24.8 -writer
libreoffice24.8 -calc
etc.
Feeble Point
I cannot understand why the trade press burbles on about PowerPoint. The programs I use, in order of importance, are OneNote, Word, and Excel.
Re: Feeble Point
I have to say that's what I use too. PowerPoint, or its LO equivalent, is something I have in the background when I'm doing my voluntary "Digital Champion" thing, because Local Authority corporate requires it, and which I pretty much ignore after I've done the "housekeeping" slides.
If you need a long flow of slides behind you to say what you are meant to be telling the audience, you aren't telling the audience properly.
Re: Feeble Point
The long list of charts is so that when they come back to look at the stuff 6 months later they think "Ah yes, I remember." Without these, people forget or misremember.
Also think of the people whose first language is not English.
Only 30%?
We've gotta have a competition to give a name to whatever hellchild is an amalgam of word excel and PowerPoint... Jesus WEPP?