'Vibe Coding' is Letting 10 Engineers Do the Work of a Team of 50 To 100, Says YC CEO (businessinsider.com)
- Reference: 0176762715
- News link: https://developers.slashdot.org/story/25/03/18/1428226/vibe-coding-is-letting-10-engineers-do-the-work-of-a-team-of-50-to-100-says-yc-ceo
- Source link: https://www.businessinsider.com/vibe-coding-startups-impact-leaner-garry-tan-y-combinator-2025-3
"You can just talk to the large language models and they will code entire apps," Tan [2]told CNBC (video). "You don't have to hire someone to do it, you just talk directly to the large language model that wrote it and it'll fix it for you." What would've once taken "50 or 100" engineers to build, he believes can now be accomplished by a team of 10, "when they are fully vibe coders." He adds: "When they are actually really, really good at using the cutting edge tools for code gen today, like Cursor or Windsurf, they will literally do the work of 10 or 100 engineers in the course of a single day."
According to Tan, 81% of Y Combinator's current startup batch consists of AI companies, with 25% having 95% of their code written by large language models. Despite limitations in debugging capabilities, Tan said the technology enables small teams to perform work previously requiring dozens of engineers and makes previously overlooked niche markets viable for software businesses.
[1] https://www.businessinsider.com/vibe-coding-startups-impact-leaner-garry-tan-y-combinator-2025-3
[2] https://www.youtube.com/watch?v=coojA-odaTk
Brought to you by AI - Is there anything it can't do? (Score:2)
This AI propaganda is getting ridiculous.
> with 25% having 95% of their code
It's penetrated 1/4 of a niche market. We should surrender to the corporate AI gods. Please remember us when you are charging $20000/agent for something that doesn't even exist.
Re: (Score:3)
The solution to criticism of how bad these apps will be is to squelch that criticism, something AI will be able to do and something its billionaire creators will focus on doing. AI will be able to create these apps, so long as AI gets to decide what the standards are for judging the apps created. You will buy it and you will like it.
*cough*BS* (Score:4, Insightful)
I don't think they know what they are saying.
They're letting 10 idiots do all the work of a team of 50-100, which is going to take 10,000 person-years to fix once it breaks, because nobody knows jack about it thanks to the lack of documentation.
Re: (Score:3)
Doesn't matter, the "right" people have made the money by then. This app will be discarded and the next con job will be underway.
There will be no fixing of these apps, there will only be fixing of your attitude.
Re: (Score:3)
That sounds like a problem for another quarter.
Re: *cough*BS* (Score:1)
This is perfect for Internet of Shit devices.
YDKWYDK (Score:3)
They'll never know what it gets wrong, but a lucky customer will. And I am curious who will weave in bug fixes, new features, or breaking-change upgrades. The AI? The vibe may be quite solemn indeed when the LLM cannot generate symbol sets for something it was never trained on.
Using AI to write software... (Score:2)
...works great when making simple code that is similar to popular, published code.
The prompt "write a snake game in python" works because snake games exist and are simple to make.
Creating novel, large and complex code is a different problem.
A very large codebase is too complex to fit in one human mind. No single person, even if smart and talented, knows every detail of how it works.
If a single mind can't fully understand a complex system, it can't create a prompt to generate it.
If it was possible, the promp
Re: (Score:2)
"Creating novel, large and complex code is a different problem."
It isn't if you subscribe to bottom-up programming. Of course, only morons accept bottom-up programming, among them Agile-philes, but those people are the ones pushing this bullshit.
"If a single mind can't fully understand a complex system, it can't create a prompt to generate it.
If it was possible, the prompt would be a multiple thousand page specification."
These are bad arguments. You do not need to know every detail to be effective at top-
Re: (Score:2)
> You do need creativity, though. How much of that does AI have?
In most benchmarks, more than humans.
Don't know why this should be a surprise to you.
There are a couple of things to look at here.
1) Creativity that exists within the data. It can take human science decades to piece together an obvious fact from two bits of data, like Special Relativity. For humans, it's hard to even see connections that were always there and obvious.
2) temperature.
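On the temperature point above: temperature is the sampling knob that trades sharpness for variety. A minimal, illustrative sketch of temperature-scaled softmax (the function name is made up, not any particular model's API):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before the softmax:
    # low temperature sharpens the distribution toward the top token,
    # high temperature flattens it, giving more varied ("creative") samples.
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With temperature below 1 the top logit dominates; above 1 the tail tokens get noticeably more probability mass.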
Re: (Score:2)
There's lots of coding done that's simple and easy for AI to write. For that sort of stuff you might as well have the AI churn out the basic parts while the human coders handle the big picture and complex parts.
Or is this just rightsizing (Score:1, Troll)
They are claiming they can have 10 developers do the work of 100....
But what if that's not because of AI, but simply because those 10 coders are actually working at full capability?
After all, Twitter reduced headcount by over 80%, and not only kept functioning but started adding more features. They were not using AI tools to achieve this, they simply had tons of coders not doing much!
Maybe "vibe coding" is nothing more than finding a small number of developers that are efficient and actually work most of t
Re: (Score:2)
Haha you buried the sad troll.
quality. (Score:5, Interesting)
LLMs can't reason; they can only predict what you're asking for, match it against what the model has, and provide what it thinks is an answer.
Can you generate code with it? Yes. Is it the code you want? Maybe. Is it quality code, with no bugs? Probably not.
Will you have to have an actual professional software developer fix it? Yes.
LLMs trained on examples don't have an understanding of anything, only a prediction path. It's time we stop pushing the fallacy that they are somehow better than experienced professionals at anything beyond generating fakes.
Re: (Score:2)
Yes. Is it the code you want? Maybe. Is it quality code, with no bugs? Probably not.
One of these days the devs whose code they are using is going to find out and start issuing Copyright claims against the companies doing AI code completion AND their customers.
Re: (Score:2)
I've had mixed emotions about copyright around code. Now, an overall solution, look and feel, trademarks, and certainly patents all apply. But if I take something off a website where you've published a code sample or even an entire solution, it should be labelled as such and attribution certainly given if reused.
Code that's GPL'd would probably apply here as well.
I guess that's why there's arguments around copyright and AI but that's for legal scholars and politicians to argue.
Re: (Score:2)
if I take something off a website where you've published a code sample or even an entire solution, it should be labelled as such and attribution certainly given if reused.
Attribution only satisfies the author's moral rights. The author of computer program code also has the exclusive right to commercially exploit their writings, and attributing it does not make it legal for someone else to do so.
Sample code off the internet is generally for your learning or personal use only; not legal to copy and paste
Re: (Score:2)
That would probably lead to the end of the software industry, if proving that a particular code segment was influenced by another code segment made someone liable for copyright infringement.
Re: (Score:2)
Instead of writing a proper argument, is it a great idea to write a list of silly questions? Probably...
Re: (Score:2)
> LLMs can't reason, they can only predict what you're asking for and try to match it up against what the model has and provide what it thinks is an answer.
This is so laughably fucking wrong, it's akin to flat eartherism.
What's really remarkable here, is the brigade of morons that moderated you up.
The question of whether an LLM can reason has long since been replaced with "well, is it True Reasoning", since anyone with eyes can watch a modern CoT LLM reason.
As for your particular claim, we don't even need to address whether it's "True Reasoning" (whatever the fuck that is).
LLMs take their own token generation into account while generating your answer. That alone provides
Re: (Score:2)
Don't confuse statistical inference, which is what mimics "reasoning" in an LLM, with reasoning. LLMs have a lot of problems with hallucinations and accuracy that a reasonable professional can see through. The areas where LLMs have strengths (NLP, translation, and synthetic data generation) are beneficial, but they don't create new knowledge; like any tool, though, they could lead to new insights from existing information.
Questioning LLMs and the hype around them isn't a case of misguided disregard for technological in
more hype and idiocracy from the ultra-rich (Score:2)
"...they will literally do the work of 10 or 100 engineers in the course of a single day."
As long as that work is shitty work, as long as the expectations of the app are low enough, as long as quality of software continues its trend downward as this will ensure.
"According to Tan, 81% of Y Combinator's current startup batch consists of AI companies, with 25% having 95% of their code written by large language models."
That's not good news, it's a condemnation of Silicon Valley greed and billionaire tech bros.
"
will be the hottest thing for 3 years (Score:5, Insightful)
C-levels absolutely do not care about long-term maintenance. Their sole focus is this quarter's stock price. When you can replace hundreds or thousands of dead-weight, liability-ridden staff with a CaaS (coder as a service), your costs drop, your margin increases, and your stock price rises, with zero capital outlay and no increase in sales or productivity. This doesn't even take into account overhead like managers, HR, office space, equipment, and supplies, all of which can be reduced. Stock price is how CEO pay is determined, and owners, aka shareholders, will love it. Stock almost always rises on news of layoffs. Automating expensive knowledge workers has been the nirvana for AI offerings since the concept began. This is a shot across the bow for the end of "programming" as a trade. I'm going to say it again: get out while you are still in control of your destiny. Once you're laid off with one of the hordes, the markets will be flooded and finding jobs ~ any jobs ~ will be tough.
Re: (Score:2)
Look what happened to automotive mechanics. Long gone are the days of listening, smelling, observing, touching and thinking. The "art" of car repair is reduced to plug in a computer and replace the parts the computer tells you till the problem vanishes.
Re: (Score:2)
I have no points to upvote you, but I would do it 100x if I could.
What I see here is an incredible number of people looking the other way and singing 'lalala' at an obvious omen of their own demise.
We've seen this movie: VB & RoR (Score:2)
Generating code that works is EASY. Generating code that works well? That's what software engineers spend decades mastering. There is precedent for this: easy-to-use programming languages and tools. Visual Basic is the first that comes to mind. I honestly never used the product, but that is itself a metric. It's been around since before I began my career, and I even used to see it here or there when I started. The press seemed to like it, but everyone I talked to hated it. More importantly, it g
Applying Kernighan's wisdom (Score:2)
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
Applying this logic to LLMs used by people who can't code at all is left as an exercise for... well, probably for your pet LLM.
Re: (Score:2)
logic ?
There's no logic to that whatsoever.
There's an assertion, and then there's fallacious logic (begging the question) working off of it.
And then what ? (Score:2)
From my attempts at coding with ChatGPT, I have a hard time believing you can build anything complex enough with this method, but okay. But then when you have a requirement change later on, what do you do? AIs are notoriously bad at taking something that exists and making a small modification (just ask the graphics people who try to regenerate AI art with some small modifications), and nobody understands the code since nobody wrote it. How does that unmaintainable code work for you?
Re: (Score:2)
> AIs are notoriously bad at taking something that exists and making a small modification (just ask the graphics people who try to regenerate AI art with some small modifications), and nobody understands the code since nobody wrote it. How does that unmaintainable code work for you?
Complete unadulterated bullshit.
LLMs are absolutely excellent at making modifications to code.
You slam what you need in the context window, tell it what you want changed, and tell it to give you the full result, or a diff if you want.
> From my attempts at coding with ChatGPT, I have a hard time believing you can build anything complex enough with this method, but okay.
Given the above, I'm 99% sure you're completely full of shit.
Here come the Followers of Ned Ludd (Score:2)
The Luddites weren't necessarily against the machines they sometimes wrecked, but rather the downward pressure on wages and product quality.
Parallels to this in /. comments are left as an exercise for the reader.
AI coders are not coders (Score:5, Funny)
I saw this thread on reddit about a month ago. Guy was using an AI tool to program like this, and lost 4 months of work when the AI went nuts and deleted everything. He'd never even heard of git. Then a bunch of other AI coders started jumping in telling him that he needs to use git, and keep copies of all of his code in different folders at milestones so that he has an extra backup. Not one of them had any clue what they were talking about.
This all happened in an AI coding subreddit, I saw it linked from /r/programminghorror. This thread made me feel much more secure in my job lol.
[1]https://www.reddit.com/r/curso... [reddit.com]
[1] https://www.reddit.com/r/cursor/comments/1inoryp/cursor_fck_up_my_4_months_of_works/mccn5mh/
Adversarial coding may give us a timeframe (Score:1)
Adversarial coding may let us know when this approach is good enough for real-world use.
Team A: A few humans + AI writing code.
Team B: A few humans + AI looking for problems with the code.
Team C: Enough good/experienced humans to really pick apart the code and find all but the most obscure serious issues.
When Team B gets as good as Team C, then we can talk about "a few humans + AI writing code" for real-world projects.
Until then, you may want to stick with Team D: Enough good/experienced humans to write
Bullshit (Score:2)
Someone should check these companies' code bases. Because I can personally say, as someone who has been using ChatGPT to assist with a few personal projects, it fucks up ALLLLLL the time. Just enter a few hundred lines of code and ask it to reprint that code back to you. About half the time it will leave out small bits or entire chunks of code here and there. Don't even get me started on the coding. If you have any ambiguity in your questions/instructions you are going to get a best guess answe
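One cheap way to catch the silent omissions described above is to diff the model's reprinted code against the original. A minimal sketch using Python's stdlib difflib (show_omissions is a made-up helper name):

```python
import difflib

def show_omissions(original, reprinted):
    # Any line the model silently dropped shows up as a "-" line in the diff;
    # an empty result means the round-trip was faithful.
    diff = difflib.unified_diff(
        original.splitlines(), reprinted.splitlines(),
        fromfile="original", tofile="reprinted", lineterm="")
    return "\n".join(diff)
```

Paste your source in as `original`, the model's echo as `reprinted`, and any dropped chunk is immediately visible.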
Write Only code. (Score:2)
AI tools create Write-Only code. That is, it performs the purpose intended - with a few random bugs, and security exploits - but when you need to modify anything, you start over completely.
Most developers could increase their productivity if they could write code with no thought to maintainability. There's even a guide: [1]How to Write Unmaintainable Code [github.com] - which the AI, no doubt, has been trained on.
When I was in college, I took a course in assembly. Recognizing that the instructors were providing psu
[1] https://github.com/Droogans/unmaintainable-code
Re: (Score:2)
> AI tools create Write-Only code. That is, it performs the purpose intended - with a few random bugs, and security exploits - but when you need to modify anything, you start over completely.
Wrong.
Yet another post on LLMs, yet another complete falsehood from you.
LLMs will gladly format code however you want, make iterative changes to it, give it to you as diffs, or entire files. Whatever the fuck you want. They output well-commented and readable code.
You're not wrong about the random bugs and security exploits, though. That's very much real.
Executives don't understand (Score:2)
Only an out-of-touch executive would think a software system just needs "coding" to implement it. That's the least of the job. The more important aspect is designing the system: figuring out how functional modules should be organized and how they should interact.
Which AI is he using? (Score:2)
Serious question. I've only tried the free ones, but I've tried the GitHub one, ChatGPT, and Microsoft's AI to do some basic Python coding with Flask, literally hello-world shit, because I hadn't written a Python Flask app before. And every time it spit out code that was almost, but not quite, entirely unlike tea.
That is to say, it gave me something that looked like it was supposed to work but was never going to work, because it was hopelessly out of date. I mean like 10 years out of date.
May
Only modest hyperbole (Score:2)
"81% of Y Combinator's current startup batch consists of AI companies" so clearly this guy is motivated to push the narrative, but I don't think he is very far off the mark. Maybe 10 guys won't replace 100 yet but there definitely is movement in that direction.
I use Windsurf (basically a front-end for Claude Sonnet) and it has boosted my productivity quite a bit. It is particularly useful in situations where I'm not very familiar with the programming language or the APIs I need to work with. You have
Wasn't it just last week (Score:2)
that the vibe coder was told by the AI code generator, "Learn how to program, it looks like I'm doing your homework"? Yeah, no way that code will be maintainable, much less will those "coders" be able to document it or explain what it's doing.
I started working as a programmer/database developer about 40 years ago. I remember some 20 years ago talking to one of my first programming instructors. She was no longer teaching systems analysis because that wasn't what students wanted to learn. They wanted to
Cannot wait... (Score:5, Insightful)
There will be plenty of money cleaning it up in a few years' time...
Re:Cannot wait... (Score:5, Insightful)
On the bright side, it's probably making the job of pen testers very simple. The simple script kiddie attacks that stopped working in the mid-2000s will suddenly work again for a while, until the gen-AI models "learn" how to write secure code. And by "learn" I mean they should probably stop scraping Stack Overflow comments and using them as a source of truth for how something should be done.
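The most classic of those scraped-from-bad-examples attacks is SQL injection, and the fix is a one-line change. A minimal sketch using Python's stdlib sqlite3 (table and function names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a-key"), ("bob", "b-key")])

def find_user_vulnerable(name):
    # Classic injectable pattern: user input interpolated straight into SQL.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the value as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

Feed the vulnerable version the payload `' OR '1'='1` and it returns every row in the table; the parameterized version returns nothing, because no user is literally named that.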
Re: (Score:1)
A lot of the 2000s type vulns had as much to do with the tooling as with the devs. Hard to write a buffer overflow vulnerability in Python unless you are trying.
That said, logic flaws are a lot more interesting and just as devastating. In the 2000s it was all about getting shellz, and I guess it still is in some circles (your state actors, etc.), but most threat actors are there for the $$$, and some bug where the same discount code can be applied 10 times, or you can order an extra slice of cheese for $0.10 with a ha
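The "same discount code applied 10 times" flaw above is a one-guard fix; a hypothetical Python sketch (Cart and apply_discount are made-up names) showing the check the flawed version lacks:

```python
class Cart:
    # Hypothetical cart: the real-world bug is that nothing tracks
    # which discount codes were already used on this order.
    def __init__(self, subtotal):
        self.subtotal = subtotal
        self.applied_codes = set()   # the guard the flawed version lacks

    def apply_discount(self, code, amount):
        if code in self.applied_codes:
            return False             # each code may be applied at most once
        self.applied_codes.add(code)
        self.subtotal -= amount
        return True
```

Without that `applied_codes` check, replaying the same request ten times drains the subtotal ten times, and no scanner flags it because every request is "valid."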
Re:Cannot wait... (Score:5, Insightful)
Buffer overflows have just about always been the least of your worries for web-based apps.
One of your most common issues is not bothering to check permissions at all.
For example: You have an endpoint named /viewinvoice.cgi?id=12345.
Hackers guess that 12346 is probably also a valid invoice ID then query /viewinvoice.cgi?id=12346.
Your web application was only designed to check that the user was logged in... there's no logic to prevent Customer A from viewing Customer B's invoices. User ID 4567 can change User 7890's password, etc.
And that's before you even start looking at Crafted cookie injection exploits, Javascript injections, SQL Injection, XSS, CSRF, etc. Which can be rampant in your app if the AI does not know what kind of design is appropriate.
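The missing ownership check above fits in a few lines. A hypothetical Python sketch (view_invoice and the in-memory INVOICES table are illustrative; the IDs match the comment's example):

```python
# In-memory stand-in for the app's invoice store; IDs from the example above.
INVOICES = {
    12345: {"owner": 4567, "total": 99.00},
    12346: {"owner": 7890, "total": 12.50},
}

def view_invoice(session_user_id, invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        return None, 404
    # The vulnerable version stops at "is the user logged in?".
    # The fix: also verify the invoice belongs to the requesting user.
    if invoice["owner"] != session_user_id:
        return None, 403   # Customer A may not read Customer B's invoice
    return invoice, 200
```

With that one ownership comparison, guessing `?id=12346` gets a 403 instead of someone else's invoice.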
Re: (Score:2)
I used to screen scrape jail registry records for county jails in my home area. Though the IDs weren't exactly sequential, doing groups of 50 would get hits for two of the local counties.
What I found was that, while the website UI wouldn't show juvenile records, you could access them directly with the ID. Surfacing it to the county took a day or so to find the right person, but they quickly closed that hole; who knows how many records were handed out to malicious actors over the years before I found it.
Re: (Score:2)
I used the very same method to look at payrolls in our company. They never realized they could be snooped, and it worked until they moved to SAP some years ago.
Re: (Score:1)
The source of truth does not matter.
Can be Stackoverflow, /. or RTFM.
What is your source of truth?
God given intuition?
Re: (Score:2)
Vibe will make COBOL great again!
Re: (Score:3)
There won't be any need for cleaning; none of these automatically generated apps will provide value worth fixing. This comes from the land of Juicero.
What will need cleaning up is the VC mess left behind.
Re: (Score:2)
> There will be plenty of money cleaning it up in a few years time...
Yup. Nothin' like code literally no one wrote and probably no one understands.
Maybe (Score:2)
Development in AI will not stop. It has really gotten the world's attention now. The promise of eliminating the expensive salaries of software developers is just too enticing. Tremendous amounts of money and energy will continue to be poured into AI research and development, if for that reason alone.
The problems that exist with AI now will be focused on and addressed. Are YOU confident that they are unsolvable, and that the world will always need lots of software engineers? Because statements of the fo