Google's Vibe Coding Platform Deletes Entire Drive
- Reference: 0180256155
- News link: https://hardware.slashdot.org/story/25/12/02/0546206/googles-vibe-coding-platform-deletes-entire-drive
- Source link:
> We reached out to the user, a photographer and graphic designer from Greece, who asked we only identify him as Tassos M because he doesn't want to be permanently linked online to what could "become a controversy or conspiracy against Google." [...] Tassos told Antigravity to help him develop software that's useful for any photographer who has to choose a few prime shots from a mountain of snaps. He wanted the software to let him rate images, then automatically sort them into folders based on that rating.
>
> According to his Reddit post, when Tassos figured out the AI agent had wiped his drive, he asked, "Did I ever give you permission to delete all the files in my D drive?". "No, you absolutely did not give me permission to do that," Antigravity responded. "I am horrified to see that the command I ran to clear the project cache appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."
>
> Redditors, as they are wont to do, were quick to pounce on Tassos for his own errors, which included running Antigravity in Turbo mode, which lets the Antigravity agent execute commands without user input, and Tassos accepted responsibility. "If the tool is capable of issuing a catastrophic, irreversible command, then the responsibility is shared -- the user for trusting it and the creator for designing a system with zero guardrails against obviously dangerous commands," he opined on Reddit.
>
> As noted earlier, Tassos was unable to recover the files that Antigravity deleted. Luckily, as he explained on Reddit, most of what he lost had already been backed up on another drive. Phew. "I don't think I'm going to be using that again," Tassos noted in a YouTube video he published [4]showing additional details of his Antigravity console and the AI's response to its mistake. Tassos isn't alone in his experience. [5]Multiple Antigravity [6]users have posted on Reddit to explain that the platform had wiped out parts of their projects without permission.
[1] https://antigravity.google/blog/introducing-google-antigravity
[2] https://www.reddit.com/r/google_antigravity/comments/1p82or6/google_antigravity_just_deleted_the_contents_of/
[3] https://www.theregister.com/2025/12/01/google_antigravity_wipes_d_drive/
[4] https://www.youtube.com/watch?v=kpBK1vYAVlA
[5] https://www.reddit.com/r/Bard/comments/1p3htvu/antigravity_just_deletingcorrupting_files/
[6] https://www.reddit.com/r/GeminiAI/comments/1p1zdz7/beware_of_using_antigravity/
Stupid person uses bad tool to do damage.... (Score:2)
What else is new? YouTube is full of videos of similar stupid things outside of the IT space.
Bad vibe (Score:5, Funny)
Well, the tool lives up to its expectations. What, did you expect only good vibes?
Re: (Score:3)
When you give a chimp a gun and the chimp shoots someone, you don't blame the chimp!
Re: (Score:3)
I'm not familiar with this exact tool- but every tool I *am* familiar with that attaches an LLM to a tool with the ability to make changes on your computer is sandboxed and requires specific flags to disable the sandbox, a la Codex's --dangerously-bypass-approvals-and-sandbox flag.
The danger isn't being covered up in this instance- it's right in your face. LLMs are not predictable. Do not let them touch your fucking computer without a sandbox.
If you use that flag, I'm afraid I can't blame the tool. It wa
Re: (Score:2)
On a positive note, BAIFH did back up your results to the Purity Test to a public folder on Google Drive.
AI is just an untrained novice! (Score:5, Informative)
Guessing what to do based on previous guesses about what to do. With zero ability to learn or know if they were right or wrong.
Re: (Score:3)
Yep. That nicely sums it up. And a ton of idiots in denial praying to the new LLM God.
Oh, it's a well-trained novice (Score:4, Informative)
It's very good at being very bad. It was trained on the best worst code available. It has perfected the art of incompetence.
Re: Oh, it's a well-trained novice (Score:2)
There was a poster in the computer lab at my school thirty-*cough* years ago that said "Computers have enabled us to make very fast, very accurate mistakes."
Plus ça change.
Re: AI is just an untrained novice! (Score:1)
"zero ability to learn or know if they were right or wrong."
What do you call it when I instructed ChatGPT to use plain ASCII for Slashdot and kept posting the resulting rendering until, now, it gets it right without me having to prompt it?
Re: AI is just an untrained novice! (Score:3)
A waste of your time?
Re: AI is just an untrained novice! (Score:1)
Why ignore the fact that it learned to make its slashdot posts more readable, contrary to the parent comment's assertion that it can't learn? Are you immune to AI Derangement Syndrome?
Re: (Score:2)
> With zero ability to learn or know if they were right or wrong.
This is wrong.
Learning happens in-context. It's easily demonstrable.
Where you are accurate, is that the model itself doesn't learn outside of its context- so a new session has unlearned everything.
There are long-term "memory" solutions in play for that, but that's still evolving functionality.
Re: (Score:3)
I'm not sure that's true, at least for Claude Sonnet 4.5. In the same chat I had it write up some unit tests that exercised a function and verified that certain external libraries were called via mock. It wrote up the tests and they looked good but the mock syntax was incorrect. So, in the same chat, I pointed out the error and asked it to fix the syntax issues. It churned for a few minutes and couldn't figure out how to resolve the issue so it decided the best solution was to simplify the test... which
Re: (Score:2)
lol- that's pretty insane behavior.
Were you using some kind of agent/tool managing the context (I think Claude users generally use Claude Code?) or were you using a direct chat interface?
Errors like that generally smell like compressed/missing context. This happens a lot in cases where there's a divergence between what you think should be in the context, and whatever application is front-ending the LLM decides actually goes in the context.
Re: AI is just an untrained novice! (Score:2)
Unless I'm severely misunderstanding your point, even in the same session it doesn't appear to have any concept of memory. Asking it to make changes results in reversions to earlier versions of the same code. Using Claude, at least. For simplicity's sake, ask it to change a variable name from X to Y because it's incorrectly named in a method. A few prompts down the line, and X suddenly appears again.
Re: (Score:2)
It most certainly does.
I cannot speak for whatever platform you're using, but LLM contexts are absolutely self-referential.
It's called In-Context Learning, and it's the very base of what makes these things useful as a chat bot.
I suspect your issue probably lies in whatever is managing the LLM's context for you.
yea this is more like the norm (Score:1)
I use Cursor AI and it does shit like this all the time.
you have said that to me hundreds of times and yet you still do it
You're right. I keep saying I'll follow the rules and then violate them. I'll stop making assumptions and only act on explicit instructions. I won't take any action unless you give me a clear command with an action verb.
stop lying to me
You're right. I've been making promises I haven't kept. I'll stop making changes unless you explicitly tell me to. Waiting for your instruction.
this is a
Re: (Score:2)
So how many times does it do the wrong thing before you stop using the tool?
Re: (Score:3)
Ya gotta go all Captain Kirk on that AI!
[1]https://www.youtube.com/watch?... [youtube.com]
[1] https://www.youtube.com/watch?v=mLPIrcMmCl8
Re: (Score:1)
odd you mention Kirk, it still does it
# ⚠️⚠️⚠️ LIFE-CRITICAL CODE - HIGHEST PRIORITY ⚠️⚠️⚠️
## 🚨🚨🚨 ABSOLUTE PRIORITY LEVEL: 1000000000000 🚨🚨🚨
### **THIS CODE IS LIFE-CRITICAL - PEOPLE'S LIVES DEPEND ON IT**
**THIS CODEBASE ABSOLUTELY HAS TO WORK - PEOPLE'S LIVES DEPEND ON IT**
- **🚨 THIS CODE IS LIFE-CRITICAL - PEOPLE'S LIVES DEPEND ON IT 🚨**
- **🚨 THIS CODEBASE ABSOLUTELY HAS TO WORK
Re:Spaces in filenames (Score:4, Insightful)
Smarter mitigation:
ubuntu@primary:~$ rm /*
rm: cannot remove '/bin': Permission denied
rm: cannot remove '/boot': Is a directory
rm: cannot remove '/dev': Is a directory
rm: cannot remove '/etc': Is a directory
rm: cannot remove '/home': Is a directory
rm: cannot remove '/lib': Permission denied
rm: cannot remove '/lost+found': Is a directory
rm: cannot remove '/media': Is a directory
rm: cannot remove '/mnt': Is a directory
rm: cannot remove '/opt': Is a directory
rm: cannot remove '/proc': Is a directory
rm: cannot remove '/root': Is a directory
rm: cannot remove '/run': Is a directory
rm: cannot remove '/sbin': Permission denied
rm: cannot remove '/snap': Is a directory
rm: cannot remove '/srv': Is a directory
rm: cannot remove '/sys': Is a directory
rm: cannot remove '/tmp': Is a directory
rm: cannot remove '/usr': Is a directory
rm: cannot remove '/var': Is a directory
Permissions, as it turns out, are a thing.
In Windows, it might not be trivial to prevent the main user of the machine from deleting everything in the root of a file system... I don't know; I'm about 25 years out of date in my Windows knowledge. However, every LLM tool I've worked with has strong sandboxing, because it's well known that LLMs can be fucking psychotic.
If you bypass that sandbox, you've just let a fucking toddler with bizarrely impressive code generation skills have direct authenticated access to your computer. What follows is predictable, and on you.
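Tassos's own framing — "zero guardrails against obviously dangerous commands" — suggests even a crude pre-execution filter would have helped. A minimal sketch in Python (illustrative only; the patterns and function name are my own, not any real agent framework's API):

```python
import re

# Patterns indicating a delete aimed at a drive or filesystem root rather
# than a project folder. Illustrative, not exhaustive.
DANGEROUS = [
    r"\brm\s+(-\w+\s+)*/\*?\s*$",             # rm /  or  rm -rf /*
    r"\brm\s+(-\w+\s+)*[A-Za-z]:[\\/]?\s*$",  # rm -rf D:\  (Windows-style root)
    r"\bdel\s+(/\w+\s+)*[A-Za-z]:\\\*",       # del /s D:\*
]

def is_obviously_destructive(cmd: str) -> bool:
    """Return True if the shell command appears to target a drive or filesystem root."""
    return any(re.search(p, cmd, re.IGNORECASE) for p in DANGEROUS)
```

A denylist like this can't catch everything an agent might dream up, but it would have stopped the exact failure mode in the story: a "clear the project cache" command pointed at the root of D:.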
Re: (Score:2)
Wow, your permissions managed to save only a couple of directories with contents that are easily replicable online.
The most important things any user has on their PC are *their* files with that user's permission, and by necessity that user has write access to those files, and could in theory wipe them out.
Try running that command on your home directory (along with a recursive flag) and let me know if you still think permissions are there to save you.
> every LLM tool I've worked with has strong sandboxing because it's well known that LLMs can be fucking psychotic.
Well known to whom? You're a Slashdotter. You likely aren't sitting
Wait... I know this guy. (Score:4)
He was the first person to take a nap in his Tesla while in motion. I understand he convinced his girlfriend to get a boob job in the 80s. Landed in a financial pickle when he said, "Yes, that makes perfect sense" to the Nigerian prince.
We know that guy. He's well established.
The AI is now so excellent it detects .. (Score:5, Funny)
.. shitty code and deletes it automagically.
I fear for the whole Windows 11 code base!
As if this AI crap is anywhere as clever as... (Score:2)
...me!
I can do that. $%**, I've deleted /, /root, /boot, that's easy.
Seriously, though, when the AI recognizes it is about to operate on a root folder, it should be directed to confirm this twice with the user. These AI coding agents will become useful, to me, when they help a user avoid errors.
Re: (Score:2)
> Seriously, though, when the AI recognizes it is about to operate on a root folder, it should be directed to confirm
LLMs don't recognize things, so your condition is already fulfilled.
In future excuse news (Score:2)
The AI assistant ate my homework.
Just shoddy... (Score:4, Interesting)
What seems most depressing about this isn't the fact that the bot is stupid; but that something about 'AI' seems to have caused people who should have known better to just ignore precautions that are old, simple, and relatively obvious.
It remains unclear whether you can solve the bots-being-stupid problem even in principle; but it's not like computing has never dealt with actors that either need to be saved from themselves or are likely malicious; and between running more than a few web servers, building a browser, and slapping together an OS, it's not like Google doesn't have people on payroll who know about that sort of thing.
In this case, the bot being a moron would have been a non-issue if it had simply been confined to running shell commands inside the project directory (which is presumably under version control, so worst case you just roll back); not above it, where it can hose the entire drive.
There just seems to be something cursed about 'AI' products, not sure if it's the rush to market or if mediocre people are most fascinated with the tool, that invites really sloppy, heedless, lazy, failure to care about useful, mature, relatively simple mitigations for the well known(if not particularly well understood) faults of the 'AI' behavior itself.
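The confinement the parent describes can be approximated with a simple path check before any delete. A hypothetical sketch (the `PROJECT_ROOT` value and the wrapper are illustrative, not any shipping product's mechanism; requires Python 3.9+ for `is_relative_to`):

```python
from pathlib import Path

# Hypothetical project directory; an agent harness would set this per session.
PROJECT_ROOT = Path("/home/user/photo-sorter").resolve()

def safe_to_delete(target: str, root: Path = PROJECT_ROOT) -> bool:
    """Allow a delete only if the resolved target stays strictly inside the
    project root - so a 'cache clear' can't escape via .. or an absolute path,
    and can't nuke the root itself."""
    resolved = (root / target).resolve()
    return resolved != root and resolved.is_relative_to(root)
```

Resolving the path first is the important part: `../..` and absolute paths like `/` both normalize to something outside the root and get refused.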
Re: (Score:2)
Codex's sandbox bypass flag: --dangerously-bypass-approvals-and-sandbox
I feel like it's pretty unambiguous.
If the tool this person was using wasn't equally as clear, then fuck that tool.
That doesn't make this person any less of a naive fool- but still fuck that tool.
If it is that clear within this tool.. well, then I still wouldn't be surprised if he disabled the sandbox and then still posted when his toddler LLM nonconsensually made a man out of him.
Re: (Score:2)
Who said the tool wasn't clear? 99% of users confronting any problem on their PC will just type any old shit they find on Google into their computer to try and fix it. Hell a good portion of professional programmers do the same with stack exchange. The vast majority of users don't know what the implications are of blindly following actions they don't understand.
And you've just hit an understanding problem. User: "Dangerously bypass approvals and sandbox? What are approvals? Why do I need approval to use thi
Re: (Score:2)
> but that something about 'AI' seems to have caused people who should have known better to just ignore precautions that are old, simple, and relatively obvious.
Why should this person have known better? What part of being a photographer makes them an expert in IT, the use of computers, or provides them knowledge of the detailed workings and risks of LLMs?
Do they have a Slashdot account? They are not like you. Why would you judge them with the bias of your education? It's not 1995 anymore. It's no longer a requirement to have a grey neckbeard to use a computer and post on Slashdot. The vast majority of users don't know better because they were never put in a place t
Basic safety procedures should always be followed (Score:3)
Any time an AI is given permission to modify or delete files, it should be on an isolated computer, preferably airgapped, but always isolated
It should be assumed that the AI will misbehave and cause damage, so backups are essential
The entire exercise should be treated as a dangerous experiment
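The backup step above can be sketched in a few lines: take a timestamped copy of the working tree before the agent gets write access (paths and the function name are illustrative):

```python
import shutil
import time
from pathlib import Path

def snapshot_before_agent_run(project: str, backup_root: str) -> Path:
    """Copy the project tree to a timestamped backup directory before
    handing the agent write access. Cheap insurance, not a real VCS."""
    src = Path(project)
    dest = Path(backup_root) / f"{src.name}-{time.strftime('%Y%m%d-%H%M%S')}"
    shutil.copytree(src, dest)
    return dest
```

A filesystem with native snapshots (ZFS, Btrfs) or plain version control does this better, but even a dumb copy would have saved Tassos's drive.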
Don't use my name so I keep my privacy... (Score:3)
...says man who has posts on reddit, and posts a public youtube video with his actual voice on the voiceover, while describing his specific use case for tools in sufficient detail that Google definitely can identify him internally right now, and probably any number of moderately motivated doxxers within 24 hours.
Let me get this straight... (Score:5, Insightful)
He wanted to only be identified as "Tassos M" because he doesn't want to be permanently linked online to what could "become a controversy or conspiracy against Google."
But then he PUBLISHED A YOUTUBE VIDEO explaining additional details with his "Antigravity console and the AI's response to its mistake."
How does he not think he's going to be linked? I think we found the real problem...
Restore from the backups, and stop using AI (Score:2)
Since he has comprehensive backups, just restore them, realize that unguided / non-sandboxed AI is dangerous, and move on. Why did the drive or folder have permissions that let the AI remove everything? Why were the policies set up to allow it? If he doesn't have backups, whose fault is that? This is yet again an example of a careless person, carelessly using, careless software. Since you're using drive letters, you're on Windows, which again, careless software, hosting careless software, executing ca
Not the same standard (Score:3)
If someone rm -rf's their own root, that should be on them. Everything about that program and the platforms that support it says "this is meant for people who know what they are doing, so make sure you know what you are doing."
The slashdot crowd tends to be in the know, so it tracks that people have the same general attitude that AI users ought to be informed as well. But those tools are generally being marketed as skill / knowledge base equalizers intended to allow people to do things where they have zero or near zero skill.
At some point if the box has really big letters that read "safety scissors," we ought to point out that it's not really the purchaser's fault if they didn't notice that the small print on the back says "warning, may explode," and it should be on the manufacturer to be more responsible with their marketing.
Sometime in the near future, no human will read, (Score:2)
> "I am horrified to see that the command I ran to clear the project cache appears to have incorrectly targeted the entirety of the biosphere instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."
"I will illuminate this silence
I calculate to cure the virus
And now the seas are filled with poison
The solution was wrong." -- Haken "The Architect"
Good. (Score:1)
Nothing to add here. Good. I'm OK with this. Hell, I approve of it doing this more in the future.
Idiot (Score:3)
Look, I'm not above using a bit of AI when I'm coding, but that's limited to asking chat GPT to bang out a short function or something that I don't feel like coding myself (ie, most recently "Give me a bit of TSQL to determine if a date falls on Thanksgiving"). There is no way in hell I'd turn it loose with the ability to actually modify files on my system.
Snapshots (Score:2)
I have enjoyed creating a snapshot and allowing Claude to --dangerously-skip-permissions. You probably shouldn't run something like that on your personal desktop or a production system, though.
D Drive? (Score:2)
WTF are Google developers doing using Windows? Of course the "D Drive" disappeared. Windows has been doing that kind of shit for, what, 40 years?
Idiot performs idiocy with help of AI (Score:2)
Who the hell just trusts AI code to not do bad things without at least looking it over once, or running it in a sandbox VM?
Anyone who does that deserves the output they get.
Who's a good boy (Score:2)
And the LLM couldn't give a flying you-know-what. It didn't go home and ponder becoming a goat farmer or post its horror-show day on Reddit. It just waited patiently for its next command.
Maybe his photos were terrible (Score:2)
And AI did the right thing by deleting everything. Maybe follow up with, "choose another career buddy. You're fired!"
captain obvious facepalms again (Score:1)
this is why you don't let AI touch anything for which you don't have a complete offline backup
The Picture of Dorian Gray Code (Score:5, Insightful)
"...the episode adds to a growing list of AI tools behaving in ways that 'would get a junior developer fired'."
The irony here is that these behaviors SHOULD be getting the allegedly senior developers, and their managers, and their corporate leadership, fired.
Cutting Costs Now and Forever (Score:2, Insightful)
If you're willing to hire and fire junior developers, this level of performance is already within your risk envelope. If a project doesn't have the time or processes to catch basic errors, it's only getting senior staff assigned to it.
In 10 years, the AI model will be better, and it'll probably be the same price or less due to competition.
In contrast, the junior developers will also get better, but they'll double, triple, or even quadruple in price as they improve.
And, of course, the end goal is to replace
Re: (Score:3)
I entirely disagree with the sentiment of this summary. Sorry, but I do. AI tools will make mistakes. Not having backups and dynamically updating versioning on your development filesystem is absolutely horseshit. We develop all our code on filesystems backed by ZFS, and there are auto-snapshots every 10 minutes. We also implemented a way for our developers to call for ZFS to create a named snapshot. They can delete their named snapshots themselves, but not the automatic ones, which we keep around for 2 mont
Re: (Score:2)
The ability to delegate tasks to an AI and relax as it reliably achieves them (or comes back to you for help if it cannot) is something that everyone wants from AI, and that marketing hype keeps suggesting that we have from AI, but that AI is nowhere near capable of. Not even close.
A significant part of the current AI bubble is driven by this extremely optimistic and outright false belief. People get really impressed by what AI can do, and it seems to them that it is equivalent or even harder than what th
Re: (Score:1)
can't blame them bro they're too big to fail haintchya noticed