
Google Releases VaultGemma, Its First Privacy-Preserving LLM

(Tuesday September 16, 2025 @11:21AM (BeauHD) from the first-of-its-kind dept.)


An anonymous reader quotes a report from Ars Technica:

> The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to 'memorize' any of that content. LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data -- if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.

>

> Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. No one has bothered to figure out the degree to which that alters the scaling laws of AI models until now. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data. By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which is a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). [1]The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.
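For context on the mechanism the quote describes: the usual way to add that calibrated noise is DP-SGD, where each example's gradient is clipped to a fixed norm and Gaussian noise scaled to that norm is added before the averaged update. The sketch below is purely illustrative (plain NumPy, made-up hyperparameters), not VaultGemma's actual training code; it just shows why a larger batch shrinks the noise-batch ratio the researchers are tuning.

    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
        # One differentially-private gradient step (illustrative only).
        rng = rng if rng is not None else np.random.default_rng(0)
        # Clip each example's gradient so no single example dominates the update.
        clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                   for g in per_example_grads]
        summed = np.sum(clipped, axis=0)
        # Noise is calibrated to the clipping norm; dividing by the batch size
        # means a bigger batch gives a smaller effective noise-batch ratio.
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
        return (summed + noise) / len(per_example_grads)

    # Toy usage: 8 fake per-example gradients of dimension 4.
    grads = [np.random.default_rng(i).normal(size=4) for i in range(8)]
    update = dp_sgd_step(grads)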

This work has [2]led to a new Google model called VaultGemma, the company's first open-weight model trained with differential privacy to minimize memorization risks. It's built on the older Gemma 2 foundation and sized at 1 billion parameters, and the company [3]says it performs comparably to non-private models of similar size.

It's available now from [4]Hugging Face and [5]Kaggle.
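For anyone who wants to try it, the Hugging Face listing in [4] implies the weights load through the standard transformers API. The snippet below is an untested sketch based on that assumption; the exact classes or required library version may differ for this particular release.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Model ID taken from link [4]; everything else is ordinary
    # Hugging Face boilerplate and may need adjusting.
    tokenizer = AutoTokenizer.from_pretrained("google/vaultgemma-1b")
    model = AutoModelForCausalLM.from_pretrained("google/vaultgemma-1b")

    inputs = tokenizer("Differential privacy is", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))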



[1] https://arxiv.org/abs/2501.18914

[2] https://arstechnica.com/ai/2025/09/google-releases-vaultgemma-its-first-privacy-preserving-llm/

[3] https://research.google/blog/vaultgemma-the-worlds-most-capable-differentially-private-llm/

[4] https://huggingface.co/google/vaultgemma-1b

[5] https://www.kaggle.com/models/google/vaultgemma



seems to be a distraction from the root issue? (Score:1)

by Anonymous Coward

"Couldn't you just stop training models on copyrighted or privacy-infringing data?"

Google: "lol no"

Every lie we tell incurs a debt to the truth. (Score:4, Interesting)

by Pseudonymous Powers ( 4097097 )

I bet VaultGemma safeguards your privacy the same way that RBMK reactors don't explode.

If I steal a car and paint it another color. (Score:5, Insightful)

by Fly Swatter ( 30498 )

.. I still stole the car. Same thing here.

Is this like dithering of images? Sure you changed the underlying pixels but the general image is still there - and last I checked that doesn't fix copyright violations.

It doesn't eliminate the problem. (Score:2)

by HalAtWork ( 926717 )

As it is written, it doesn't eliminate the problem. It only attempts to minimize it. That makes it useless for the intended "privacy preserving" goal:

> - less likely to 'memorize' any of that content

> - model trained with differential privacy to minimize memorization risks

Pronounce that Mr. Privacy Pants (Score:1)

by kurt_cordial ( 6208254 )

GPT6vg wasn't sexy enough.

I have a hard time believing Google wants (Score:2)

by sabbede ( 2678435 )

to protect anyone's privacy. User data is their bread and butter.

Doesn't mean it isn't true, just that it's hard to believe them about it.

Re: (Score:1)

by Anonymous Coward

I suspect the real motivation is to obscure where the training data came from rather than protect anyone's privacy. They're trying to remove the /liability/ of using questionable data without having to actually stop scraping it.

non-deterministic outputs mean you can't predict w (Score:1)

by LDiCesare ( 10493054 )

Am I the only one to cringe when I read "LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say"? Deterministic doesn't mean that you can predict the outcome. If the process is complex but not random, you always get the same result even though you can't predict it.
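(The commenter's distinction is easy to demonstrate in code: a process can be fully determined by its input, so identical inputs always give identical outputs, while still being impossible to predict without actually running it. A toy example, not tied to any particular LLM:)

    import hashlib

    # Deterministic: the same input always yields the same output...
    def opaque(prompt: str) -> str:
        return hashlib.sha256(prompt.encode()).hexdigest()[:12]

    # ...yet there is no practical way to predict that output in advance.
    print(opaque("hello"))
    print(opaque("hello"))  # identical every run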
