News: 0178677600

  ARM Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Google Releases Pint-Size Gemma Open AI Model (arstechnica.com)

(Thursday August 14, 2025 @11:30PM (BeauHD) from the locally-sourced dept.)


An anonymous reader quotes a report from Ars Technica:

> Google has announced a tiny version of its Gemma open model designed to run on local devices. Google says the new Gemma 3 270M can be tuned in a snap and maintains robust performance despite its small footprint. [...] Running an AI model locally has numerous benefits, including enhanced privacy and lower latency. Gemma 3 270M was designed with these kinds of use cases in mind. In testing with a Pixel 9 Pro, the new Gemma was [1]able to run 25 conversations on the Tensor G4 chip and use just 0.75 percent of the device's battery. That makes it by far the most efficient Gemma model.

>

> Developers shouldn't expect the same performance level of a multi-billion-parameter model, but Gemma 3 270M has its uses. Google used the IFEval benchmark, which tests a model's ability to follow instructions, to show that its new model punches above its weight. Gemma 3 270M hits [2]a score of 51.2 percent in this test, which is higher than other lightweight models that have more parameters. The new Gemma falls predictably short of 1 billion-plus models like Llama 3.2, but it gets closer than you might think for having just a fraction of the parameters.

>

> Google [3]claims Gemma 3 270M is good at following instructions out of the box, but it expects developers to fine-tune the model for their specific use cases. Due to the small parameter count, that process is fast and low-cost, too. Google sees the new Gemma being used for tasks like text classification and data analysis, which it can accomplish quickly and without heavy computing requirements. You can download the new Gemma for free, and the model weights are available. There's no separate commercial licensing agreement, so developers can modify, publish, and deploy Gemma 3 270M derivatives in their tools.

You can download Gemma 3 270M from [4]Hugging Face and [5]Kaggle in both pre-trained and instruction-tuned versions.
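For scale, the battery figure quoted above works out to a tiny per-conversation cost. A quick back-of-envelope sketch, assuming (as the Ars summary presents it) that the 0.75 percent of battery covered all 25 test conversations:

```python
# Per-conversation battery cost implied by the quoted Pixel 9 Pro test:
# 25 conversations drew 0.75% of the device's battery on the Tensor G4.
battery_used_pct = 0.75
conversations = 25

pct_per_conversation = battery_used_pct / conversations
conversations_per_full_charge = conversations * 100 / battery_used_pct

print(f"{pct_per_conversation:.2f}% battery per conversation")
print(f"~{conversations_per_full_charge:.0f} conversations on a full charge")
# → 0.03% battery per conversation
# → ~3333 conversations on a full charge
```

In other words, if the reported numbers scale linearly (a rough assumption, ignoring screen and radio overhead), the model itself costs about 0.03 percent of battery per conversation.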



[1] https://arstechnica.com/google/2025/08/google-releases-pint-size-gemma-open-ai-model/

[2] https://cdn.arstechnica.net/wp-content/uploads/2025/08/Gemma3-270M_Chart01_RD3-V01.original-1536x864.jpg

[3] https://developers.googleblog.com/en/introducing-gemma-3-270m/

[4] http://ai.google.dev/gemma/docs/core/huggingface_text_full_finetune

[5] https://www.kaggle.com/models/google/gemma-3



25 conversations per 0.75% hardware utilisation (Score:3)

by VaccinesCauseAdults ( 7114361 )

Possibly the dumbest units ever (or at least since someone posted something in imperial units?)

Re: (Score:2)

by Entrope ( 68843 )

On a scale of 1 to stupid, that's at least 0.87 swimming pools per Rhode Island.

Re: (Score:2)

by VaccinesCauseAdults ( 7114361 )

It is Vatican City-level two popes per square kilometre stupid.

Pint Size (Score:2)

by fahrbot-bot ( 874524 )

Being a long-time reader of [1]Questionable Content [questionablecontent.net], I find that Google using the words [2]Pint Size [fandom.com] and AI together makes my ass twitch, and not in a good way. :-)

[1] https://questionablecontent.net/

[2] https://questionablecontent.fandom.com/wiki/Pintsize

... But if we laugh with derision, we will never understand. Human
intellectual capacity has not altered for thousands of years so far as
we can tell. If intelligent people invested intense energy in issues
that now seem foolish to us, then the failure lies in our understanding
of their world, not in their distorted perceptions. Even the standard
example of ancient nonsense -- the debate about angels on pinheads --
makes sense once you realize that theologians were not discussing
whether five or eighteen would fit, but whether a pin could house a
finite or an infinite number.
-- S. J. Gould, "Wide Hats and Narrow Minds"