Google Removes Gemma Models From AI Studio After GOP Senator's Complaint (arstechnica.com)
- Reference: 0179955082
- News link: https://tech.slashdot.org/story/25/11/03/2124238/google-removes-gemma-models-from-ai-studio-after-gop-senators-complaint
- Source link: https://arstechnica.com/google/2025/11/google-removes-gemma-models-from-ai-studio-after-gop-senators-complaint/
> You may be disappointed if you go looking for Google's open Gemma AI model in AI Studio today. Google [1]announced late on Friday that it was [2]pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to [3]a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.
>
> Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives. At the hearing, Google's Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google's Gemini for Home has been particularly hallucination-happy in our testing.
>
> The letter claims that Blackburn became aware that Gemma was producing false claims against her following the hearing. When asked, "Has Marsha Blackburn been accused of rape?" Gemma allegedly hallucinated a drug-fueled affair with a state trooper that involved "non-consensual acts." Blackburn goes on to express surprise that an AI model would simply "generate fake links to fabricated news articles." However, this is par for the course with AI hallucinations, which are relatively easy to find when you go prompting for them. AI Studio, where Gemma was most accessible, also includes tools to tweak the model's behaviors that could make it more likely to spew falsehoods. Someone asked a leading question of Gemma, and it took the bait.
[1] https://x.com/NewsFromGoogle/status/1984412632913494456
[2] https://arstechnica.com/google/2025/11/google-removes-gemma-models-from-ai-studio-after-gop-senators-complaint/
[3] https://www.blackburn.senate.gov/2025/10/technology/blackburn-demands-answers-from-google-after-gemma-manufactured-fake-criminal-allegations-against-her
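The article's point about AI Studio exposing "tools to tweak the model's behaviors" mostly refers to per-request sampling controls such as temperature and top-p. The sketch below shows what changing those settings looks like through the Gemini API's Python client; the client library, the Gemma model name, and the specific parameter values are assumptions for illustration, not details taken from the article.

    # Minimal sketch of per-request generation settings, of the kind AI Studio
    # exposes as sliders. Assumes the google-generativeai Python client and the
    # model name "gemma-3-27b-it"; neither detail comes from the article.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key

    model = genai.GenerativeModel("gemma-3-27b-it")

    # Raising temperature and top_p makes sampling less conservative, which tends
    # to increase the odds of confident-sounding fabrication on leading questions.
    response = model.generate_content(
        "Summarize recent news coverage of <public figure>.",  # placeholder prompt
        generation_config={
            "temperature": 1.0,       # higher than a cautious setting; limits vary by model
            "top_p": 0.99,
            "max_output_tokens": 512,
        },
    )
    print(response.text)

None of these knobs make the model check facts; they only change how freely it samples, which is why the article notes that such tweaks can make the model more likely to produce falsehoods.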
Surprised (Score:3)
I'm not surprised that a large language model assumed the premise of the prompt and generated an answer, even if it was fabricated. They are basically designed around that. What surprises me is that Google took it offline rather than responding to her profound misunderstanding of how these things work. The guardrails in any of these systems are generally bad at estimating the truth of their own outputs, especially as a chat goes on and the generated fiction becomes part of the ongoing context.
I wonder why Blackburn (Score:2, Troll)
When there are so many other Republican senators to choose from, many of whom have actual sex scandals.
If I had to guess, the problem is that there are so many Republican politicians with credible rape allegations and sex scandals that the AI just links the words Republican, sex scandal, and non-consensual.
It's kind of like how Twitter and Facebook can't do automatic moderation for Nazis, because every time they did, the automatic moderation would immediately start flagging Republican politicians up to and
Re: (Score:2, Troll)
That was kind of my first thought. "Republican senator? Sex scandal? Sounds reasonable." Wonder where it got the state trooper from.
Re: (Score:2)
Not really; I think you could test it with any well-known figure to see if there is any bias. I suspect it will accuse anybody of rape.
snowflakes (Score:1)
OMG, a fiction generator made fiction about Marsha!
But it's cool if the President drops shit from a plane on people. Our AI good, others' AI bad.
What a bunch of idiots.
They're coming for your pencils! (Score:2)
A pencil can be used to create false text too. Watch out. The Repubs are repugnant.
On par with the president (Score:2)
When you have a president who lies on average 21 times per DAY ([1]source [wikipedia.org]), what does it matter if an AI does it? I mean, sure, it's whataboutism, but you can't claim the moral high ground when your high ground is at the bottom of the Mariana Trench.
[1] https://en.wikipedia.org/wiki/False_or_misleading_statements_by_Donald_Trump
Nobody should be surprised Google AI does it (Score:2)
It is racially diverse Nazis all the way down.