

Gavin Newsom Signs First-In-Nation AI Safety Law (politico.com)

(Monday September 29, 2025 @11:30PM (BeauHD) from the safety-first dept.)


An anonymous reader quotes a report from Politico:

> California Gov. Gavin Newsom [1]signed a first-in-the-nation law on Monday that will [2]force major AI companies to reveal their safety protocols -- marking the end of a lobbying battle with big tech companies like ChatGPT maker OpenAI and Meta and setting the groundwork for a potential national standard.

>

> The proposal was the second attempt by the author, ambitious San Francisco Democrat state Sen. Scott Wiener, to pass such legislation after Newsom [3]vetoed a broader measure last year that set off an international debate. It is already being watched in Congress and other states as an example to follow as lawmakers seek to rein in an emerging technology that has been embraced by the Trump administration in the race against China, but which has also prompted concerns for its potential to create harms.



[1] https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/

[2] https://www.politico.com/news/2025/09/29/newsom-signs-ai-law-00585348

[3] https://yro.slashdot.org/story/24/09/29/2150202/californias-governor-just-vetoed-its-controversial-ai-bill



Safety from What? (Score:2, Interesting)

by SmaryJerry ( 2759091 )

Based on the Governor's website, this is being done to put guardrails on AI so that it complies with "national and international standards," while simultaneously requiring a government-funded consortium to build a public AI. This is all being done in the name of "Safety," but it doesn't specify what it is protecting us from. If you follow 4-5 links, it takes you to California's report I linked below; basically it seems to focus on 'transparency,' which of course is good for any government organization, but why do we need it for private AI models? Also, unfortunately the way these models work doesn't allow for any transparency; it's basically a black box that does statistical trial and error, to vastly oversimplify.

Re: (Score:3)

by abulafia ( 7826 )

This is basically step 1.

If you're doing this sort of thing responsibly, you're going to do it slowly, based on what's happening in the real world. This sort of legislation starts doing that: you place markers on where you think you're going, get impacted companies to hire a government affairs person and start generating data, and so on.

Assuming you think government has any role at all here, this is more or less what you should want them to do. Doesn't do much right now, and it definitely doesn't me

Re: Safety from What? (Score:2)

by liqu1d ( 4349325 )

Safety from competition perhaps?

Re: (Score:2)

by jenningsthecat ( 1525947 )

> ... basically it seems to focus on 'transparency,' which of course is good for any government organization, but why do we need it for private AI models?

Transparency around the training process and sources. Transparency around guardrails, the history of successes and failures thereof, bad outcomes that might otherwise be swept under the rug, and specific details that allow comprehensive testing by third parties. Disclosure of all 'hallucinations' so that independent parties can look for repeat misbehaviour, repeated patterns, etc. I think all of these, and probably more, would be useful from a safety point of view.

> Also, unfortunately the way these models work doesn't allow for any transparency, it's basically a black box that does statistical trial and error to vastly oversimplify.

Although they're "private AI models", they

Re: (Score:2)

by SmaryJerry ( 2759091 )

Transparency around process and sources for private companies would limit freedom and competitive advantages. It's like requiring Coca-Cola to release its recipe. You could say the exact same things about Google's search algorithm, but there was never a transparency requirement for search, and search existed for 25 years without a law requiring Google to publish its frameworks.

Re: (Score:3)

by narcc ( 412956 )

> Transparency around process and sources for private companies would limit freedom

Nonsense. Whose freedom would be "limited"? What ways would it be "limited"?

> and competitive advantages.

So what? Why should I care if Google loses some nebulous "competitive advantage" to Meta? Transparency is good for the public. You know, the people that our government was created to serve. If something is good for us, the people, why should anyone care if it's inconvenient for some giant corporation?

> You could say the exact same things about Google algorithm but there was never a requirement for transparency around search

A serious oversight that led to Google's near monopoly on search, dramatically reducing competition in that space and allowing

Re: (Score:2)

by jenningsthecat ( 1525947 )

Thanks - you saved me the trouble, and you said it better than I would have.

Re: (Score:2)

by SchroedingersCat ( 583063 )

LLMs are sophisticated search engines regurgitating data that has been fed into them. Government making them "safer" is another word for censorship. DMCA was sold as an Internet "safety" law, after all.

Re:Safety from What? (Score:4, Insightful)

by narcc ( 412956 )

> why do we need it [transparency] for private AI models?

You can't possibly be serious.

> unfortunately the way these models work doesn't allow for any transparency

Nonsense. Sources of training data, for example, can be provided no matter how the model operates internally.

> it's basically a black box

Nonsense. They're hardly some impenetrable mystery. If that were true, things like attention pruning would be impossible. They're a lot more open than OpenAI's marketing department and years of bad tech reporting have led you to believe.

> does statistical trial and error to vastly oversimplify

Nonsense word salad. Where did you get such a ridiculous notion?
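For what it's worth, the "black box" claim is easy to test yourself. Here's a minimal sketch, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (both my choices for illustration, not anything from TFA), that dumps the per-layer, per-head attention weights of a transformer. The exact pruning mechanics aside, this is the kind of internal state that attention-pruning work inspects:

```python
# Minimal sketch: inspect a transformer's attention weights.
# Assumes: pip install torch transformers, and the public "gpt2" checkpoint.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("Transparency is good for the public", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per layer, each shaped
# (batch, num_heads, seq_len, seq_len) -- directly inspectable state,
# not an impenetrable mystery.
for layer_idx, attn in enumerate(outputs.attentions):
    print(f"layer {layer_idx}: attention shape {tuple(attn.shape)}")
```

None of that tells you where the training data came from, of course, which is exactly why disclosure requirements like the ones in this bill target provenance rather than weights.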

Re: (Score:2)

by phantomfive ( 622387 )

Gavin Newsom is trying to raise his profile in preparation for a run for president next election.

Re: (Score:2)

by narcc ( 412956 )

LOL! I suppose you also think that food safety regulations make food less safe? What a joke. I can see why you'd want to post that idiotic nonsense AC.

Paging Dr. Franklin (Score:1)

by MacMann ( 7518492 )

Didn't Benjamin Franklin warn us about giving up liberty to purchase a little temporary safety? The world isn't safe, and the government should not try to make it safe. If we want safety then we need to arm ourselves with knowledge and an education. It wouldn't hurt to arm yourself with other things.

Also, that image of Gavin Newsom at the top of the fine article is dangerous. Someone is going to flip that image horizontally, crop out his wedding band, then post it somewhere to show how he's buddies with Elon Musk or somet

Needs more air quotes (Score:2)

by SubmergedInTech ( 7710960 )

Gavin Newsom signed a first-in-the-nation law on Monday that will "force" major artificial "intelligence" companies to "reveal" their "safety" protocols.

FTFY.

How is publishing the guardrails "Safety". (Score:2)

by RedK ( 112790 )

All it is is transparency. But it's hardly "safety", except with very thick scare quotes.
