

Group Pushing Age Verification Requirements For AI Sneakily Backed By OpenAI

(Thursday April 02, 2026 @11:00AM (BeauHD) from the would-you-look-at-that dept.)


An anonymous reader quotes a report from Gizmodo:

> OpenAI hasn't been shy about spending money lobbying for favorable laws and regulations. But when it comes to its involvement with child safety advocacy groups, the company has apparently decided it's best to stay in the shadows -- even if it means hiding from the people actually pushing for policy changes. According to [1]a report from the San Francisco Standard, a number of people involved in the California-based Parents and Kids Safe AI Coalition were blindsided to learn [2]their efforts were secretly being funded by OpenAI. [3]Per the Standard, the Parents and Kids Safe AI Coalition was a group formed to push the [4]Parents and Kids Safe AI Act, a piece of California legislation proposed earlier this year that would require AI firms to implement age verification and additional safeguards for users under the age of 18. That bill was backed by OpenAI in partnership with Common Sense Media, which proposed the legislation as [5]a compromise after the two groups had pushed dueling ballot initiatives last year.

>

> But when the coalition started to reach out to child safety groups and other advocacy organizations to try to get them to lend support to the bill, OpenAI was apparently conveniently left off the messaging. The AI giant was also left out of the marketing on the coalition's website, according to the Standard. That reportedly led to a number of groups and individuals lending their support to the Parents and Kids Safe AI Coalition without realizing that they were aligning themselves with OpenAI. As it turns out, OpenAI isn't just one of the members of the coalition; it is the group's biggest funder. In fact, the Standard characterized the Parents and Kids Safe AI Coalition as being "entirely funded" by OpenAI. While it's not clear exactly how much the company has funneled to this particular group, a Wall Street Journal report from January said OpenAI pledged $10 million to push the Parents and Kids Safe AI Act.

Gizmodo notes that OpenAI's backing of the Parents and Kids Safe AI Act "could be self-serving for CEO Sam Altman," who just so happens to head a company called [6]World that provides age verification services.



[1] https://sfstandard.com/2026/04/01/openai-ai-kids-safety-coalition/

[2] https://gizmodo.com/group-pushing-age-verification-requirements-for-ai-turns-out-to-be-sneakily-backed-by-openai-2000741069

[3] https://sfstandard.com/2026/04/01/openai-ai-kids-safety-coalition/

[4] https://www.commonsensemedia.org/press-releases/common-sense-media-openai-join-forces-on-strongest-youth-ai-safety-measure-in-us

[5] https://www.wsj.com/tech/ai/openai-reaches-truce-with-advocacy-group-over-dueling-child-safety-measures-59926dc4

[6] https://world.org/



Could Be (Score:2)

by SlashbotAgent ( 6477336 )

> "could be self-serving for CEO Sam Altman," who just so happens to head a company called World that provides age verification services.

Could be. Hmmm. Could be.

Re: Could Be (Score:2)

by ArmoredDragon ( 3450605 )

He's just one of many. Forcibly removing privacy from the internet, one service at a time.

Fuck collectivism.

Re:Could Be (Score:4)

by greytree ( 7124971 )

The man who took the open-source, not-for-profit company OpenAI and made it a closed-source, for-profit company could be a dirty, money-grubbing cunt.

Could be.

human vs slop (Score:5, Interesting)

by ZiggyZiggyZig ( 5490070 )

I've read several times, here and elsewhere online, that the global push for ID verification online is meant to let AI firms differentiate between slop and content made by real humans. But I don't understand: what will prevent ID-authenticated humans from posting slop, just under their own names? To me this argument is kind of bogus, although I understand why there is a global push to end online anonymity -- more likely because of the new rise of fascism and the need to control the masses...

Re: (Score:2)

by CEC-P ( 10248912 )

The threat that AI companies will shut down every account you own and not let you shop anywhere or search for anything prevents you from doing anything they don't want you to.

Re:human vs slop (Score:5, Insightful)

by alexgieg ( 948359 )

The main pusher has been Meta. They want age verification everywhere because it (mostly) allows distinguishing real humans from bots, including AI bots. From what I've read (no idea whether this is accurate), they want that because of ads. Bots don't generally buy products, so showing them ads reduces click-through metrics, and thus ad revenue.

AI companies I don't know. For Altman, World might be a driving factor, but I imagine a more important factor is regulatory capture. The more roadblocks to competition billion- and trillion-dollar incumbent companies manage to add to their markets, the less competition from new entrants unable to afford compliance.

Re: (Score:2)

by OngelooflijkHaribo ( 7706194 )

Truth be told, one of my issues with this new “internet left” is that when I grew up, the “left” was counter-establishment: it sought to offend, while those who were offended, and the censors, were the “right”. That role seems to have flipped now in many cases, and the “left” has become the establishment.

I kind of feel like Elon Musk's transition from left to right was almost purely about this issue. He wanted to offend the establishment. I feel he

Re: human vs slop (Score:2)

by ThurstonMoore ( 605470 )

My experience is the same.

Re: (Score:1)

by SumDog ( 466607 )

Facebook is tightly tied to the government: it launched the day after DARPA shut down LifeLog and was originally funded by Peter Thiel. It's always been intended as a global surveillance system. OpenAI also has ties to the US government and to many of the same Peter Thiel-backed entities.

OpenAI benefits from a global control grid. You know the surveillance system China has, with companies providing individual credit scores for each person? The US and EU governments want that, but automated and on steroids. These people ar

Age Verification Requirements For AI (Score:5, Funny)

by rossdee ( 243626 )

Yes, an AI shouldn't be allowed on the Internet until it's 18 years old.

Two changes (Score:5, Funny)

by Hentes ( 2461350 )

I could accept Worldcoin-based authentication with two minor changes: instead of an iris scan, it should use the more modern, Ig Nobel-winning [1]rectal print technology [slashdot.org], and instead of a creepy orb, Sam Altman would have to personally sketch the prints with a broken pencil.

[1] https://tech.slashdot.org/story/21/09/24/2128227/are-you-ready-to-share-your-analprint-with-big-tech

Altman is the Soros of tech (Score:1)

by shm ( 235766 )

His grimy little hands are everywhere.

Re: (Score:2)

by sound+vision ( 884283 )

I've been told he wants to turn me gay and Jewish, but that scares me a lot less than building a panopticon and turning the eyepiece over to Trumpistas.

Re: (Score:3)

by alexgieg ( 948359 )

Soros hasn't been the Soros of tech, or anything, for a long time. He's one billionaire doing advocacy and lobbying for liberal causes, while all the others, individually and put together, are nowadays doing advocacy and lobbying for conservative causes. If anything, he's currently the lone underdog fighting an uphill battle against impossible odds.

Liability (Score:5, Interesting)

by Dan East ( 318230 )

It absolves them of liability. If there is a law saying they have to validate age (even if it is ineffective and easily worked around by minors), and they are doing whatever silly thing they need to do to be compliant, then they have shielded themselves from liability.

By being involved in the process, they can steer things toward something easy and affordable to implement on their end. They can make it work the way they want to (scan an ID, have AI look at your face, a DNA test, measuring your height -- whichever method they specifically want to use is the reason they are funding this and pushing for it).

Re:Liability (Score:4, Interesting)

by DarkOx ( 621550 )

All of that is true, but I think it is far more about barriers to entry. For all the talk about the need for these massive datacenters, a lot of, maybe most of, the use cases for the frontier models that are actually worth $$, like code assistants, are rapidly falling into the range where what OpenAI is selling just isn't needed. Qwen is not as good as GPT, but it is close; a Mac Studio maybe can't pump out tokens quite as fast as an API hosted on OpenAI's infrastructure, but it is knocking on the door (for single-user applications).

Is there going to be a market for hosted models? Of course: not many are going to want to run the LLMs behind their websites' chat bots on-prem. But a lot of companies will want to on-prem their RAG tools and anything handling data they care about protecting.

At one point Microsoft people were saying workstations were over, that developers, engineers (not in the software sense), and architects (not in the software sense) were going to use Azure-hosted VDIs. Yeah, we have not seen that. Yes, I know it's possible, and someone here will tell us how wonderful their thin-client virtual desktop experience is, but the lion's share of these professionals that I encounter, anyway, are still buying workstations (or near-workstation pro-line Macs). The point is, people are going to want to run their GenAI workloads locally, and they very nearly can. The free and "open" models, combined with affordable, performant hardware, are going to eat OpenAI's lunch in a huge slice of the market.

Unless they could somehow make it impossible to distribute and bundle these things for compliance reasons... then they'd have a nice little moat that would be difficult to cross.

Re:Liability (Score:4, Interesting)

by alexgieg ( 948359 )

> even if it is ineffective and easily worked around by minors

Australia is at the forefront of not allowing that to work for long. Their age-verification enforcement agency is actively monitoring every single trick kids use to bypass verification and updating their compliance rules to force companies to block those loopholes one by one.

For example, they've recently started threatening fines against websites that allow users to raise their stated age above the threshold after previously declaring they were younger, that allow a user to keep submitting photos over and over until one is accepted as above the threshold, or that accept photos of known videogame characters as photos of real people.

The game of cat and mouse will continue, and there are always going to be techniques that work, but they will become harder and harder, as well as more and more hidden, since revealing them in public, where the authorities can also learn of them, will trigger their banning. At some point it will become so hard to bypass for anyone but the most dedicated teens that the enforcers expect most will simply give up and accept living under the imposed restrictions. Some will bypass them regardless, but as long as the percentage is tiny, the law will be considered a success from the enforcers' perspective.

Like Meta (Score:2)

by bradley13 ( 1118935 )

Trying to hide their involvement, while pumping $billions into lobbying. No surprise that OpenAI is doing much the same. Bet: So are Google, Microsoft, Apple and other tech giants - they just haven't been caught yet.

The question is: why? Why do the tech giants want to force ID checks in order to use basic services, or even to log into your own computer?

Re: (Score:2)

by leonbev ( 111395 )

The "why" is pretty easy to understand:

1) It makes them look like responsible citizens to government officials, who will now be more willing to turn a blind eye to their privacy raping default "privacy" settings. Who knows, it might even help with the permitting process to plop a new data center somewhere.

2) It adds a barrier to entry for startups and open source projects that can't afford an army of lawyers to ensure they're meeting the specific age regulations of every US state and country.

3) It allo

Re: (Score:2)

by DarkOx ( 621550 )

Speaking as someone who does think we need stronger age and locality verification on the internet, I too find the whole thing unseemly.

There are plenty of good reasons to want to know whether someone is over the age of majority, whatever that is defined to be where they are, and which laws the other party to your interaction may or may not be subject to, in terms of jurisdiction.

I also believe this is achievable while preserving some degree of privacy/anonymity. States could as part of issuing IDs for example prov
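The parent's idea -- a state-issued credential that proves "over 18" without identifying the holder -- is roughly what anonymous-credential schemes do. Here is a minimal sketch using textbook RSA blind signatures; every parameter and name below is an illustrative toy of my own choosing, not production crypto (real deployments would use something like the standardized blind RSA of RFC 9474 with full-size keys):

```python
# Hypothetical sketch: a state issuer blind-signs an "over-18" token, so a
# website can verify age using only the issuer's public key, while the
# issuer never learns which signature the website later sees.
import hashlib
import secrets
from math import gcd

# Toy issuer keypair (two small primes; illustrative only).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def h(msg: bytes) -> int:
    """Hash the token message into the RSA group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

token = b"over-18"
m = h(token)

# 1. Holder picks a random blinding factor r (coprime to n) and blinds m.
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2. Issuer signs the blinded value; it never sees `token` itself.
blind_sig = pow(blinded, d, n)

# 3. Holder unblinds, yielding an ordinary signature on the token.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any verifier checks it with the public key (e, n) alone.
assert pow(sig, e, n) == h(token)
print("age token verified")
```

Because the issuer only ever sees the blinded value, it cannot later link the signature a website presents back to the ID-check session, which is the privacy property the parent is gesturing at.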

Age Verification is a Win Win for AI Services (Score:2)

by TheWho79 ( 10289219 )

This is about forcing their competitors (Google, TikTok, Meta, Twitter, etc.) to invest in validating users' identities before allowing public posting on those services.

Meanwhile, OpenAI has no public posting capability and is not required to authenticate users' ages. It is also mostly pay-for-play, and one of the top user groups of AI is kids under 18.

So either Google/Meta socials validate users -- which won't happen -- or users gravitate to AI for 'time spent'. It's a win-win for AI services.

Also Facebook and Palantir (Score:4, Insightful)

by rsilvergun ( 571051 )

So this was always about AI slop. The problem these companies are facing is that AI slop is infesting the internet. It's starting to infect their data sets. It's becoming difficult to tell programmatically who's a real person and who is a slop bot.

This is an existential threat for both the AI companies who need real humans to train from and the social media companies who need clean data sets to sell to law enforcement and advertisers and corporations and governments.

If that data isn't clean, none of these people have a product, because you're the product, and if you're mixed in 80/20 with highly sophisticated bots, that data is going to become real worthless real fast.

So this not only improves their ability to track you, but it lets them know you're a real person whose data can go into the set.

Re: (Score:2)

by sound+vision ( 884283 )

I heard, on a regular-people radio show, some commentary on the Sora thing. (They first had to explain what LLMs were, to give you an idea of the audience.) Even a lot of regular people now are getting tired of the slop. It's just not interesting to look at day-in and day-out.

On the flipside, I do know 1 person who seems to gravitate heavily toward the slop and repost it on a daily basis. He seems genuinely drawn to it. He has a learning disability.

In many ways, these times are really separating the wheat f

Typical pattern (Score:2)

by Chris Mattern ( 191822 )

Big players *love* regulation, as long as it's red tape that doesn't actually interfere with their business. It's a fixed cost, which they can spread out over their large operations while it strangles the smaller competition that might be a problem.
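The fixed-cost point is easy to see with made-up numbers (both figures below are assumptions for illustration, not reported costs):

```python
# A flat compliance cost (hypothetical $50M/year) is negligible per user
# for an incumbent but ruinous for a small entrant.
compliance_cost = 50_000_000  # $/year, assumed identical for everyone

for name, users in [("Incumbent", 500_000_000), ("Startup", 1_000_000)]:
    per_user = compliance_cost / users
    print(f"{name}: ${per_user:.2f} per user per year")
# The incumbent pays $0.10 per user; the startup pays $50.00.
```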

Clamping Down on Local Models (Score:1)

by Sauce Tin ( 1884020 )

Maybe they're using this to go after local models?

Age verification services (Score:2)

by PPH ( 736903 )

For a fee, of course. The whole AI thing is heading the way of tulips. And they need a revenue source* to backfill that hole before anyone notices.

*AI was best positioned as a tool for developers and a back end for smarter search. A small market in the final analysis. But if we can charge every user a little bit each month to get onto the MSN network (nee Internet), we can still do OK.
