
YouTube Expands AI Deepfake Detection To Politicians, Government Officials, and Journalists

(Wednesday March 11, 2026 @06:00PM (BeauHD) from the identity-check dept.)


YouTube is [1]expanding its AI deepfake detection tools to a pilot group of politicians, government officials, and journalists, [2]allowing them to identify and request removal of unauthorized AI-generated videos impersonating them. TechCrunch reports:

> The technology itself [3]launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests. Similar to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to try to spread misinformation and manipulate people's perception of reality, as they leverage the deepfaked personas of notable figures -- like politicians or other government officials -- to say and do things in these AI videos that they didn't in real life.

>

> With the new pilot program, YouTube aims to balance users' free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure. [...] [Leslie Miller, YouTube's vice president of Government Affairs and Public Policy] explained that not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression. The company noted it's advocating for these protections at a federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness.

>

> To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal. YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, allow them to monetize those videos, similar to how its Content ID system works. The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.



[1] https://blog.youtube/news-and-events/expanding-likeness-detection-civic-leaders-journalists/

[2] https://techcrunch.com/2026/03/10/youtube-expands-ai-deepfake-detection-to-politicians-government-officials-and-journalists/

[3] https://news.slashdot.org/story/25/10/21/2250229/youtubes-likeness-detection-has-arrived-to-help-stop-ai-doppelgangers
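The article describes likeness detection working like Content ID: matching faces in uploaded videos against the enrolled profile of a verified person. A minimal sketch of that idea, assuming a face-embedding comparison with cosine similarity; the embedding model, threshold, and profile store are all hypothetical, since YouTube has not published implementation details:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_likeness_match(frame_embedding, enrolled_embeddings, threshold=0.85):
    """Flag a frame if its face embedding is close to any enrolled profile.

    threshold is an illustrative value, not a published one.
    """
    return any(cosine_similarity(frame_embedding, e) >= threshold
               for e in enrolled_embeddings)
```

In practice a system like this would run over sampled frames and aggregate matches before surfacing anything to the enrolled person for review.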



Good small step, but we need more (Score:3)

by MpVpRb ( 1423381 )

YouTube should clearly label ALL videos where AI was used

The label should describe which model was used and how much of the content was AI generated

A switch should be provided to allow or hide AI generated videos

Re: (Score:2)

by MBGMorden ( 803437 )

I think we really need some type of granular filter.

An AI thumbnail or a few seconds of AI generated content I don't care about in a video, as long as the video is MOSTLY a real manual production.

The videos where the animations, voiceover, and even the script is all clearly AI though, those are the ones where I want to skip it entirely.

Like if this video is more than 30% AI, then I would prefer it be culled from my feed.
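The granular filter suggested above is easy to sketch: drop feed items whose AI-generated fraction exceeds a user-set threshold. The per-video "ai_fraction" field is hypothetical; no platform exposes such a number today:

```python
def filter_feed(videos, max_ai_fraction=0.3):
    """Keep only videos at or below the user's tolerated AI fraction.

    Videos without an "ai_fraction" field are treated as fully manual.
    """
    return [v for v in videos if v.get("ai_fraction", 0.0) <= max_ai_fraction]
```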

Re: (Score:2)

by Baron_Yam ( 643147 )

What you will get is whatever makes YouTube the most money, which means whatever gets the most ads in front of your eyeballs without getting taken to court and losing the profits to judgements.

AI content is hilarious to me, because all these people throwing it at YouTube trying to make a buck are just teaching the YouTube team how to replace them with YouTube-AI that won't require paying out for views.

AI training engagement. (Score:2)

by Archangel Michael ( 180766 )

Using AI is training AI to replace you. If you can be replaced with AI, you will be, and should be.

If you don't use AI, your peers will, and you will be replaced by AI anyway.

Good Luck

Re: (Score:2)

by Baron_Yam ( 643147 )

I think the difference between augmenting AI and replacing AI is a big one.

Augmenting AI is just doing tedious stuff that we will reclassify as 'make work' when humans continue to do it. Our economy will adjust eventually; the issue is that it's happening so quickly to so many people that the short to mid term is extremely painful.

Replacement level AI, with real human-level capability, renders us all irrelevant. Whoever controls it will have no need of other humans, we'll just be competitors for resources.

Re: (Score:2)

by Kisai ( 213879 )

Yes they should, however I feel that onus is already on the uploader.

One way to solve this: AI-generated videos should carry both visible and invisible metadata that clearly identifies the model and prompt. If a video is "prompted" and then uploaded straight to YouTube, YouTube can automatically flag it as "AI" internally and then look for keywords naming politicians/journalists. If there is a match, it goes on the list.

If the prompt is missing, e.g. post-processing is done to the
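The metadata-based flagging the parent describes could look roughly like the sketch below. The metadata field names, the return labels, and the keyword list are all hypothetical; no actual provenance schema or YouTube-side pipeline is implied:

```python
# Illustrative keyword list for routing prompts that mention public figures.
WATCHED_KEYWORDS = {"senator", "president", "minister", "journalist"}

def flag_upload(metadata: dict) -> str:
    """Classify an upload from its (hypothetical) provenance metadata."""
    model = metadata.get("generator_model")
    prompt = metadata.get("prompt", "")
    if model is None:
        return "unknown"           # no provenance data: needs other detection
    words = set(prompt.lower().split())
    if words & WATCHED_KEYWORDS:
        return "ai-public-figure"  # route to the likeness-review list
    return "ai"
```

As the parent notes, this only works when the metadata survives the upload; stripped or post-processed videos fall into the "unknown" bucket.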

Re: (Score:2)

by Sloppy ( 14984 )

How would they do that?

And then if someone did invent a magic "I can figure out how any video was made" algorithm, couldn't we just run that in our own clients?

And who monitors this for abuse? (Score:5, Insightful)

by Sethra ( 55187 )

Just being AI-based doesn't mean its intention is to deceive. In most cases it's parody or protected free speech.

I have no issue with slapping a big "AI Deepfake" label on identified deepfake content, but when you start talking about giving politicians and government officials the ability to prevent you from even being heard, that's not ok.

Re: (Score:3)

by alvinrod ( 889928 )

This is my concern as well. There's a lot of great content out there that uses AI for parody or satire that could get swept up by this. The videos from a few years ago using AI voices of Trump, Biden, and Obama acting like college bros playing video games and shit-talking each other were incredibly funny to me, but could obviously be flagged and removed, even though I don't think anyone would believe they were real.

People need more exposure to these because the technology is only going to get better and if people

Re: (Score:2)

by Art Challenor ( 2621733 )

And the opposite problem. Politicians request the take-down of any embarrassing video as a deep-fake.

"Yeah, that guy handing me a large bag of used bills? - deep-fake., take it down."

Re: (Score:2)

by OngelooflijkHaribo ( 7706194 )

I just think the word “a.i.” is such a meme. The term is “forgery” or “fake”, which existed long before artificial neural networks were used to generate them. Is this meant to imply forgeries and fakes are fine so long as they are produced with other means? It's such a meme term.

One good thing that came out of it is that I notice that on many boards no one trusts anything any more, and many people now think anything that looks slightly implausible is potentially fake, or as t

Re: And who monitors this for abuse? (Score:2)

by devslash0 ( 4203435 )

What you call parody is often a cheap excuse for distributing otherwise plagiarised or offensive content.

Re: (Score:3)

by Sethra ( 55187 )

The solution isn't to censor, it's to clearly label it.

Community notes on X are a good example of how to expose deceitful content without the need for outright censorship; a similar solution would be appropriate here.

If you allow politicians and government officials to decide what the public can or cannot see you can 100% guarantee that system will be abused.

And I would wager these same officials would hire, at taxpayer expense, teams to make sure any negative content would be removed. The public would be

Re: (Score:2)

by kwelch007 ( 197081 )

Well said.

Re: (Score:2)

by martin-boundary ( 547041 )

That's the wrong incentive. Labelling can be easily removed, thus causing the problem of unlabelled fakery to remain in the system forever as long as one crazy troll in the population decides to act.

Censorship identifies and removes the fakery before enough eyeballs have seen it, which reduces the probability that a crazy troll will see it in the first place.

Re: (Score:2)

by spitzak ( 4019 )

The video should be modified to include the label. If somebody crops or removes the label and tries to post the result, this editing will be detectable and will cause the label to reappear, or whatever triggered the label will still be there and cause it to reappear.

"Community notes" are just another way to add a comment to a video and are useless. The company's rights to free speech allow it to add any logo to the video it wants, including "this is fake/misleading/a lie". That is NOT censorship no matter h

Re: (Score:2)

by unixisc ( 2429386 )

> Just being AI based doesn't mean it's intention is to deceive. In most cases it's parody or protected free speech.

> I have no issue with slapping a big "AI Deepfake" label on identified deepfake content, but when you start talking about giving politicians and government officials the ability to prevent you from even being heard, that's not ok.

I quite agree w/ this. We have seen YouTube abuse its power before. Whenever there is a post about something that YouTube disagrees w/, such as Climate Change, they attach an advisory Wiki link to that to indicate that it's misleading. And I won't even get into the Covid time when they were censoring doctors left and right

Also, YouTube has been increasingly enshittified of late, in several aspects:

- In filtering capabilities, one no longer gets the ability to sort things by date. A pretty major sort

Re: (Score:2)

by spitzak ( 4019 )

Adding an "advisory link" is NOT CENSORSHIP. No matter how much you cry about it.

You are right about their algorithms making it shitty. Everything should be required to show most-recent first with NO items that you have not somehow "subscribed" to, and this should be the default behavior. They can always put a button on there saying "things you may like" that will show their algorithm, but you can't read/view the contents unless you click on them, and the result is that you are temporarily subscribed to thi

Re: (Score:2)

by spitzak ( 4019 )

Yes I don't get this, it sounds very dangerous.

Absolutely they should be slapping a big "this is fake" notice on anything their detector flags as AI. If they make a mistake it is up to the person posting the video to complain and prove their stuff is real and get the notice removed. But the notice will not stop anyone from viewing the video.

It sounds like they are allowing politicians to censor parodies they don't like, and possibly not showing the fake notice on ones they do like.

Re: Deepfake (Score:2)

by newcastlejon ( 1483695 )

Appropriately enough, I sometimes see deepfake Natalie Portman plugging “IQ tests” on slashdot.

Yes, I see ads on slashdot, sometimes they cover 2/3 of the page.

Yes, I have an ad blocker, but for some reason it doesn't work on slashdot.

existing privacy policy guidelines t (Score:1)

by WorBlux ( 1751716 )

> YouTube would evaluate each request under its existing privacy policy guidelines to determine

Are you a corporation? If yes, do whatever you want. If no we shake our magic 8-ball to decide and leave you without effective appeal.

Re: (Score:2)

by irving47 ( 73147 )

> YouTube would evaluate each request under its existing privacy policy guidelines to determine

for which it uses..... AI agents.

And the rest of us can go f&&k ourselves? (Score:2)

by BrendaEM ( 871664 )

YouTube is a major source of AI deepfakes and of AI-produced and AI-enhanced videos. They cannot admit to the damage that they are causing.

The only right way (Score:2)

by devslash0 ( 4203435 )

Deep fakes should not be detected in isolation. Instead, a fact-checking AI should extract information from videos and cross-check it against the entire population of videos on the same subject to see if the information aligns with other sources.

If only they could do it in real life... (Score:2)

by Archfeld ( 6757 )

Now if only they could extend that technology into the "real" world. We would all be a great deal better off....

Dr. Richard Feynman Fakes (Score:2)

by crunchy_one ( 1047426 )

I wish they'd do something about the Dr. Richard Feynman AI generated fakes that are almost continuously recommended to me in my YouTube feed. It seems like as soon as I block one, another one appears. What gives? Anyone have an idea of why he's such a popular target for AI generated fakes?

And of course there's the Trump problem (Score:3)

by 93 Escort Wagon ( 326346 )

I mean that seriously. He will say pretty much *anything* when he's talking off the cuff. So it seems like one obvious way to detect deep fakes of other people won't work in his case.

Sorry guys (Score:2)

by PPH ( 736903 )

There go all the AOC fakes.

"The lymbic system in my brain is so electrically active, it qualifies
as a third brain. Normal humans have two brains, left and right.

- Jeff Merkey