YouTube is stepping up its fight against one of the most worrying applications of AI: deepfake videos that impersonate real people. The company announced that it is expanding its likeness detection technology to a pilot group of journalists, government officials and political candidates. The move is aimed at protecting public figures from AI-generated impersonation.
The feature works similarly to Content ID, but for faces. Participants submit a short video and a government-issued ID so the system can learn their likeness. Once registered, YouTube scans uploads for AI-generated videos that mimic their appearance. If such content is detected, the person can review it and request its removal.
A new protection against AI imitations
YouTube first introduced similarity detection for creators in the YouTube Partner Program last year. The company now believes that the next priority is to protect public figures whose identities are often used in misinformation campaigns, particularly related to elections and political discourse.
Deepfakes have become increasingly realistic thanks to generative AI tools, making it easier to create convincing videos of people saying or doing things they never actually did. In politics and journalism, these risks carry serious consequences, from misinformation to reputational damage. However, the system is not a simple "delete button". YouTube says removal requests remain subject to its existing privacy and moderation policies, meaning some videos may stay online if they are considered parody, satire or legitimate commentary.
Interestingly, YouTube stated that the initial rollout for creators did not result in many takedowns. Most of the content discovered turned out to be relatively harmless, although the company expects the picture to be different for public figures and political leaders, who face a higher risk of targeted deepfake attacks.
For now, the program remains limited to influential individuals rather than the general public. But the expansion signals a broader shift across the tech industry: a push to put guardrails in place before AI-generated media becomes indistinguishable from reality.