After content creators, politicians and journalists, YouTube is extending its likeness detection tool to celebrities, allowing them to request the removal of deepfakes and curb unauthorized impersonation on the platform.

YouTube’s biometric likeness technology scans for AI-generated videos that match a verified user’s appearance. According to a company blog post, the feature functions similarly to Content ID, the tool that detects and removes copyrighted material on the platform.

Users can verify their identity by submitting a government ID and a self-recorded video for face biometrics matching. Those enrolled in the program receive alerts when potential matches surface and can request removal if the content violates YouTube’s privacy policy.
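YouTube has not published the internals of its matching system, but face-matching pipelines of this kind typically compare embedding vectors extracted from faces against an enrolled reference. A minimal illustrative sketch of that idea follows; the function names and the similarity threshold are hypothetical, not YouTube's actual implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical tuning parameter: scores at or above this surface an alert.
MATCH_THRESHOLD = 0.85

def flag_potential_match(enrolled_embedding, video_embedding):
    """Return True if a face embedding from an uploaded video is close
    enough to an enrolled user's reference embedding to warrant review."""
    return cosine_similarity(enrolled_embedding, video_embedding) >= MATCH_THRESHOLD
```

In a real system the embeddings would come from a trained face-recognition model, and a flagged match would feed a human or policy review step rather than trigger automatic removal.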

The feature was first offered to creators in the YouTube Partner Program last year, and in March it was expanded to government officials, journalists and political candidates. Extending it to other famous people is the next logical step.

YouTube says it has collaborated with talent agencies and management companies, such as CAA, UTA, WME, and Untitled Management, to provide the tech to entertainers.

While some celebrities have recoiled at the thought of seeing deepfakes of themselves online, others see the technology as a money-making tool. Talent agency CAA, for instance, has built a database with AI developer Veritone that stores its clients’ digital likenesses and voices to give them control and compensation in cases of AI use.

YouTube is not the only company that is introducing measures to protect famous people from unauthorized deepfakes, as AI-generated videos fuel scams, political misinformation and reputational manipulation.

Last year, OpenAI committed to “strengthen guardrails around replication of voice and likeness when individuals do not opt-in,” following Breaking Bad star Bryan Cranston’s decision to reach out to the actors’ union SAG-AFTRA over unauthorized AI-generated versions of his likeness.

Meanwhile, both OpenAI and YouTube have voiced support for the proposed NO FAKES Act, U.S. federal legislation designed to protect individuals’ voices and visual likenesses from unauthorized, AI-generated digital replicas.

Article Topics

biometrics  |  deepfake detection  |  deepfakes  |  face biometrics  |  generative AI  |  likeness detection  |  YouTube
