YouTube Unveils “Likeness Detection” to Help Creators Fight—or Profit From—AI Clones
YouTube CEO Neal Mohan outlines upcoming tools to identify synthetic faces and voices, giving creators the power to monetize or remove deepfaked versions of themselves.
YouTube CEO Neal Mohan is taking aim at one of the platform’s fastest-growing problems: AI impersonation. In an interview this week, Mohan confirmed that YouTube is developing “likeness detection” tools that can automatically flag videos using a creator’s face or voice generated by artificial intelligence. Once flagged, creators would be able to request removal—or opt to monetize the content if they choose.
“This is the number-one concern we hear from creators,” Mohan said, describing the system as a mix of proactive detection and transparent labeling. “We want them to have control over how their identity is used.” The feature, expected to roll out in 2026, would scan uploads for facial and vocal similarities to verified accounts, alerting creators when an AI version of them appears in another user’s video.
The move comes amid an explosion of synthetic content across platforms. Voice cloning tools can now replicate speech from a few minutes of audio, and visual deepfakes—often used for satire or misinformation—are becoming indistinguishable from real footage. For YouTube, which pays billions annually through its Partner Program, the challenge isn’t just authenticity but economics: if cloned creators drive views, who gets the money?
Mohan’s proposal tackles both problems at once. Creators who approve an AI version of themselves could receive a revenue share, similar to how music rights holders are paid when their songs appear in videos. Those who don’t consent could demand takedowns, supported by clearer disclosure tags visible to viewers. The system would also apply to brand impersonations and unauthorized use of celebrity likenesses, an issue that has grown since AI video tools like OpenAI’s Sora and Runway’s Gen-3 became publicly available.
Analysts see the initiative as a strategic hedge. By giving creators a legitimate path to profit from AI-driven derivative content, YouTube could reduce friction with its top talent while keeping user-generated innovation inside its ecosystem. The approach mirrors trends in the music industry, where major labels are exploring licensed “AI voice” libraries instead of fighting every clone in court.
Still, challenges remain. Detection accuracy must be high enough to avoid false positives that could remove parody or commentary videos, both protected under fair-use principles. Appeals systems will be critical, as will transparent disclosure for audiences. Mohan said AI-generated material will carry visible labels—akin to YouTube’s “Paid Promotion” notices—to make synthetic content obvious without penalizing creators who use it responsibly.
The stakes are significant. YouTube's TV watch time and Shorts engagement have both surged, giving it enormous influence over how billions of people experience video online. If successful, "likeness detection" could set a global precedent for balancing free expression, creator rights, and AI creativity, a balance few other platforms have managed so far.