YouTube CEO Neal Mohan Plans “Likeness Detection” to Spot AI Clones — and Maybe Pay the Originals

Mohan outlines upcoming tools to identify synthetic faces and voices, giving creators the power to monetize or remove deepfaked versions of themselves.

Image credit: TechCrunch, CC BY 2.0 <https://creativecommons.org/licenses/by/2.0>, via Wikimedia Commons

With YouTube Shorts now reaching more than two billion logged-in users a month and TV watch time climbing, YouTube's plan to automatically spot synthetic faces and voices and route revenue back to the people they imitate could formalize a new market in personality rights, turning control into compensation instead of endless takedown requests.

In an interview this week, YouTube CEO Neal Mohan addressed what he called creators’ number one AI fear: losing control of their face and voice. Speaking on the All-In Podcast, Mohan said YouTube is building “likeness detection” features designed to identify videos that use a creator’s face or voice via generative tools. The goal is to let creators take one of two paths—removal or monetization—backed by platform-level disclosure so viewers know when content is AI-generated. For an ecosystem that pays out billions annually through the Partner Program, formalizing the economics of AI clones could be a turning point.

The proposal acknowledges the obvious: policing AI impersonations at YouTube's scale is impossible with manual flags alone. Likeness detection would automate the first mile by scanning for synthetic voiceprints and facial patterns, then routing matches to rights holders. Crucially, Mohan framed monetization as a viable option, not just a threat to be policed. If a creator opts in, cloned voiceovers and face-swaps could funnel ad revenue or licensing fees back to the original artist. That mirrors moves in the music industry, where platforms are testing artist-centric payouts and watermarking to manage AI covers. For YouTube, which reported TV-screen watch time surging and Shorts exceeding two billion logged-in users monthly, the incentive is to keep creators on platform, not in court.
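Mohan gave no implementation details, but the flow he described (scan, match, then apply the creator's chosen policy) lends itself to a simple decision structure. The sketch below is purely illustrative: the Match fields, Policy options, and 0.90 threshold are assumptions for the sake of the example, not anything YouTube has disclosed.

```python
# Hypothetical sketch of a likeness-detection routing step, assuming a
# registry of creator policies and a similarity score from an upstream
# voice/face matcher. Names and thresholds are invented for illustration.
from dataclasses import dataclass
from enum import Enum


class Policy(Enum):
    REMOVE = "remove"        # creator wants impersonations taken down
    MONETIZE = "monetize"    # creator opts in to revenue sharing
    IGNORE = "ignore"        # creator takes no enforcement action


@dataclass
class Match:
    video_id: str
    creator_id: str
    score: float             # similarity between upload and reference likeness


def route_match(match: Match, policies: dict[str, Policy],
                threshold: float = 0.90) -> str:
    """Decide what happens to an upload that matches a registered likeness."""
    if match.score < threshold:
        return "no_action"                      # below the confidence bar
    policy = policies.get(match.creator_id, Policy.IGNORE)
    if policy is Policy.REMOVE:
        return f"takedown:{match.video_id}"
    if policy is Policy.MONETIZE:
        # ad revenue or licensing fees redirected to the original creator
        return f"redirect_revenue:{match.video_id}->{match.creator_id}"
    return "label_only"                         # disclosure label, no removal
```

The routing is the easy part; the hard engineering lives upstream, in producing a likeness match reliable enough to hang takedowns and payments on.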

There are caveats. Detection models will need to avoid false positives that nuke satire, commentary, or transformative works; they’ll also need robust appeals so fair-use content isn’t collateral damage. Mohan’s remarks suggested YouTube would notify viewers when a video contains AI elements, adding a layer of transparency that could calm advertisers. If adopted widely, disclosure could become a default signal akin to the paid promotion tag or music credits, giving audiences context without killing reach.
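One plausible way to square automation with those fair-use protections is confidence banding: act automatically only on near-certain matches and push the gray zone to human reviewers, with appeals available either way. Again, this is a hypothetical sketch; the thresholds are invented for illustration, not drawn from anything Mohan said.

```python
# Hypothetical confidence banding to limit false positives on satire,
# commentary, and transformative works. Threshold values are assumptions.
def triage(score: float, auto_threshold: float = 0.97,
           review_threshold: float = 0.85) -> str:
    if score >= auto_threshold:
        return "enforce_policy"   # act automatically, appeal path stays open
    if score >= review_threshold:
        return "human_review"     # borderline cases judged by a person
    return "no_action"            # likely no real match, or clear fair use
```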

For celebrities and influencers, the offer is pragmatic. AI clones are already out there; a platform-native path to control or cash in is better than whack-a-mole. It’s also a hedge against the “AI slop” problem that critics say is flooding feeds with low-effort, engagement-hacked content. If creators can reclaim revenue from impersonations, the slop becomes less free and less attractive to make. YouTube’s approach won’t resolve the underlying legal debates over publicity rights and training data, but it signals where the business is heading: toward a world where personality is licensable at scale and the platform is the broker.
