Deepfake Scams Exploit Celebrity Likenesses to Defraud Fans
Advances in AI have fueled a surge in deepfake scams, with fraudsters impersonating celebrities to deceive and exploit fans, raising urgent concerns around trust, regulation, and digital identity.
Image: Harald Krichel, CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0), via Wikimedia Commons
The explosion of generative AI tools is giving scammers a new kind of star power. From fake video calls to AI-cloned voices, fraudsters are impersonating celebrities with startling accuracy—and using these deepfakes to trick fans into handing over money, attention, and personal information. In an era where authenticity is currency, AI is becoming the ultimate counterfeiter.
One case that recently gained international attention involved a French woman who was led to believe she was in an online relationship with actor Brad Pitt. The scammer used AI-generated voice messages, digitally altered photos, and even deepfaked video snippets to keep up the ruse. Over time, the woman sent the fake Pitt nearly €1 million before realizing the entire relationship was fabricated.
These scams aren’t isolated. According to the UK’s National Crime Agency, incidents involving deepfaked celebrity impersonations have risen more than 300% in the past year alone. The targets are usually vulnerable fans—people who’ve expressed admiration publicly, or those who are part of fan clubs, message boards, or social media groups. Once targeted, victims are often subjected to long-term psychological manipulation, with fraudsters spinning narratives around charity appeals, exclusive meet-and-greets, or even romantic connections.
The Guardian’s report this week, which surveyed cybercrime watchdogs and AI ethicists, paints a worrying picture of where this trend is headed. As deepfake technology becomes cheaper and easier to use, scammers are no longer impersonating only high-profile celebrities. Social media influencers, YouTubers, and even local news personalities are increasingly being cloned for use in these grifts.
What makes these scams so effective is the level of personal intimacy they can simulate. Through voice cloning and facial mapping, AI can now create media that feels convincingly interactive. A fan might receive a personalized birthday message or hear their name spoken in a celebrity’s voice. The illusion of direct connection makes the deception more potent—and the emotional fallout even greater.
Industry experts are calling for tighter safeguards around digital identity. Some are advocating for watermarking standards for AI-generated media, while others suggest implementing authentication tools like digital voice “passphrases” to verify real communications. But so far, legislation has struggled to keep pace with the speed of the technology. In most countries, laws around impersonation don’t specifically cover synthetic media, making prosecution difficult even when victims come forward.
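One way a “passphrase”-style check could work in practice is a simple shared-secret scheme, in which an official account signs each outgoing message and fans verify the signature before trusting it. The sketch below is purely illustrative and is not drawn from any specific proposal cited in the report; the function names, the one-time key exchange, and the use of Python’s standard hmac module are assumptions made for the example.

```python
import hmac
import hashlib
import secrets

# Illustrative sketch only: a shared-secret scheme a verified fan club or
# official channel *could* use to sign outgoing messages, so recipients can
# check that a clip or note really came from the genuine source. The scheme
# and names here are assumptions, not a documented industry standard.

def issue_shared_key() -> bytes:
    """Generate a per-subscriber secret, distributed once over a trusted channel."""
    return secrets.token_bytes(32)

def sign_message(key: bytes, message: bytes) -> str:
    """Attach an HMAC-SHA256 tag to an outgoing message or media file."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_message(key: bytes, message: bytes, tag: str) -> bool:
    """Recipient-side check; compare_digest avoids timing side channels."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    key = issue_shared_key()
    clip = b"birthday-message.mp4 contents"
    tag = sign_message(key, clip)

    assert verify_message(key, clip, tag)             # genuine message passes
    assert not verify_message(key, b"deepfake", tag)  # altered content fails
    print("signature checks behave as expected")
```

The obvious limitation, and part of why experts favor platform-level watermarking instead, is that any shared-secret approach depends on an initial trusted exchange, which is exactly the point where scammers insert themselves.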
Actors and public figures are also beginning to push for more control over their digital likenesses. SAG-AFTRA’s most recent agreement includes clauses aimed at regulating the use of AI-generated performances, but enforcement outside union productions remains murky. Meanwhile, tech platforms continue to host or distribute the tools used in these scams, raising questions about corporate accountability.
For now, the best defense for fans may simply be awareness. If a celebrity reaches out asking for money, promises secret access, or makes requests that seem out of character, skepticism is warranted. As AI blurs the line between real and fake, the entertainment industry is learning that even fame isn’t immune to exploitation—it may in fact be the bait.