Security firm isFake.ai has warned that a surge in deepfake production is feeding a rise in celebrity impersonation scams, as fraud groups use synthetic video and audio across social media and private messaging apps.
The company pointed to recent data from DeepStrike, which projected deepfake production would exceed 8 million files in 2025. DeepStrike said that figure would represent a sixteenfold increase since 2023.
Europol has also issued a separate warning about the broader information environment. It said up to 90 percent of online content may be synthetically generated by 2026.
The team at isFake.ai said the shift in volume and distribution channels has changed the practical challenge of verification. The company described celebrity content as a particular risk area because images and videos of public figures already circulate widely online.
Scam operations
isFake.ai also described what it called a structural change in how celebrity fraud works. It said scams have shifted from small-scale impersonations to coordinated operations that use several AI systems at once.
The company said one system can gather background information on targets, while another creates synthetic video or voice. It also said a third system can adjust messages based on responses. It described the result as campaigns that run continuously and change over time.
“We’re seeing scams shift from isolated impersonations to coordinated AI systems that learn and adapt,” said Olga Scryaba, AI Detection Specialist and Head of Product at isFake.ai. “That makes celebrity scams more persistent and harder to disrupt.”
The team at isFake.ai also highlighted the use of so-called “persona kits”. It described these as bundles that can include synthetic faces, cloned voices and background stories. It said such kits reduce the skill required to run impersonation scams, and make repeated fraud more straightforward.
The company said public figures face specific exposure because their footage is readily available. It said scammers can draw on legitimate interviews, clips and social posts when assembling impersonations.
Human judgement
The firm said improvements in voice cloning and video synthesis have reduced the reliability of human judgement. It said even trained professionals can struggle to identify manipulation without dedicated tools.
Scryaba linked that challenge to the way people consume content online. “The problem is not just better fakes,” she said. “AI content is published and consumed in spaces designed for speed and emotional engagement, such as social media feeds, shorts and reels.”
She added that people online tend to scroll without stopping to fact-check or critically evaluate what they see, rarely pausing to question whether it is authentic.
“In that context, the line between real and AI content blurs,” said Scryaba. “Synthetic content shows up so often that people have stopped noticing or questioning it altogether.”
Documented case
The firm pointed to a celebrity impersonation case involving the actor Steve Burton, known for his role on General Hospital. It said scammers used AI-generated video and voice messages as part of a romance scam.
In the case described by isFake.ai, scammers persuaded a fan that she was in a private relationship with the actor. The victim then transferred more than $80,000 through gift cards, cryptocurrency and bank-linked services. isFake.ai said the fraud came to light after the victim’s daughter discovered it.
The team at isFake.ai said analysis of the media used in the scam showed characteristics consistent with synthetic content. It cited cloned voice patterns and visual inconsistencies, which it said can be hard to identify without technical tools.
“The risk is no longer limited to obviously fake videos,” said Scryaba. “Modern deepfake scams rely on realism, repetition, and personalization. Victims are often targeted over weeks or months, which reduces skepticism and increases financial harm.”
Warning signs
isFake.ai said deepfake scams often combine convincing media with familiar social engineering tactics. It said requests involving money, urgency or secrecy remain common signals of fraud, even when video or audio appears realistic.
The company said private outreach from a celebrity account should raise suspicion. It said public figures do not solicit money, investments or relationships through unsolicited direct messages.
It also highlighted payment methods as another indicator. It said gift cards and cryptocurrency often appear in scam demands. It added that high-pressure requests for rapid action and secrecy also remain consistent patterns.
The company also said scams can take the form of public promotions as well as private conversations. It said adverts that use a celebrity’s face or voice to promote “investment opportunities” or medical products warrant scepticism, particularly on platforms such as Facebook and Instagram.
For high-stakes situations, the company said people should verify claims through independent channels. It also recommended the use of detection and verification tools when available.
Lastly, isFake.ai added that public figures can reduce exposure by limiting the amount of personal audio and video available online. It said less publicly available material can make cloning more difficult.
“As synthetic content becomes more common, verification has to become a habit,” said Scryaba. “The cost of assuming something is real is simply too high.”
