April 21, 2026

The New Frontier of Fraud in P&C Insurance: Synthetic Identities and Deepfakes

Insurance fraud in property and casualty lines isn’t new. Staged accidents, exaggerated damages, and fraudulent claims have been part of the industry for decades. But what is new is the technology driving both fraud and defenses. Artificial intelligence (AI), generative tools, and synthetic media have become the latest tools in the fraudster’s toolkit, raising the stakes for insurers and policyholders alike.

Why This Matters: A Surge in Sophistication and Scale

Recent industry analysis shows that fraud linked to identity theft, particularly synthetic identity fraud, is on the rise, with the National Insurance Crime Bureau (NICB) expecting a 49% increase in 2025 alone. Nearly one-quarter of identity-related fraud referrals involve synthetically created identities, constructed from a mix of real and fabricated personal data.

Fraudsters are adopting AI faster than insurers can respond, making their schemes even more difficult to detect and combat. Synthetic personas are increasingly used to establish fake accounts, take out policies, and execute fraudulent claims across auto, home, and commercial lines, often bypassing traditional detection systems and automated checks.

Trends to Know

Synthetic Identity: The Invisible Threat

Synthetic identity fraud doesn’t steal one person’s identity outright; it creates a new one. Fraudsters combine real elements (like Social Security numbers or birthdates) with fake names, addresses, employment histories, or digital footprints to build credible personas. These identities can age slowly, build apparent credit histories, and then be used to:

  • Open fake insurance policies
  • Submit staged multi-policy claims
  • Funnel fraudulent payouts to shell entities

Because no single real person is victimized up front, detection is harder than with traditional identity theft, and the losses can be massive before the fraud is uncovered. Fraudsters use these personas to buy policies, pay premiums on time, and build trust with carriers before submitting a claim.

Synthetic identity schemes are one of the fastest-growing financial crimes globally.

In insurance, these cases increasingly appear across both personal and commercial lines.

The danger? A synthetic persona can maintain dozens of bogus policies, submit coordinated claims across carriers, and leave a network of bad data behind — all without tying directly back to a real individual or company.

Deepfakes: Beyond Photos and Videos

Deepfakes, AI-generated or manipulated media that convincingly depict people saying or doing things they never did, were once the stuff of sci-fi. Today, they’re an active threat in insurance fraud.

Fraudsters use deepfake technology to:

  • Clone voices and impersonate policyholders during customer support calls
  • Fake video evidence of accidents or injuries
  • Forge medical scans, damage photos, or repair invoices
  • Bypass biometric identity checks by presenting AI-generated faces or recordings
  • Impersonate claims adjusters or company executives to redirect payouts or steal sensitive data

These fabricated media can fool both humans and automated systems — leaving insurers with false verification cues and adjusters with convincing yet fraudulent evidence. Several vendors claim their technology can block cloned voices, but fraudsters are proving more sophisticated.

Insurance contact centers saw a 475% increase in synthetic voice fraud attacks in 2024.

Deepfakes now appear regularly in claims submissions, not just in spoof videos but also in supporting documentation such as invoices and repair photos.

The result? Traditional trust signals, like a familiar voice on a phone call, are rapidly losing reliability.

The New Race: Fraudsters vs. Insurers

While fraud techniques have grown more sophisticated, so too have the tools defenders use to fight back. Insurers are increasingly adopting AI-driven fraud detection systems that can:

  • Spot anomalous identity creation patterns
  • Analyze image and video metadata for manipulation artifacts
  • Track coordinated fraud networks across carriers
  • Deploy machine learning to score risk and flag suspicious claims
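As a hedged illustration of the risk-scoring idea above, the sketch below combines a few anomaly signals into a single claim score. All field names, weights, and thresholds are hypothetical; production systems use trained machine-learning models over far richer features, not hand-set rules like these.

```python
# Minimal sketch of claim risk scoring (hypothetical features and weights).
# Illustrative only: real fraud-detection systems learn these signals from
# data rather than using fixed rules.

def score_claim(claim: dict) -> float:
    """Return a 0-1 risk score from simple anomaly signals."""
    score = 0.0
    # A brand-new policy filing a claim quickly is a synthetic-identity signal
    if claim.get("policy_age_days", 9999) < 90:
        score += 0.3
    # Contact details reused across unrelated policies suggest a fraud network
    if claim.get("shared_contact_count", 0) > 2:
        score += 0.3
    # Missing or stripped image metadata can indicate manipulated evidence
    if not claim.get("photo_has_exif", True):
        score += 0.2
    # A claim amount far above the typical range for the line of business
    if claim.get("amount", 0) > claim.get("line_p95_amount", float("inf")):
        score += 0.2
    return min(score, 1.0)

flagged = score_claim({
    "policy_age_days": 30,
    "shared_contact_count": 5,
    "photo_has_exif": False,
    "amount": 50_000,
    "line_p95_amount": 20_000,
})
```

A score near 1.0 would route the claim to a human investigator rather than auto-deny it, since any single signal can be a false positive on its own.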

This race pits automated detection against automated deception, and staying ahead requires continuous investment in technology and data science. It will also be critical to train frontline call-center staff and adjusters to spot these schemes in their daily work.

Steps to Tackle AI-Based Fraud

  • Invest in AI and cross-platform analytics to detect non-obvious synthetic identity patterns. Buying technology that does not provide a comprehensive view will only create more gaps for fraudsters to exploit.
  • Use multimodal verification (biometric + behavioral + contextual) when onboarding customers.
  • Share fraud indicators industry-wide to catch repeat or networked fraud attempts.
  • Educate policyholders about new fraudster tactics.
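The multimodal verification step above can be sketched as requiring every modality to clear its own threshold, so that spoofing a single signal (say, a deepfaked face) is not enough to pass onboarding. The check names and thresholds here are hypothetical, not any particular vendor's API.

```python
# Sketch of multimodal onboarding verification (hypothetical signal names
# and thresholds). Each check yields a confidence in [0, 1]; onboarding
# passes only when every modality clears its own bar independently.

THRESHOLDS = {"biometric": 0.9, "behavioral": 0.7, "contextual": 0.8}

def verify_applicant(signals: dict) -> bool:
    """Require all three modalities to pass; missing signals fail."""
    return all(
        signals.get(modality, 0.0) >= threshold
        for modality, threshold in THRESHOLDS.items()
    )

# A strong face match alone should not pass if behavior looks scripted
ok = verify_applicant({"biometric": 0.95, "behavioral": 0.4, "contextual": 0.9})
```

The design choice is conjunctive rather than a weighted average: averaging lets one very strong (possibly AI-generated) signal mask weak ones, which is exactly the failure mode deepfakes exploit.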

The P&C insurance landscape is changing, and fraudsters are no longer limited by skill alone. With accessible generative AI tools, deepfake capabilities, and synthetic identity generators, fraud schemes are now scalable, automated, and dangerously convincing.

The good news? Fraud detection technologies are evolving, too. But the battle? It’s no longer about spotting a forged signature or staged accident. It’s about discerning what’s real in an era where even reality can be artificially manufactured.

Next Steps

If you would like to learn more about upcoming Auriemma Roundtables for P&C Claims, where members will discuss the actions their organizations are planning and implementing to fight AI-based fraud, please reach out to Ron Kifer, SVP of Insurance Services, for more information.
