Insurance fraud in property and casualty lines isn’t new. Staged accidents, exaggerated damages, and fraudulent claims have been part of the industry for decades. But what is new is the technology driving both fraud and defenses. Artificial intelligence (AI), generative tools, and synthetic media have become the latest tools in the fraudster’s toolkit, raising the stakes for insurers and policyholders alike.
Recent industry analysis shows that fraud linked to identity theft, particularly synthetic identity fraud, is on the rise, with the National Insurance Crime Bureau (NICB) expecting a 49% increase in 2025 alone. Nearly one-quarter of identity-related fraud referrals involve synthetically created identities, constructed from a mix of real and fabricated personal data.
Fraudsters are adopting AI faster than insurance companies can respond, making these schemes even more difficult to detect and combat. Synthetic personas are increasingly used to establish fake accounts, take out policies, and execute fraudulent claims across auto, home, and commercial lines, often bypassing traditional detection systems and automated checks.
Synthetic identity fraud doesn’t steal one person’s identity outright; it creates a new one. Fraudsters combine real elements (such as Social Security numbers or birthdates) with fake names, addresses, employment histories, or digital footprints to build credible personas. These identities can age slowly and build apparent credit histories before being put to use.
Because no single real person is victimized up front, detection is harder than with traditional identity theft, and losses can be massive before the fraud is uncovered. Fraudsters use these personas to buy policies, pay premiums, and build trust with carriers before submitting a claim.
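To make the pattern concrete, here is a minimal, illustrative sketch (not any carrier's actual system) of one well-known synthetic-identity signal: the same Social Security number appearing under multiple distinct names across applications. The function name, field names, and records are all hypothetical.

```python
from collections import defaultdict

def flag_shared_ssns(applications):
    """Flag SSNs that appear under more than one distinct name, a common
    synthetic-identity signal: fraudsters reuse a real SSN while pairing
    it with fabricated names and other personal details."""
    names_by_ssn = defaultdict(set)
    for app in applications:
        names_by_ssn[app["ssn"]].add(app["name"].lower())
    return {ssn for ssn, names in names_by_ssn.items() if len(names) > 1}

# Illustrative records: one SSN shows up under two different identities.
apps = [
    {"ssn": "123-45-6789", "name": "Jane Roe"},
    {"ssn": "123-45-6789", "name": "John Smith"},
    {"ssn": "987-65-4321", "name": "Ann Lee"},
]
print(flag_shared_ssns(apps))  # → {'123-45-6789'}
```

Production systems cross-reference far more attributes (addresses, devices, credit-file depth) and do so across carriers, but the underlying idea is the same: look for real identifiers recombined into inconsistent personas.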
Synthetic identity schemes are one of the fastest-growing financial crimes globally.
In insurance, these cases increasingly appear across both personal and commercial lines.
The danger? A synthetic persona can maintain dozens of bogus policies, submit coordinated claims across carriers, and leave a trail of bad data behind, all without tying directly back to a real individual or company.
Deepfakes, AI-generated or manipulated media that convincingly depict people saying or doing things they never did, were once the stuff of sci-fi. Today, they’re an active threat in insurance fraud.
Fraudsters use deepfake technology to impersonate policyholders on verification calls and to fabricate supporting evidence for claims.
These fabricated media can fool both humans and automated systems, leaving insurers with false verification cues and adjusters with convincing yet fraudulent evidence. Several vendors claim their voice-authentication technology cannot be beaten by cloned voices, but fraudsters are proving more sophisticated.
Insurance contact centers saw a 475% increase in synthetic voice fraud attacks in 2024.
Deepfakes now appear regularly in claims submissions, not just in spoof videos but also in supporting documentation such as invoices and repair photos.
The result? Traditional trust signals, like a familiar voice on a phone call, are rapidly losing reliability.
While fraud techniques have grown more sophisticated, so too have the tools defenders use to fight back. Insurers are increasingly adopting AI-driven fraud detection systems that can flag anomalous claim patterns, cross-check identity data across applications, and detect signs of manipulated media.
This race pits automated detection against automated deception, and staying ahead requires continuous investment in technology and data science. Just as critical is training frontline call-center staff and adjusters to spot this fraud in their daily work.
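As an illustration of the simplest kind of automated check such detection systems build on, the sketch below flags claim amounts that are statistical outliers using a z-score. Real fraud models score many features at once; this toy example, with a hypothetical function name and made-up data, shows only the basic idea.

```python
from statistics import mean, stdev

def flag_outlier_claims(amounts, z_threshold=1.5):
    """Return indices of claim amounts whose z-score exceeds a threshold,
    a minimal stand-in for the statistical anomaly scoring that AI-driven
    fraud detection performs across many more features."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > z_threshold]

# Five routine auto claims and one wildly inflated one.
claims = [1200, 950, 1100, 1300, 1050, 48000]
print(flag_outlier_claims(claims))  # → [5]
```

A flagged index would route the claim to a human investigator rather than auto-deny it; in practice, anomaly scores are one input among many, alongside identity checks and media forensics.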
The P&C insurance landscape is changing, and fraudsters are no longer limited by skill alone. With accessible generative AI tools, deepfake capabilities, and synthetic identity generators, fraud schemes are now scalable, automated, and dangerously convincing.
The good news? Fraud detection technologies are evolving, too. But the battle? It’s no longer about spotting a forged signature or staged accident. It’s about discerning what’s real in an era where even reality can be artificially manufactured.
If you would like to learn more about upcoming Auriemma Roundtables for P&C Claims, where members will discuss the actions their organizations are planning and implementing to fight AI-based fraud, please reach out to Ron Kifer, SVP of Insurance Services.