Shallow, deepfakes could impact insurance claims

The insurance industry is likely to grapple with shallowfakes and deepfakes: materials that are altered using editing software, and audio, photos and video that have been generated using artificial intelligence.

While AI could help identify potentially fake materials, Rajesh Iyer, global head of machine learning and generative AI for financial services at Capgemini, said a safe bet is for insurance carriers to verify documents when there are inconsistencies.

“‘I’m always going to make a call’ isn’t happening near as much as we need to do,” Iyer said. “I think it’s because it hasn’t become that big of a problem. But I think that’s what we need to do. We can’t trust documents, we can’t trust photos, we can’t trust videos, we can’t trust recordings. It’s just gotten to that world where I think we just need to have that second layer of, let’s say, corroboration that needs to happen with these documents.”

A recent cyber claims report from Coalition, a cyber insurer, indicates that phishing emails are becoming harder to detect because they’re being crafted with AI.

Shelley Ma, incident response lead at Coalition Incident Response, said in an email to Digital Insurance: “We’ve witnessed firsthand a disturbing trend of attackers exploiting AI tools to extort money from our policyholders. A particularly alarming example is the use of deepfakes, a technology that can convincingly mimic senior business leaders, to coerce defenders into making fraudulent bank transfers to an adversary. These deepfakes are so realistic that they can easily deceive even the most vigilant individuals.”

Iyer suggested that there may be a future where corroboration can be made automatic using generative AI. 

“Can you troll through the internet and see if it actually makes sense? And if you have issues you can surface that to a person that is going to make the calls and do an investigation?”
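
Read as a pipeline, Iyer’s suggestion is a human-in-the-loop design: run cheap automated checks first and escalate only the inconsistencies to an investigator. The Python sketch below is purely hypothetical; the Document type and the stubbed check stand in for real verification services (reverse image search, public-records lookups) and are not an existing API.

```python
# Hypothetical sketch: automated checks first, human escalation second.
# The stubbed check stands in for real verification services; none of these
# names come from an existing API.
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    issues: list[str] = field(default_factory=list)

def automated_checks(doc: Document) -> Document:
    # Stub: a real system would call external services here (reverse image
    # search, public-records lookups) and record anything that fails to check out.
    if doc.name.endswith(".jpg"):
        doc.issues.append("image source not yet corroborated")
    return doc

def triage(docs: list[Document]) -> list[Document]:
    """Run automated checks and return only the documents needing human review."""
    return [d for d in map(automated_checks, docs) if d.issues]

for doc in triage([Document("claim_photo.jpg"), Document("invoice.pdf")]):
    print(f"escalate to investigator: {doc.name} ({'; '.join(doc.issues)})")
```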

Scot Barton, chief product officer at Carpe Data, a data provider for insurance carriers, said his company is working on exposing its model to fake images. 

Barton said: “You have to use AI to battle AI. AI-powered tools are highly effective in combating deepfake fraud and are becoming more advanced. These tools can analyze patterns in claims to identify inconsistencies, review photo metadata (creation date, camera type, etc.) for validation, and scan public databases for potential matches. Additionally, they closely examine image details to identify signs of manipulation in textures, lighting, or backgrounds.

“Carpe Data is experimenting with how we train our model by exposing it to a combination of open-source datasets and in-house generated fake images. This allows the model to become better at differentiating between real and manipulated images and could be instrumental in catching fraudulent deepfakes before they lead to payouts.”
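
In practice, the metadata review Barton describes can start with something as simple as reading EXIF fields from a claim photo. The Python sketch below, using the Pillow library, flags images whose metadata is missing or shows signs of editing; the specific fields and red flags are illustrative assumptions, not Carpe Data’s actual pipeline.

```python
# Minimal sketch: read EXIF metadata from a claim photo and collect red flags.
# Field choices and flags are illustrative, not a production fraud pipeline.
from PIL import Image
from PIL.ExifTags import TAGS

def extract_exif(path: str) -> dict:
    """Return EXIF tags as a {tag_name: value} dict, empty if none are present."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}

def flag_suspicious(path: str) -> list[str]:
    """Collect simple red flags; missing metadata alone is not proof of fraud."""
    tags = extract_exif(path)
    flags = []
    if not tags:
        flags.append("no EXIF metadata (common in AI-generated or re-saved images)")
        return flags
    if "DateTime" not in tags:
        flags.append("missing creation date")
    if "Make" not in tags and "Model" not in tags:
        flags.append("missing camera make/model")
    software = str(tags.get("Software", ""))
    if software and any(s in software for s in ("Photoshop", "GIMP")):
        flags.append(f"processed with editing software: {software}")
    return flags

for flag in flag_suspicious("claim_photo.jpg"):
    print("FLAG:", flag)
```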

Barton added that some images are likely easy to spot as fake. “Free image generation technologies often leave experienced adjusters with a sense of an ‘uncanny valley’ that is easily identified as fake, but when [people] are working on small screens, strapped for time or dealing with a large number of claims, the subtler clues to falsification often slip by unnoticed.”
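
The training approach Barton outlines, fine-tuning an image classifier on a mix of open-source real photos and in-house generated fakes, might look roughly like the following PyTorch sketch. The dataset layout, backbone, and hyperparameters here are illustrative assumptions rather than Carpe Data’s actual setup.

```python
# Minimal sketch: fine-tune a pretrained classifier to separate real photos
# from generated fakes. Paths, backbone, and hyperparameters are assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Expects class subfolders, e.g. data/train/real/... and data/train/fake/...
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),  # matching the backbone
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained backbone with a new two-class head: real vs. fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # short run, for illustration only
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```

Because ImageFolder takes its labels from the directory names, adding a new batch of generated fakes to the training mix is just a matter of dropping them into the fake/ folder and retraining.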

Iyer said that there needs to be additional scrutiny.

“There has got to be this education that we can’t believe what we see with our eyes. I think it’s a very different way of thinking about things. … We always say, ‘I won’t believe it, till I see it with my own eyes,’ well that makes no sense in today’s world. I’m not going to believe it even if I see it with my own eyes is more like it.”

We’ll be discussing some of these topics at this year’s DIGIN conference in Boca Raton, Florida, on June 27-28.