Fraud fighters seek guardrails for data ethics

The digital age has changed the way insurers tackle fraud detection. A practice that once relied exclusively on boots on the ground and personal interviews can now detect and establish fraudulent behavior through data analysis. And as data analytics and rules-based engines give way to artificial intelligence and machine learning, fraud detection faces many of the same ethical concerns as other areas of insurance.

The speed and efficiency with which fraud investigations can now be conducted mean that more of them take place. But that doesn’t necessarily mean there’s more fraud out there. As special investigation units tune their digital processes to raise red flags, the Coalition Against Insurance Fraud (CAIF) has partnered with Protiviti on a survey about the ethical use of data in the fraud-fighting practice.

“We realized that this is the future of where investigation is headed, and there is a gap of no one doing high quality structured research around the appropriate guardrails for using that data to help protect both consumers and insurers from insurance fraud,” says Matthew Smith, executive director of the CAIF.

CAIF regularly surveys the use of technology in fighting insurance fraud. Smith says that in the most recent edition of that survey, CAIF found that “more referrals are coming to insurers by using data analysis than ever before” and that there are fewer “false positives,” where a referral leads to a case that turns out not to be fraudulent. Still, he says, it’s important to have a standard that reassures customers they aren’t at greater risk of being flagged in error now that more data analytics and AI are being applied to their cases.


“Every consumer right now has valid concerns over how their personal data is being collected, analyzed and used,” he says. “And insurers don’t want to use the data to chase down rabbit holes.”

Some examples of the kinds of fraud that are detectable using advanced data and AI, according to Smith, are:

Personal identity verification: “Stolen identities went up 600%, according to the United Nations, during COVID-19. A lot of those stolen identities are used on the dark web, and then they turn up buying insurance policies. In one case that we’re aware of, by the time the person realized their identity had been stolen, three claims had already been submitted under an automobile insurance policy.”

Photo identification: “As more and more carriers allow photos of damage to be submitted, some of those photos on the fraud side are ones that people purchase or re-use off the internet. The ability for data to analyze millions of those photographs instantly and say, ‘This photo of this bumper has been used 16 times on other insurance claims,’ already helps to identify fraud.”
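To make the photo-reuse check Smith describes concrete, here is a minimal sketch of how a duplicate-photo screen might work, assuming a simple “average hash” perceptual fingerprint and an in-memory index of previously seen claim photos. The function names, the distance threshold and the index are illustrative assumptions, not CAIF’s or any insurer’s actual pipeline, which would typically rely on more robust perceptual hashing and a dedicated image database.

```python
# Minimal sketch of duplicate-photo screening for submitted claim images.
# Assumptions: Pillow is installed; hashes and claim IDs are kept in a plain
# in-memory dict purely for illustration.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image, convert to grayscale, and encode each pixel as a bit:
    1 if brighter than the mean, 0 otherwise."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical index of photos already seen on prior claims: hash -> claim IDs.
seen_photos: dict[int, list[str]] = {}


def check_claim_photo(path: str, claim_id: str, max_distance: int = 5) -> list[str]:
    """Return claim IDs whose photos look nearly identical to this one,
    then record the new photo so future submissions can be compared."""
    new_hash = average_hash(path)
    matches = [
        cid
        for h, cids in seen_photos.items()
        if hamming_distance(new_hash, h) <= max_distance
        for cid in cids
    ]
    seen_photos.setdefault(new_hash, []).append(claim_id)
    return matches
```

Two photos whose fingerprints differ by only a few bits are very likely near-duplicates even after resizing or recompression, which is what would let a system report that the same bumper photo has already appeared on other claims.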

Another goal of the survey is to present its findings to the appropriate regulators. As with most insurance issues, the state-based regulatory framework means that carriers and groups like CAIF will be looking for consistent standards to make it easier for data- and AI-powered fraud fighting to continue.

“Our hope is that as that patchwork of laws and regulations starts being pieced together in the United States, that there’s some standard that states can agree to at this level,” Smith says. “And if we do get to the point where the Congress adopts a data privacy law, this study can be a crucial framework of making certain that we put on the appropriate guardrails, but we don’t slam the door on using data to protect our citizens from insurance fraud.”


Finally, Smith says, it’s important for insurers to realize that digital tools alone can’t tackle fraud. As ethical questions are weighed, it will take human intervention in digital processes to keep the guardrails in place.

“Some insurance carriers – and we have called them out for this – incorrectly believed that the use of AI, machine learning, and data analysis simply allows them to eliminate the human side of the anti-fraud fight from their special investigations unit,” he says. “Those types of CEOs or CFOs are going to put their companies at high risk. Because they’re flat out wrong.”