'Rigorous protections essential': AI bias threatens human rights 


Without adequate safeguards, algorithmic bias in the use of artificial intelligence (AI) may lead to discrimination on the basis of age, race, disability, gender and other characteristics, the Actuaries Institute and the Australian Human Rights Commission say. 

The two organisations have jointly published a new guide to help insurers avoid breaching anti-discrimination laws when using AI. Underwriters must ensure all assumptions are based on reasonable evidence, they say. 

While AI promises “faster and smarter” decision making, this “may affect people’s basic rights,” Human Rights Commissioner Lorraine Finlay said. 

“It is essential that we have rigorous protections in place to ensure the integrity of our anti-discrimination laws,” she said. 

The joint publication provides practical guidance and case studies to help proactively address the risk when using AI, which can aid pricing, underwriting, marketing, claims management and internal operations. 

Actuaries Institute CEO Elayne Grace says there is an urgent need for guidance to assist actuaries, as practitioners currently have little established guidance or case law to draw on. 

“The complexity arising from differing anti-discrimination legislation in Australia at the federal, state and territory levels compounds the challenges facing actuaries, and may reflect an opportunity for reform,” she said, adding that the “explosive” growth of big data increases the use and power of AI and algorithmic decision-making. 

“Actuaries seek to responsibly leverage the potential benefits of these digital megatrends. To do so with confidence, however, requires authoritative guidance to make the law clear,” Ms Grace said.  

KPMG also says fundamental ethics questions are being raised by widespread adoption of AI, and this requires careful governance and oversight to manage a “new and ill-understood” set of trust issues. 

“The danger is that these technologies, if badly handled, raise cybersecurity and privacy risks with potential for reputational damage and regulatory sanction,” KPMG said. 

Microsoft says it is taking action on “adversarial” AI, such as data poisoning, machine drift and AI targeting, which it expects “will be the next wave of attack”. 

KPMG Partner Sander Klous says the technology has the potential to “drive inequality and violate privacy, as well as limiting the capacity for autonomous and individual decision-making”. 

“You can’t simply blame the AI system itself for unwanted outcomes. Trustworthy, ethical AI is not a luxury, but a business necessity,” he said. 

Management should carefully assess compliance with laws and regulations, he says, with “traceable and auditable” decisions. 
