Artificial intelligence is rapidly transforming the insurance landscape – from underwriting and pricing to customer engagement and fraud detection. But with this transformation comes a pressing question: who should govern AI to ensure it is fair, reliable, ethical, and accountable?
As AI and automation advance, actuaries are already applying AI governance principles – such as bias assessment, explainability, model controls, and validation – within their own actuarial models.
However, with their deep expertise in risk, statistical techniques, model oversight, and regulation, actuaries are exceptionally well placed to lead AI governance at a broader organisational level. Here’s why.
AI governance is, at its core, about managing risk: model risk, reputational risk, regulatory risk, operational risk, and ethical risk. This is not new territory for actuaries. For decades, we have built frameworks to quantify and manage long-term, uncertain outcomes – precisely the kind of thinking AI governance demands.
Our ability to take a holistic view of risks, assess them proportionately, and embed controls across the model lifecycle positions actuaries as trusted stewards of AI-enabled systems. While AI introduces new forms of uncertainty, the underlying discipline of risk management remains comfortably familiar.
Predictive models are central to both actuarial work and modern AI. Actuaries are already well versed in model design, validation, and performance monitoring. We understand how models can fail, how they can be misunderstood, and how bias – whether in assumptions, data, or methodology – can creep in unnoticed.
AI models, especially those used in pricing, claims triage, or risk scoring, require rigorous oversight. Actuaries are trained to question data quality, test for robustness, and communicate uncertainty. These capabilities are essential in AI governance – particularly in regulated environments where explainability and fairness are not optional.
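As a concrete illustration of the kind of oversight described above, the sketch below shows a simple demographic-parity check on model outputs – one of many possible fairness tests, using hypothetical data and a made-up grouping purely for illustration, not a prescribed actuarial standard.

```python
# A minimal sketch of a fairness check an actuary might run on the
# decisions of a claims-triage or pricing model before sign-off.
# The data and group labels below are hypothetical.

def demographic_parity_gap(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions.

    Returns (gap, rates), where rates maps each group to its
    favourable-outcome rate and gap is the largest difference
    in those rates across groups."""
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions split by a monitored characteristic.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 favourable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 favourable
}

gap, rates = demographic_parity_gap(decisions)
print(f"Favourable-outcome rates: {rates}")
print(f"Demographic parity gap: {gap:.3f}")  # a large gap warrants review
```

In practice a check like this would sit alongside robustness tests and documentation within the model-governance framework, with any material gap escalated for investigation rather than treated as automatically acceptable or unacceptable.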
As professionals, actuaries are bound by well-established codes of conduct and standards of practice. These frameworks emphasise transparency, objectivity, and accountability – qualities central to the responsible deployment of AI. The actuarial profession is built around the idea of exercising sound judgement in high-impact decisions and clearly documenting our rationale, limitations, and assumptions. In this regard, actuaries bring a deeply embedded culture of ethical governance to AI use.
One of the most underestimated aspects of AI governance is the ability to translate technical complexity into business reality. Actuaries routinely explain model outcomes to boards, justify pricing decisions to regulators, and work with underwriters, IT teams, and compliance officers.
In an AI governance context, this cross-functional fluency enables actuaries to connect technical, commercial, and regulatory perspectives across the organisation.
AI governance requires a broad, systemic view – exactly what actuarial training is designed to cultivate.
With jurisdictions and professional bodies around the world introducing AI regulations and frameworks, actuaries can play a central role in ensuring compliance. We are already deeply embedded in regulatory dialogue and familiar with the documentation, audit trails, and model risk standards that regulators demand.
Beyond compliance, actuaries bring a long-term perspective. That’s essential in anticipating unintended consequences of AI and evaluating the societal impact of algorithmic decisions over time. We are trained to think in systems, operate with prudence, and bring future-focused judgement to bear on emerging risks.
AI governance is not just about data or technology. It’s about trust, judgement, and accountability – areas where actuaries are particularly well suited to contribute. Our profession has a critical role to play in shaping responsible, human-centric AI adoption in insurance and beyond.
To rise to this challenge, actuaries must continue to build on these strengths and engage actively with AI governance as it develops.
The future of insurance may be shaped by algorithms, but it will be guided by human oversight. Actuaries are among the best equipped to ensure that guidance is principled, proportionate, and worthy of trust.
This is the second in a series of blogs by the AI Ethics Working Party. The first appeared in The Actuary:
Check your AI: a framework for its use in actuarial practice