15/01/2026

Ethical checklist for developing AI models

Greg Pataki and James Fardell of the IFoA’s AI Ethics Working Party lay out the most important ethical issues to be addressed during AI model development. This is the sixth in a series of blogs by the working party.

The typical phases in developing a model are project planning, data management, model development, implementation and delivery, and communication and oversight. Although these steps apply to any model, some aspects need to be considered specifically for AI models. One of these is the ethical development of the AI model.

In this article we aim to identify the most important ethical issues to be addressed during model development, illustrated using the case study below. Although the case study is based on a life application, we believe there are useful takeaways for non-life and banking practitioners.

Note that we aim to cover issues specifically related to ethics. Requirements that apply generically to any model, such as documenting data lineage or communicating the workings of the model, are generally not covered.

Case study

A life insurance company currently uses its own internal mortality model for projecting mortality rates for the annuities it has written. It uses these rates in both pricing and reserving. The current mortality model (the ‘current model’) projects improvement factors for each age group and gender using a Lee-Carter model, with PCA used to fit the parameters.
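For context, here is a minimal sketch of the kind of fit the current model performs: a Lee-Carter decomposition, log m(x,t) ≈ a_x + b_x·k_t, estimated via SVD (the standard PCA-style fitting), with the period index projected as a random walk with drift. The mortality data below is synthetic and purely illustrative.

```python
# Illustrative Lee-Carter fit via SVD, with a random-walk-with-drift
# projection of the period index. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
ages, years = 40, 30
# Hypothetical log central mortality rates, improving over calendar time
log_m = (-8.0 + 0.09 * np.arange(ages)[:, None]
         - 0.015 * np.arange(years)[None, :]
         + rng.normal(0, 0.02, (ages, years)))

# Lee-Carter decomposition: log m(x,t) ~ a_x + b_x * k_t
a_x = log_m.mean(axis=1)                       # age pattern
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                  # normalise so sum(b_x) = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()             # period index, rescaled to match

# Project k_t as a random walk with drift, as in the classic Lee-Carter setup
drift = np.diff(k_t).mean()
k_future = k_t[-1] + drift * np.arange(1, 21)  # 20-year projection
projected_log_m = a_x[:, None] + np.outer(b_x, k_future)
```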

The company is considering replacing this with a deep neural network (a form of AI) trained on all available internal data, both current and historical (the ‘new model’). This will include accessing death records containing personal data such as forename, surname and, where applicable, National Insurance number.

The parameters within the current model are reviewed annually, and any recalibration must be approved by the company’s assumptions committee. The AI model, by contrast, is intended to use dynamic parameters that are updated as new mortality information becomes available and is processed through the model.

The directors are aware of challenges relating to the transparency and interpretability of deep neural networks but are eager to press ahead with the new model. A project team comprising actuaries and data scientists has been set up to develop the new AI model.

What ethical aspects would need to be considered at each stage of the project?

1. Project planning

  • Engage with internal stakeholders (for example compliance teams, underwriters, modellers) to gather initial perceptions on potential ethical issues that could arise from using an AI model with dynamically updated parameters to project mortality rates.
  • Identify protected characteristics (for example age, gender, ethnicity, religion) and identify vulnerable customer groups. 
  • Determine specific regulatory expectations (for example the FCA’s Consumer Duty) and ensure alignment with Technical Actuarial Standards (TAS), FRC guidance and professional codes. More specifically, Appendix 2 of the FRC’s Technical Actuarial Guidance on Models describes specific requirements on the use of AI and machine learning.
  • Define additional governance and oversight requirements. For example, consider extensive testing of outcomes on a granular level for a large number of model points (pricing) and individual policyholders (reserving). Compare outcomes from the new model with the current model.
  • Establish and perform an ethical assessment. Ensure that the new model supports fair and transparent outcomes for both pricing and reserving.
  • Ensure clear accountability in all project stages (data management, model development, implementation), with a single person or a committee taking responsibility for the decisions and outcomes of each stage. The assumptions committee needs to be able to rely on any assessment of the data and the model when considering final approval.

2. Data management

  • Ensure ethical use of data that complies with actuarial standards and GDPR, especially for personal and health data.
  • Document consent, especially for third-party or telematics data, or any personal health-related information that may have been used to train the new model.
  • Identify variables that may indirectly reflect protected characteristics and apply actuarial judgement on whether to disregard sensitive data (a proxy-detection sketch follows this list). For example, the new model may have used a combination of address and benefit information to infer a protected characteristic, such as ethnicity or gender.
  • Consider establishing new, or revising existing, data protection assessments to cover the new model.
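One practical, hedged way to surface proxy variables, as flagged above: try to predict the protected characteristic from candidate model inputs, and treat strong predictive power as evidence of leakage. The sketch below uses scikit-learn with entirely synthetic data; the feature names (postcode region, benefit amount) are hypothetical stand-ins for the address and benefit information mentioned above.

```python
# Hedged sketch: flag proxy variables by checking whether candidate model
# inputs can predict a protected characteristic. An AUC well above 0.5
# suggests the inputs jointly leak that characteristic, so actuarial
# judgement is needed on whether to keep them. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000
postcode_region = rng.integers(0, 10, n)      # hypothetical encoded address
benefit_amount = rng.lognormal(9, 0.5, n)     # hypothetical annuity benefit
# Synthetic protected flag, deliberately correlated with region
protected = (rng.random(n) < 0.1 + 0.06 * postcode_region).astype(int)

X = np.column_stack([postcode_region, np.log(benefit_amount)])
X_tr, X_te, y_tr, y_te = train_test_split(X, protected, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"Proxy-leakage AUC: {auc:.2f} (values well above 0.5 warrant review)")
```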

3. Model development

  • Ensure fairness in the new model, for example by applying preprocessing techniques to the training data, such as reweighting different groups to balance representation in the portfolio (see the reweighting sketch after this list). Apply data transformation techniques to remove indirect or proxy discrimination.
  • Ensure that the new model remains interpretable and that larger deviations from the current model are explainable. Maintain traceability of model inputs, outputs and decision logic. Consider using another (AI) model to check for potential bias in the new model.
  • Keep the structure of the neural network, its layers and the interactions between them, as intuitive as the problem allows.
  • Quantify model uncertainty and stress test assumptions. 
  • Include adverse scenario testing using synthetic data or historical stress events; a simple stress-testing sketch also follows this list.
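As referenced in the first bullet above, here is a minimal sketch of the reweighting idea, in the spirit of the well-known ‘reweighing’ preprocessing technique: each (group, outcome) cell receives a training weight so that group membership and outcome become statistically independent in the reweighted data. The group and outcome labels are synthetic.

```python
# Minimal reweighing sketch: weight each (group, outcome) cell by
# P(group) * P(outcome) / P(group, outcome), so group and outcome are
# independent in the reweighted training data. All data is synthetic.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, 10_000)            # e.g. two demographic groups
outcome = (rng.random(10_000) < np.where(group == 1, 0.3, 0.1)).astype(int)

weights = np.empty(len(group), dtype=float)
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (outcome == y)
        # expected cell frequency under independence, divided by observed
        expected = (group == g).mean() * (outcome == y).mean()
        weights[mask] = expected / mask.mean()

# `weights` can be passed as sample_weight to most training routines,
# balancing representation without discarding any data.
```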
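And a simple, hedged illustration of the stress-testing bullets: shock the assumed improvement factors and compare a crude annuity value under the base and stressed projections. The mortality basis, improvement factors and annuity formula below are simplified placeholders, not an actual pricing basis.

```python
# Illustrative adverse scenario test: apply a longevity shock to the
# improvement factors and measure the impact on a crude annuity value.
import numpy as np

ages = np.arange(60, 100)
base_q = 0.005 * np.exp(0.09 * (ages - 60))    # hypothetical base mortality
improvement = np.full(len(ages), 0.015)        # 1.5% p.a. improvement

def annuity_value(q, imp, rate=0.03, years=40):
    """Crude level annuity value for a life aged 60 (illustrative only)."""
    survival, value = 1.0, 0.0
    for t in range(years):
        qt = q[min(t, len(q) - 1)] * (1 - imp[min(t, len(imp) - 1)]) ** t
        survival *= (1 - qt)
        value += survival / (1 + rate) ** (t + 1)
    return value

base = annuity_value(base_q, improvement)
stressed = annuity_value(base_q, improvement + 0.01)  # faster improvements
print(f"Base: {base:.2f}, stressed: {stressed:.2f}, "
      f"impact: {100 * (stressed / base - 1):.1f}%")
```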

4. Implementation and delivery

  • Clearly communicate outputs to non-technical stakeholders (for example boards and regulators) using plain language that avoids technical jargon. Visualise improvement factors from the new model, the current model and any independent models available (for example CMI projections); a plotting sketch follows this list.
  • Provide the rationale behind automated decisions and use comparisons against the current model to understand the significance of the outcomes.
  • Conduct peer reviews and independent validations to verify expected actuarial outcomes, potentially making use of the current model.
  • Consider using another model to document traceability.
  • Monitor performance more frequently (for example quarterly) so that any potential bias is spotted early.
  • Quantify the impact on vulnerable customers and consider implementing limits on premiums and benefits quoted in pricing. Regularly check overall reserve adequacy and adequacy per customer group.
  • Apply fairness metrics to assess the level of bias and adjust model outputs in post-processing to reduce potential bias against certain groups (see the fairness-metric sketch after this list).
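A possible shape for the comparison visual suggested in the first bullet above, using matplotlib; all three series are synthetic placeholders, with the ‘external benchmark’ standing in for something like a CMI projection.

```python
# Illustrative comparison plot of improvement factors by age, across the
# current model, the new model and an external benchmark. Synthetic data.
import numpy as np
import matplotlib.pyplot as plt

ages = np.arange(60, 100)
current = 0.020 - 0.0002 * (ages - 60)    # hypothetical current-model factors
new = current + np.random.default_rng(3).normal(0, 0.002, len(ages))
external = 0.018 - 0.00015 * (ages - 60)  # hypothetical external benchmark

plt.plot(ages, current, label="Current model")
plt.plot(ages, new, label="New AI model")
plt.plot(ages, external, "--", label="External benchmark")
plt.xlabel("Age")
plt.ylabel("Annual improvement factor")
plt.title("Mortality improvement factors by model")
plt.legend()
plt.show()
```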
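And a minimal sketch of the fairness-metric check in the final bullet: compare model outputs across demographic groups using a simple group-mean gap and ratio. Group labels and outputs are synthetic; a real fairness audit would use richer metrics and control for legitimate risk factors.

```python
# Minimal fairness check: compare model outputs across two demographic
# groups. A persistent, unexplained gap would trigger a post-processing
# adjustment and committee review. All data is synthetic.
import numpy as np

rng = np.random.default_rng(4)
group = rng.integers(0, 2, 20_000)
# Hypothetical model outputs (e.g. projected improvement factors per life),
# with a small artificial disparity injected for illustration
output = rng.normal(0.015, 0.003, 20_000) + 0.001 * group

mean_0 = output[group == 0].mean()
mean_1 = output[group == 1].mean()
print(f"Group means: {mean_0:.4f} vs {mean_1:.4f}, "
      f"gap: {mean_1 - mean_0:.4f}, ratio: {mean_1 / mean_0:.2f}")

# A crude post-processing mitigation: shrink each group's outputs toward
# the overall mean. Any such adjustment should be justified and documented.
overall = output.mean()
adjusted = output + 0.5 * (overall - np.where(group == 1, mean_1, mean_0))
```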

5. Communication and oversight

  • Explain the assumptions, workings and limitations of the new model, where appropriate, to clients, regulators and the public.
  • Disclose how fairness, bias and uncertainty are managed, considering the use of fairness audits across different demographic groups.
  • The impact of the automated updates to the dynamic parameters should be reviewed at least annually and explicitly approved by the assumptions committee.
  • Embed ethical AI principles into actuarial governance frameworks, which may include cross-disciplinary panels of actuaries, data scientists and senior leadership.
  • Ensure senior actuaries champion responsible AI use, promoting ethical awareness.
  • Consider the societal impact of actuarial AI (for example access to insurance, pension fairness), ensuring the AI mortality model does not reinforce exclusionary practices.
  • Be transparent about how AI supports or challenges actuarial professionalism, ensuring the mortality model does not obscure accountability.

 

Read more

See all blogs in this series by the AI Ethics Working Party.

The first appeared in The Actuary: Check your AI: a framework for its use in actuarial practice.
