Imagine the scenario: you’re tweaking the parameters of the latest autonomous car software, and you have the unenviable task of deciding how it should behave in an inevitable road collision. The car has suddenly detected a family of four crossing the road ahead and cannot stop in time. The car (well, the software) must decide whether to proceed and kill the family or take action to swerve into a barrier and kill the driver. Which choice should it make?
MIT’s Moral Machine study asked five million participants across 130 countries just such a question. The scenarios involved a car colliding with varying combinations of passengers and pedestrians: men, women, children, the elderly, animals and more. In this example, most study participants indicated the car should swerve and kill the driver.
This is consistent with Jeremy Bentham’s consequentialist theory of utilitarianism, which holds that the ethical action is the one that minimises total harm. Superficially, many actuaries appear to agree. When a similar question was put to the audience at GIRO 2023, the majority was in favour of the car swerving and killing the driver – or, to put it another way, the audience would choose to kill a driver to save a family… In this scenario, the audience of actuaries considered that the ends justified the means.
There is an alternative view: Immanuel Kant’s deontological theory of the categorical imperative. In layman’s terms, the individual actions taken to achieve an outcome must themselves be ethical, even if the end result is considered less desirable overall. In this case, taking an action that kills the driver is considered unethical, even though fewer deaths would result; it is considered more ethical for the car to continue on its path and kill the family. A sizeable minority of our GIRO audience voted in favour of this decision.
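To make the contrast concrete, here is a minimal sketch in Python – the scenario names, fatality counts and policy functions are purely illustrative assumptions, not how any real vehicle is programmed – showing how the two doctrines encode opposite decision rules over the same inputs:

```python
# Illustrative sketch only: hypothetical options and counts, not real vehicle logic.
from dataclasses import dataclass

@dataclass
class Option:
    name: str               # description of the manoeuvre
    deaths: int             # total fatalities if this option is taken
    is_intervention: bool   # True if the car actively changes course

def utilitarian_choice(options):
    """Bentham: the ethical option is whichever minimises total harm."""
    return min(options, key=lambda o: o.deaths)

def deontological_choice(options):
    """Kant (simplified): never take an active step that kills,
    even if inaction leads to more deaths overall."""
    passive = [o for o in options if not o.is_intervention]
    return passive[0] if passive else min(options, key=lambda o: o.deaths)

options = [
    Option("continue and hit the family", deaths=4, is_intervention=False),
    Option("swerve into the barrier", deaths=1, is_intervention=True),
]

print(utilitarian_choice(options).name)    # -> swerve into the barrier
print(deontological_choice(options).name)  # -> continue and hit the family
```

Run as written, the utilitarian rule chooses the swerve while the deontological rule chooses inaction – the same disagreement our GIRO audience displayed.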
This case study highlights that people can make opposing choices and both still be acting ethically – they’re just following different ethical doctrines.
Stepping into the actuarial field more broadly, the Actuaries’ Code states that “Members should speak up if they believe, or have reasonable cause to believe, that a course of action is unethical or is unlawful.” As we can see, whether something is ethical may not be obvious: two actuaries can look at the same situation and reasonably deem two opposing views to both be ethical.
The challenge for actuaries is therefore to balance the relative merits of ethical doctrines to decide which to apply in a given scenario. This is hard, for several reasons:
Most actuaries don’t know what formal ethical doctrines exist in the first place, let alone their relative strengths and appropriateness. There is limited formal education on ethical theory for actuaries, although the annual professionalism courses provide some light-touch ethical teaching in a practical setting.
Even if actuaries did know all the possible ethical doctrines and the relative pros and cons of each, applying the theory to real-world scenarios is difficult. This is particularly the case where conflicts of interest, biases or incentives exist that could encourage individuals to consider one perspective more ethical than another.
Different cultures around the world have different standards and expectations. For example, in the Moral Machine study, the UK showed a relative preference for inaction compared with, say, India. This presents a challenge to an international organisation such as the IFoA, which needs to consider how ethics applies across international boundaries.
Societal attitudes change over time, so decisions made in the past may not be deemed ethically acceptable by the same society in the future.
The IFoA, in collaboration with the Royal Statistical Society, recently published ‘A Guide for Ethical Data Science’. This document provides a useful and much-needed practical contribution to understanding how to embed ethical practice in actuaries’ day-to-day work.
The guide sets out five themes within data science, together with useful practical examples of how each can be delivered: seek to enhance the value of data science for society; avoid harm; apply and maintain professional competence; seek to preserve or increase trustworthiness; and maintain accountability and oversight.
While the guide is a great starting point, one should be aware that some innate conflicts can remain. For example, actuaries should “avoid harm” – a predominantly Kantian principle – but also “seek to enhance the value of data science for society” – a predominantly Benthamite one.
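To illustrate the point, here is a hedged sketch – the premium figures and the pass/fail tests are hypothetical assumptions, not a prescribed methodology – of how a single pricing change can satisfy one principle while breaching the other:

```python
# Hypothetical example: a more granular risk model lowers premiums in aggregate
# (a Benthamite "enhance value" win) while raising them sharply for one group
# (a Kantian "avoid harm" failure).

baseline = {"group_a": 100.0, "group_b": 100.0}   # current annual premiums
proposed = {"group_a": 70.0,  "group_b": 120.0}   # premiums under the new model

total_change = sum(proposed.values()) - sum(baseline.values())
worst_individual_change = max(proposed[g] - baseline[g] for g in baseline)

enhances_value = total_change <= 0           # aggregate premiums fall
avoids_harm = worst_individual_change <= 0   # no group is left worse off

print(enhances_value, avoids_harm)  # -> True False: the principles conflict
```

Which test should prevail is exactly the kind of doctrinal judgement the guide leaves to the actuary.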
As we saw in our case study, in circumstances where you cannot clearly satisfy both principles, or where you have to break one, it is not clear which decision an actuary should make. As the field of data science within the profession evolves, actuaries will need to recognise the real-world impact of their analysis, advice and decisions in this context.
Disclaimer: All views expressed are those of the author, Murray Lidgitt, and may not reflect those held by his employer or any other associated entities or individuals.