01/10/2025

All bugs are insects, not all insects are bugs: automation and AI in insurance

This blog explores the crucial differences between automation and AI in the insurance industry, highlighting how each presents unique ethical risks and governance challenges. This is the third in a series of blogs by the AI Ethics Working Party.

“All bugs are insects, but not all insects are bugs.” 

Although the saying is widely known and used in non-scientific contexts, the difference between insects and bugs confuses many people, probably including actuaries. Insects and bugs go through dramatic transformations over their lifetimes, a process that can take anywhere from eight days (fruit flies) to 15 years (cicadas).

Automation and AI, like bugs and insects, are often conflated – even though they evolve in distinct ways and pose different challenges. 

Automation means setting up systems to perform tasks automatically, following a pre-determined set of rules with minimal human intervention. A given set of inputs will always produce the same output. Human intelligence, such as interpretation and decision-making, is not involved.

AI refers to computer systems capable of performing tasks that require human intelligence, such as reasoning, decision-making, problem-solving or understanding natural language. AI encompasses a wide range of technologies, including machine learning, deep learning and natural language processing (NLP). Both the development and use of such systems require executing a (very) large number of iterations and are themselves highly automated.

In summary, all AI systems require automation, but not all automated systems involve AI. 
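To make the distinction concrete, here is a minimal sketch; all names, thresholds and data are illustrative and not drawn from any real pricing system. The first function is automation: a fixed rule that always maps the same inputs to the same output. The second is AI in miniature: a model whose decision rule is learned from data rather than written down by a human.

```python
# Automation: a fixed, pre-determined rule. The same inputs always
# produce the same output, and the logic is fully human-authored.
def automated_loading(age: int, smoker: bool) -> float:
    loading = 1.0
    if age > 60:
        loading += 0.25   # illustrative threshold
    if smoker:
        loading += 0.50
    return loading

# AI in miniature: the decision rule is inferred from (toy) data.
# Retraining on different data would yield different decisions.
from sklearn.linear_model import LogisticRegression

X = [[35, 0], [62, 1], [58, 0], [70, 1]]  # [age, smoker], toy records
y = [0, 0, 1, 1]                          # 1 = claim occurred, say
model = LogisticRegression().fit(X, y)

print(automated_loading(62, True))        # deterministic: always 1.75
print(model.predict_proba([[62, 1]]))     # learned: depends on the data seen
```

Note that the model itself is produced by a highly automated training loop, which is the sense in which all AI requires automation.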

 

Why is the distinction important?

The distinction between automation and AI is important in addressing ethical issues around two major trends in the insurance industry: portfolio consolidation and increasing customer segmentation. 

An example of the former is closed-book life and pension portfolios (life aggregators and bulk annuity purchasers), which require a high degree of automation. Efficiency is achieved by performing repetitive, unchanging tasks with a high degree of accuracy, leading to lower operating costs.

An example of the latter is multinational insurance or tech companies (Ping An Insurance, Alibaba, Google) tapping into their vast customer databases to differentiate segments using AI and offering insurance coverage at tailor-made prices. This is sometimes referred to as the individualisation of insurance. Efficiency is achieved by growing the (profitable) customer base through premiums that more accurately reflect the risks of those customers. This business model focuses primarily on growth, although arguably all the benefits of automation apply here as well.

These two trends can exist simultaneously within the same company. Increased automation leads directly to lower running costs and the release of expense reserves. Combined with other portfolio optimisation techniques, such as hedging or reinsurance, it can also reduce risk capital. The freed-up capital can then be redeployed in acquiring new customers using sophisticated AI models.

 

Ethical dilemmas in automation and AI

Both automation and AI present ethical issues and dilemmas, but the level of risk and the potential harm are quite different as shown below.

 

Transparency

Risk: Lack of transparency

  • Automation – low risk: Processes being automated must first be understood, mapped out, documented and tested.
  • AI – high risk: Complex, multi-dimensional models making autonomous decisions. 

 

Data privacy

Risk: Misuse of private data

  • Automation – low risk: Uses structured and well-defined data. 
  • AI – high risk: Uses unstructured data from various obscure sources.

 

Bias

Risk: Biased decision-making

  • Automation – low risk: Decisions are programmed; output is deterministic based on a set of pre-defined inputs. 
  • AI – high risk: Autonomous decision-making; features may be extracted and used without regard to protected characteristics.

 

Unintended consequences

Risk: Exploiting vulnerable customers

  • Automation – low risk: The process is deterministic; outcomes are predictable and traceable, so unintended consequences can be detected.
  • AI – high risk: Outcomes are not easily predictable or traceable; harm may only be identified after the event, from observed outcomes.

 

Accountability

Risk: Inadequate controls and governance

  • Automation – low risk: Controls and testing at implementation and then periodically.
  • AI – high risk: Continuous control testing on all aspects (data, model and outcomes).

 

Harm potential

Risk: Likelihood of causing harm

  • Automation – low risk: Automated solutions have many providers, so herding behaviour is less likely.
  • AI – high risk: AI models are currently developed by a handful of companies, so herding behaviour is more likely.

 

Risk management approach

The different levels of risk require different approaches to managing them, often within the same company. 

As an example, streamlining and automating a reporting process end-to-end presents a low risk of bias, as each step can be well defined and traced to its inputs and outputs. The risk of unintended outcomes is low, although still present, since humans can also make errors.

On the other hand, determining the pricing mortality basis with an AI model that uses structured and unstructured data from both internal and external sources can lead to unintentional discrimination against vulnerable customer groups (by using protected characteristics, directly or through proxies).
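One way to catch this before a model goes live is to test candidate rating factors for proxy effects. The sketch below is a deliberately crude illustration, with entirely made-up data and an arbitrary threshold: it flags any feature that correlates strongly with a protected attribute for human review.

```python
import pandas as pd

# Toy data: `postcode_band` is not itself protected, but in this synthetic
# example it tracks a protected attribute closely and would act as a proxy.
df = pd.DataFrame({
    "postcode_band":       [1, 1, 2, 2, 3, 3, 4, 4],
    "protected_attribute": [0, 0, 0, 1, 1, 1, 1, 1],
})

# Crude proxy check: strong correlation with a protected attribute
# flags the feature for human review before it enters the pricing model.
corr = df["postcode_band"].corr(df["protected_attribute"])
if abs(corr) > 0.5:  # threshold is illustrative only
    print(f"Review feature: correlation with protected attribute = {corr:.2f}")
```

In practice a proxy can hide in combinations of features, so multivariate checks and outcome monitoring are also needed; the point is simply that the testing must be deliberate, not incidental.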

The risk management approach should also consider where in the insurance company AI or automation is applied. An AI model making investment decisions is also subject to bias but is likely to cause less harm, as there are no ‘vulnerable’ investments. Because investments are driven by the duration and nature of liabilities, bias can be mitigated by traditional risk management tools such as limit setting, both individually and in aggregate, by factors such as credit rating, geography and industry.
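A minimal sketch of such a limit framework follows; the factor names, limits and figures are all hypothetical. The AI model proposes an investment, and a deterministic guard-rail checks the proposal against aggregate limits before it is executed.

```python
from collections import defaultdict

# Hypothetical aggregate limits per factor value.
LIMITS = {
    ("rating", "BBB"): 100_000_000,
    ("geography", "emerging_markets"): 50_000_000,
    ("industry", "energy"): 75_000_000,
}

exposures = defaultdict(float)  # running aggregate exposures

def within_limits(amount: float, factors: dict) -> bool:
    """Reject any proposed trade that would breach an aggregate limit."""
    return all(
        exposures[(f, v)] + amount <= LIMITS[(f, v)]
        for f, v in factors.items()
        if (f, v) in LIMITS
    )

# Usage: the AI model proposes; the deterministic limit framework disposes.
proposal = {"rating": "BBB", "geography": "emerging_markets", "industry": "energy"}
if within_limits(40_000_000, proposal):
    for f, v in proposal.items():
        exposures[(f, v)] += 40_000_000
```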

The approach to risk management should also recognise that bugs in automation cause issues that are usually uncovered through robust testing. Bugs in AI (such as hallucinations) can never be fully anticipated or avoided, given the probabilistic nature of LLMs, which makes risk management and robust governance even more important.

Actuaries and risk managers need to accurately identify the presence of AI in an automated process or model. That is, they must correctly distinguish insects from bugs.

 

Level of assurance

We expect that a high level of assurance is required to evidence the presence or absence of AI in a process. This means that negative assurance (‘there are no signs of AI’) is unlikely to be sufficient; positive assurance (‘based on the testing performed, no AI is involved’) is required.
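What might such substantive testing look like? One simple component, sketched below with entirely hypothetical library and dependency names, is to scan a process's declared software dependencies against a maintained watch-list of AI/ML components, so the conclusion becomes ‘we tested and found none’ rather than ‘we saw no signs’.

```python
# Hypothetical watch-list of AI/ML components; in practice this would be
# maintained centrally, and the dependency set drawn from a build manifest
# or environment scan of the process under review.
KNOWN_AI_COMPONENTS = {"torch", "tensorflow", "sklearn", "transformers", "openai"}

process_dependencies = {"pandas", "numpy", "openpyxl"}  # illustrative scan result

found = KNOWN_AI_COMPONENTS & process_dependencies
if found:
    print(f"AI components detected: {sorted(found)}")
else:
    # Positive assurance: tested against a defined list and found none.
    print("No AI components found in the tested dependency set.")
```

Dependency scanning alone is not conclusive (AI can also enter a process through external services), so it would sit alongside process walkthroughs and vendor attestations.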

This requirement should extend to existing processes that were developed before the age of AI or that are meant to be used purely as automation, to avoid automation-only systems being unintentionally upgraded to AI. Actuaries will then be able to tell not only insects and bugs apart, but also true bugs from other bugs.

True bugs are a special class of bugs that undergo incomplete metamorphosis, skipping a stage (the pupa) in the insect lifecycle; they account for about 2% of all insect species.

 

Summary

  • Automation and AI require different levels of governance and risk management. We would expect a more straightforward governance and risk management process for portfolio consolidation, which mainly involves automation, than for customer segmentation, where AI is more widely used.
  • AI governance should be tailored to the business application. An AI model used for selecting investments cannot cause as much harm as an AI model used in pricing, which could potentially exclude vulnerable customers.
  • The presence or absence of AI should be explicitly evidenced through substantive testing, leading to positive assurance.

 

Read more

This is the third in a series of blogs by the AI Ethics Working Party.

The first appeared in The Actuary: Check your AI: a framework for its use in actuarial practice | The Actuary
