22 May 2024

AI actuarial policy update 3

This blog is the third in a series setting out significant aspects of AI policy development from the IFoA, UK government, and international jurisdictions.

IFoA AI thought leadership

The IFoA’s recently launched AI Thought Leadership webinar series draws on expertise from the worlds of policy, science, academia, and financial services to demystify AI and its growing role in society. The webinar programme tackles the big debates around AI ethics and regulation and sets out the work the IFoA is doing in this space.

In our third webinar, on 9 May, we were joined by IAA President Charles Cowling. He led a discussion focusing on how the actuarial sector is reacting to AI developments globally and how different jurisdictions are approaching this challenge.

Other speakers included:

  • Bogdan Tautan, co-Vice Chair for the Artificial Intelligence and Data Science working group at the Actuarial Association of Europe
  • Dr Fei Huang, Senior Lecturer in Risk and Actuarial Studies and Lead, Data and AI Tech at UNSW Business AI Lab

Watch the recording

Our fourth and final webinar is on Monday 10 June from 11:00 to 12:00 BST. We will take a deep dive into what the increased prevalence of AI means for actuaries and their work, both now and in the future.

Speakers include:

  • Matthew Edwards, innovation lead at WTW’s UKI life division
  • Catherine Drummond, chair of the IFoA’s General Insurance Board
  • Dr Alexey Mashechkin of Allianz Partners

Sign up to this webinar for free

IAA AI summit

As active participants on the International Actuarial Association (IAA) Taskforce on AI, IFoA President Kalpana Shah, IFoA Council members, and volunteers gathered in Singapore from 4 to 5 April for the IAA’s new AI global summit to plan the taskforce’s work.

Throughout the conference, delegates delved into 5 workstreams which explored the ethics, education, governance, and innovation of AI and how AI will change the roles of actuaries. Contributing speakers in plenaries included IAA President Charles Cowling, Bogdan Tautan, and Dr Li Xuchun.

Through this summit, the IAA sought to advance the competency of the profession with respect to AI by:

  • creating awareness of the risks and opportunities related to AI and educating actuaries
  • promoting the role of the actuary in existing and emerging wider fields and raising the profile of the actuary
  • engaging with supranational organisations on AI-related risks to provide actuarial perspectives in their own AI-related initiatives

Seoul global AI safety summit

The second global AI safety summit, co-hosted by South Korea and the UK, is planned for 21 and 22 May. The summit aims to build on the legacy of the first AI safety summit. It will bring together international governments, AI companies, academia, and civil society to advance global discussions on AI.

Discussions will address the potential capabilities of the most advanced AI models, building on the Bletchley Declaration and wider agreements including commitments from developers on AI safety.

With the ambition of helping to shape a coherent global strategy on AI governance, the summit’s primary goal is for countries and industry leaders to converge on the 3 critical priorities of:

  • safety: to reaffirm the commitment to AI safety and to further develop a roadmap for ensuring AI safety
  • innovation: to emphasise the importance of promoting innovation within AI development
  • inclusivity: to champion the equitable sharing of AI’s opportunities and benefits

The UK government released the official agenda for the summit on 7 May. Sessions of relevance include:

  • ‘action to strengthen AI safety’: participants will present the work of the national AI Safety Institute and discuss how to further establish the norms and practices required to make AI safe
  • ‘approach for sustainability and resilience’: participants will look at how to bring about a coordinated approach to AI development, exploring responses to the negative social and economic impacts exacerbated by AI

AI Safety Institute releases new AI safety evaluations platform

On 10 May, the UK’s newly created AI Safety Institute released a new testing platform to strengthen AI safety evaluation.

According to the UK government, the release of the AI Safety Institute’s homegrown Inspect evaluations platform to the global community will accelerate the work on AI safety evaluations carried out across the world, leading to better safety testing and the development of more secure models. This, the government hopes, will allow for a consistent approach to AI safety evaluations globally.

Inspect is a software library which enables testers – like startups, academics, AI developers, and international governments – to assess specific capabilities of individual models and then produce a score based on their results. The software can be used to evaluate models in a range of areas, including their core knowledge, ability to reason, and autonomous capabilities. Released through an open-source licence, it is now freely available for the AI community to use.
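For a flavour of how the platform is used, the sketch below shows a minimal evaluation written against Inspect’s Python package (inspect_ai), following the pattern of the library’s published quick-start. The question, target answer, and model name are illustrative only, and exact API details may vary between versions:

    # pip install inspect-ai
    from inspect_ai import Task, task, eval
    from inspect_ai.dataset import Sample
    from inspect_ai.scorer import exact
    from inspect_ai.solver import generate

    @task
    def core_knowledge():
        # A toy 'core knowledge' check: one question with a known answer
        return Task(
            dataset=[
                Sample(
                    input="What is the capital of France? Reply with one word.",
                    target="Paris",
                )
            ],
            solver=[generate()],  # submit the prompt to the model under test
            scorer=exact(),       # full marks only for an exact match with the target
        )

    # Running the task evaluates the chosen model and reports a score, for example:
    # eval(core_knowledge(), model="openai/gpt-4o")

Real evaluations follow the same shape but swap in larger datasets and more sophisticated solvers and scorers, which is what allows a consistent scoring approach across very different capability tests.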

GAD introduces AI for quality assurance checks on pensions

The Government Actuary’s Department (GAD) has announced it is working alongside an AI company to develop code to perform quality assurance checks on pension administrators’ calculations.

GAD states this will add to its current suite of data science expertise, where it already uses programming languages to produce results for clients.
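By way of illustration only – this is a hypothetical sketch, not GAD’s actual code, and the scheme rules, accrual rate, and tolerance are invented – a quality assurance check of this kind might independently recompute each member’s benefit and flag administrator figures that disagree beyond a tolerance:

    from dataclasses import dataclass

    # Hypothetical parameters for illustration: a simple final-salary
    # scheme with a 1/60th accrual rate. Real scheme rules are far more complex.
    ACCRUAL_RATE = 1 / 60
    TOLERANCE = 0.50  # acceptable rounding difference, in pounds

    @dataclass
    class MemberRecord:
        member_id: str
        pensionable_salary: float     # final pensionable salary
        years_of_service: float       # pensionable service in years
        administrator_pension: float  # annual pension per the administrator

    def expected_pension(record: MemberRecord) -> float:
        """Independently recompute the annual pension under the assumed rules."""
        return record.pensionable_salary * record.years_of_service * ACCRUAL_RATE

    def qa_check(records: list[MemberRecord]) -> list[str]:
        """Return IDs of members whose administrator figure disagrees with
        the independent recalculation by more than the tolerance."""
        return [
            r.member_id
            for r in records
            if abs(expected_pension(r) - r.administrator_pension) > TOLERANCE
        ]

    # Example: one consistent record and one with a miskeyed salary
    records = [
        MemberRecord("A001", 30_000, 20, 10_000.00),  # 30,000 * 20 / 60 = 10,000
        MemberRecord("A002", 33_000, 20, 10_000.00),  # expected 11,000: flagged
    ]
    print(qa_check(records))  # ['A002']

The value of automating checks like this is scale: the same independent recalculation can be run across an entire membership file rather than a sampled subset.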

US and China hold ‘miscalculation’ talks on AI

On 14 May, Chinese and US officials met in Geneva, Switzerland, for the first session of an organised dialogue, agreed in San Francisco last year, aimed at reducing the risk of AI ‘miscalculation’ and ‘unintended conflict’.

The meeting focused on risk and safety, with an emphasis on advanced AI systems. But it was not designed to deliver outcomes such as promoting technical collaboration or co-operation on frontier research.

A US official told the Financial Times that the US would outline its stance on tackling AI risks, explain its approach to norms and principles of AI safety, and voice concerns about Chinese AI activity that threatens American national security.

More in the AI actuarial policy series
