The IFoA’s recently launched AI Thought Leadership webinar series draws on expertise from the worlds of policy, science, academia, and financial services to demystify AI and its growing role in society. The programme tackles the big debates around AI ethics and regulation and sets out the work the IFoA is doing in this space.
In our third webinar, on 9 May, we were joined by IAA President Charles Cowling. He led a discussion focusing on how the actuarial sector is reacting to AI developments globally and how different jurisdictions are approaching this challenge.
Other speakers included:
Our fourth and final webinar is on Monday 10 June from 11:00 to 12:00 BST. We will take a deep dive into what the increased prevalence of AI means for actuaries and their work, both now and in the future.
Speakers include:
Sign up to this webinar for free
IFoA President Kalpana Shah joined IFoA Council members and volunteers, all active participants on the International Actuarial Association (IAA) Taskforce on AI, in Singapore from 4 to 5 April for the IAA’s new AI global summit to plan the taskforce’s work.
Throughout the conference, delegates delved into 5 workstreams exploring the ethics, education, governance, and innovation of AI, and how AI will change the roles of actuaries. Contributing speakers in plenaries included IAA President Charles Cowling, Bogdan Tautan, and Dr Li Xuchun.
Through this summit, the IAA sought to advance the competency of the profession with respect to AI by:
The second global AI safety summit, co-hosted by South Korea and the UK, is planned for 21 and 22 May. The summit aims to build on the legacy of the first AI safety summit. It will bring together international governments, AI companies, academia, and civil society to advance global discussions on AI.
Discussions will address the potential capabilities of the most advanced AI models, building on the Bletchley Declaration and wider agreements including commitments from developers on AI safety.
With the ambition of helping to shape a coherent global strategy on AI governance, the summit’s primary goal is for countries and industry leaders to converge on the 3 critical priorities of:
The UK government released the official agenda for the summit on 7 May. Sessions of relevance include:
On 10 May, the UK’s newly created AI Safety Institute released a new testing platform to strengthen AI safety evaluation.
According to the UK government, releasing the AI Safety Institute’s homegrown Inspect evaluations platform to the global community will accelerate AI safety evaluation work carried out across the world, leading to better safety testing and the development of more secure models. The government hopes this will allow for a consistent global approach to AI safety evaluations.
Inspect is a software library which enables testers – such as startups, academics, AI developers, and international governments – to assess specific capabilities of individual models and then produce a score based on their results. The software can be used to evaluate models in a range of areas, including their core knowledge, ability to reason, and autonomous capabilities. Released under an open-source licence, it is now freely available for the AI community to use.
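The workflow Inspect supports, defining a dataset of samples, running a model over them, and scoring the outputs, can be sketched in miniature. The toy harness below illustrates only the general pattern; the names and structure here are hypothetical and are not Inspect’s actual API.

```python
# Toy sketch of a model-evaluation harness: samples in, a score out.
# Hypothetical names -- this is NOT the Inspect library's API.
from dataclasses import dataclass


@dataclass
class Sample:
    prompt: str   # the input given to the model
    target: str   # the expected answer


def run_eval(samples, model) -> float:
    """Score a model's answers against targets; return accuracy in [0, 1]."""
    correct = sum(1 for s in samples if model(s.prompt).strip() == s.target)
    return correct / len(samples)


def toy_model(prompt: str) -> str:
    # Stand-in "model" that answers simple arithmetic prompts.
    # (eval is unsafe on untrusted input; fine for a fixed demo.)
    return str(eval(prompt))


samples = [Sample("2+2", "4"), Sample("3*3", "9"), Sample("10-4", "6")]
print(run_eval(samples, toy_model))  # 1.0
```

A real framework layers on model adapters, richer scorers, and logging, but the shape of the task is the same: capabilities are probed sample by sample and summarised as a score.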
The Government Actuary’s Department (GAD) has announced it is working alongside an AI company to develop code to perform quality assurance checks on pension administrators’ calculations.
GAD states this will add to its existing data science expertise, which includes using programming languages to produce results for clients.
Examples of this under GAD include:
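An automated quality assurance check of this kind can be sketched as a toy example: recompute the figure independently and flag any discrepancy beyond a tolerance. The formula and figures below are hypothetical illustrations, not GAD’s actual methodology or code.

```python
# Toy QA check on a pension calculation: recompute independently, then
# compare against the administrator's figure within a tolerance.
# Hypothetical formula and figures -- not GAD's actual methodology.

def expected_annual_pension(final_salary: float, years_service: int,
                            accrual_rate: float = 1 / 60) -> float:
    """Recompute a final-salary pension: salary x years x accrual rate."""
    return final_salary * years_service * accrual_rate


def qa_check(administrator_value: float, final_salary: float,
             years_service: int, tolerance: float = 0.01) -> bool:
    """Return True if the administrator's figure matches the recomputed
    value to within the tolerance (in currency units)."""
    expected = expected_annual_pension(final_salary, years_service)
    return abs(administrator_value - expected) <= tolerance


# Example: 30 years' service on a 42,000 final salary at 1/60 accrual.
print(qa_check(21000.0, 42000.0, 30))  # True: 42000 * 30 / 60 = 21000
print(qa_check(21500.0, 42000.0, 30))  # False: off by 500
```

Run at scale over an administrator’s records, a check like this surfaces the exceptions for human review rather than replacing the actuary’s judgement.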
On 14 May, Chinese and US officials met in Geneva, Switzerland, for the first session of a dialogue, agreed in San Francisco last year, aimed at reducing the risk of AI ‘miscalculation’ and ‘unintended conflict’.
The meeting focused on risk and safety, with an emphasis on advanced AI systems. But it was not designed to deliver outcomes such as promoting technical collaboration or co-operation on frontier research.
A US official told the Financial Times that the US would outline its stance on tackling AI risks, explain its approach to norms and principles of AI safety, and voice concerns about Chinese AI activity that threatens American national security.