Between March and June 2024, the IFoA ran a hugely successful thought leadership webinar series on artificial intelligence. The series drew on expertise from the worlds of policy, science, academia, and financial services to demystify AI and its growing role in society. The series also sought to tackle the big debates around AI and set out the work we as an organisation are undertaking in this space in the interests of actuaries.
The first three events, focusing on how to use AI responsibly, its ethics and regulation as well as international approaches to technology, attracted a combined live audience of almost 1,000 attendees and featured distinguished speakers from the Ada Lovelace Institute and the Alan Turing Institute to the Responsible Technology Unit within the UK Government.
Our fourth and final webinar was a technical session deep diving into what the increased prevalence of AI means for actuaries and their work specifically. Speakers included Matthew Edwards, Atreyee Bhattacharyya, Catherine Drummond and Dr Alexey Mashechkin.
You can watch this session, and all other sessions, back on the IFoA’s Virtual Learning Environment.
In June, IFoA President-elect Kartina Tahir Thomson addressed the Nigerian Actuarial Association conference on AI. While recognising that AI will change the role of actuaries in the future, Tahir Thomson emphasised that, rather than viewing this as something to fear, embracing the technology would be fundamental to strengthening and showcasing the skills of actuaries in the future. She added that, by adopting a ‘growth mindset’ towards the technology, actuaries have opportunities to shape how AI is used in, and how it could augment, actuarial work, from data processing and model calibration to predictive analytics.
Highlighting the International Actuarial Association’s ongoing global review of AI education, the President-elect also called for the development of enhanced curricula and training models that include AI, to ensure the skills actuaries of the future will need in the AI domain are considered. Keeping up with the rapid pace of change in the technology was also recognised as a key component of future-proofing how actuaries qualify under IFoA credentials.
On professionalism and the code of conduct, the President-elect stressed that the high standards actuaries operate under will not change with AI, stating that ‘AI professionalism’ would be key to building and maintaining confidence in the actuarial profession in an AI context. This would, Tahir Thomson emphasised, enable actuaries to use AI to develop new products and services on an ethical basis.
Finally, on the governance of actuaries’ use of AI in their work, Tahir Thomson said that safe and responsible uses of AI could only be achieved with a “good governance framework in place.” The President-elect noted that the IAA has begun work on governance, which includes developing principles for AI adoption by actuaries and facilitating dialogue with the relevant regulators.
Whilst artificial intelligence did not feature as an issue in the core general election campaign, it did emerge as a policy area in the major parties’ manifestos, with each offering distinctive and contrasting visions. With Labour winning a decisive victory on 4 July, their return to power signals a significant step change from the previous Government’s pro-innovation, sector-by-sector regulatory approach to the rapidly growing technology.
Labour’s manifesto claims that regulators are “ill-equipped” to keep pace with the rapid innovation and development of AI and to deliver the effective sector-by-sector regulation advocated by the previous Conservative government. To remedy this, Labour has said it will introduce “binding regulation” via legislation on the largest and most powerful companies developing AI models. This new regulatory framework will be overseen by a new Regulatory Innovation Office. Labour’s plans arguably represent a step more in line with the EU’s precautionary approach to AI regulation.
On 21 and 22 May, the UK Government co-hosted the Seoul AI Safety Summit with the Republic of Korea. The second meeting of international governments, AI companies, academia and civil society aimed to build on the work of the first AI Safety Summit in Bletchley Park back in November 2023.
Discussions focused on AI safety, addressing the full spectrum of capabilities of the most advanced AI models, building on agreements made at the first summit, and pursuing wider agreements, including commitments from developers on AI safety. Countries and industry leaders converged on the critical priorities of safety, innovation and inclusivity.
The summit concluded with 27 countries and the EU agreeing to the Seoul Ministerial Statement, under which signatory nations agree to develop shared risk thresholds for frontier AI development and deployment, including agreeing when model capabilities could pose ‘severe risk’ without appropriate mitigations.
Furthermore, an agreement between 10 countries and the EU committed those nations to collaborating on an international network of publicly backed AI Safety Institutes, following the world-first launch of the UK’s institute last year. The network will aim to build “complementarity and interoperability” between their technical work and approaches to AI safety.
Participating countries have now set an ambition to develop further proposals to safeguard the development of AI alongside non-government actors ahead of the AI Action Summit due to be hosted by France later this year.