After the excitement of ‘AI Week’ in November 2023, which saw the UK government host the world’s first global summit on AI safety, work on AI has continued apace in the first months of 2024.
In February, the government published its response to the March 2023 AI regulation white paper, which laid the foundations for the UK's current 'pro-innovation', principles-based approach to regulating AI, a position foreshadowed in the 2021 National AI Strategy and in a 2022 policy paper.
Of particular interest to actuaries was the government's request that major regulators, from the FCA to the CMA, publish plans by the end of April setting out how they are responding to AI risks and opportunities. The response also laid out the government's initial thinking on future binding requirements that could be introduced for developers building the most advanced AI systems. There were further commitments of £90 million to launch nine new research hubs across the UK and a partnership with the US on responsible AI, as well as £19 million for 21 projects developing innovative, trusted, and responsible AI and machine learning solutions to accelerate the deployment of these technologies. Finally, the response detailed the creation of a new steering committee to support and guide a formal regulator coordination structure within government.
Separately, the FRC published its own research on the use of artificial intelligence and machine learning in UK actuarial work.
This month, the Chancellor's Spring Budget took further steps on AI. A commitment of £100 million of state investment was made over the next five years to the Alan Turing Institute, the UK's national institute for data science and AI research. Other announcements included a £7.4 million upskilling fund pilot to help SMEs develop the AI skills needed to support their future growth, as well as £1 billion of investment to transform the use of data and reduce the time NHS staff spend on unproductive administrative tasks, with pilots to test the ability of AI to automate back-office functions that could unlock an annual productivity benefit of £500 million to £850 million.
On 13 March, the European Parliament passed the Artificial Intelligence Act. Notably, the act takes a rights-based approach to the technology, in contrast to the UK's principles-based approach. Aiming to protect human rights, democracy, the rule of law, and environmental sustainability from 'high-risk AI', the legislation is in marked contrast to anything we have seen so far in the UK. New rules under the act include bans on certain AI applications that threaten citizens' rights, such as biometric categorisation systems, and on the use of biometric identification systems by law enforcement except in 'exhaustively listed' situations. Clear obligations are also envisaged for 'high-risk' technology, such as assessing and reducing risk, maintaining use logs, and ensuring human oversight. The EU's definition of 'high-risk' AI covers uses in areas including critical infrastructure, education, and employment.
In February the IFoA published its seventh thematic report: ‘Actuaries using data science and artificial intelligence techniques’. The IFoA carries out regular thematic reviews and information gathering exercises looking at topics, roles, and areas of work relevant to actuaries.
The review used case studies, provided by participating members and their organisations, to examine how actuaries are using and applying existing and emerging data science and AI techniques in their areas of practice, and developing new ways to do so.
The main conclusions of the report were:
In March 2023, the IFoA launched a new thought leadership webinar series on artificial intelligence. The series intends to draw on expertise from the worlds of policy, science, academia, and financial services to demystify AI and its growing role in society. The series seeks to tackle the big debates around AI ethics and regulation and set out the work we as an organisation are undertaking in this space.
In the first event of the series, IFoA President Kalpana Shah was joined by Felicity Burch, Executive Director of the Responsible Technology Adoption Unit in the UK government. Burch delivered a keynote presentation exploring AI: what it is, the latest developments in its capabilities, and where it is being used across society and financial services. Of particular interest, Burch discussed 'tools for trustworthy AI'. These include assurance mechanisms and standards produced by standards development organisations (SDOs). Burch explained that such tools are critical for enabling the responsible adoption of AI by supporting the implementation of the regulatory framework and increasing international interoperability.
Watch the first webinar recording
Our next session will take place on Thursday 11 April and will explore AI ethics and regulation, and how this can impact the work of actuaries. We will be joined by representatives from the Alan Turing Institute, the Ada Lovelace Institute, and the London Institute for Banking and Finance. Members and non-members can sign up for free.
The IFoA is an active participant in the International Actuarial Association (IAA) Taskforce on AI. IFoA President Kalpana Shah will join IFoA Council members and volunteers at the AI Global Summit being held in Singapore on 4 to 5 April to plan the work of the Taskforce. Five workstreams will explore the ethics, education, governance, and innovation of AI, and how AI will change the roles of actuaries.
As AI continues to enmesh itself irreversibly across our society and economy, with governments, regulators, and the IFoA responding in kind, we will keep actuaries updated on all the latest developments in this field. Stay tuned for our next AI actuarial update coming in May, where we will provide insight into further governmental and IFoA work, as well as the all-important second global AI Safety Summit hosted by South Korea.
See our first piece: AI week and the AI Safety Summit: what actuaries need to know.