09/11/2023

AI week and the AI Safety Summit: what actuaries need to know

Last week hearts and minds across Westminster turned to the rapidly rising technology of ‘frontier’ AI, in what political commentators referred to as ‘AI week’.

Often, usually during periods of parliamentary recess, the government unofficially designates entire working weeks of the political calendar to specific policy areas. In this case, AI took centre stage on 1 and 2 November as the Prime Minister, Rishi Sunak, hosted the world’s first global summit devoted to ensuring the safety of frontier AI.

But before going any further, what exactly is frontier AI – and what separates it from ‘normal’ AI?

Frontier AI

Artificial intelligence has been with us for some time, be it in the form of Amazon’s ‘Alexa’ or an everyday smartphone. At its plainest, it is a technology that combines computer science and datasets to enable problem-solving.

Frontier AI on the other hand represents the forefront of AI research. It can perform a wide variety of tasks like never before. It is being constantly augmented with tools to enhance its capabilities, and is being integrated into systems that may have a transformative, irreversible impact on the economy and society.

According to some experts, the rise of frontier AI has created a popular illusion that AI is a new phenomenon. It is not. Rather, this belief stems from the striking visibility and impact of the new and ongoing developments in frontier AI over the past year.

These developments are best captured and understood through the rapid and widespread public adoption of large language models (LLMs) such as ‘ChatGPT’. These LLMs have been developed by AI companies such as OpenAI, DeepMind, and Anthropic.

In what seems like an overnight occurrence, everyday people suddenly have the power to create convincing imagery depicting any scenario or individual they wish. Professionals can commission frontier AI language models to produce complex, bespoke, and highly technical work, such as business plans or research papers.

The government’s response to frontier AI

The UK government believes the opportunities presented are game changing: advancing drug discovery, making transport safer and cleaner, improving public services, and speeding up and improving the diagnosis and treatment of diseases such as cancer. These are just some of the potential benefits.

But it also believes there is significant risk if the technology is not developed safely and under strict supervision. The potential for frontier AI to act as a tool for mass disinformation, cyber-attacks, and fraud is cited as a particular area of concern.

To this end, in September 2023, ahead of the AI Safety Summit, the UK government established the Frontier AI Taskforce. Backed by an initial £100 million, the taskforce brings together scientific experts charged with advising the government on the risks and developments of frontier AI – and on how best to safeguard the technology from malign development.

This was followed in October 2023 by a paper on frontier AI’s capabilities and risks. The UK’s approach to managing the rise of frontier AI contrasts with that of the US and EU. Those jurisdictions are taking an overarching regulatory approach, with President Biden issuing an AI Executive Order on 30 October and the EU expected to legislate for an AI act in December.

The UK favours a delegated, sector-by-sector approach to regulation, with legislators and regulators focused on ensuring safety and retaining the ability to shape the technology’s development.

The UK’s position is clearly reflected in the summit’s primary emphasis on ‘safety’. Securing voluntary agreements between nations and courting private sector partners and cooperation were key aspects of the event.

In the run-up to the summit, government and regulators shifted into AI gear, releasing a plethora of AI-related announcements and research. Most notable for actuaries was research on the use of artificial intelligence and machine learning in UK actuarial work, which the Financial Reporting Council commissioned the Government Actuary’s Department to carry out.

Significantly, the research found general insurance pricing to be the actuarial field making greatest use of AI and machine learning. This includes determining claims risks, forecasting the price-sensitivity of policyholder groups, and informing customer-facing processes. The research also found widespread plans among actuaries to increase its use in future, particularly because of rapid developments in large language models such as ChatGPT.

AI Safety Summit: key takeaways

Of course, the main event of AI week was the summit itself. Over two days, international leaders, representatives, industry organisations, and academics gathered at Bletchley Park, the codebreaking hub of the Second World War. The focus was on certain types of frontier AI systems, the risks they may pose, and how to manage them.

On day one, King Charles addressed the summit, stating that AI needs to be tackled with “urgency, unity, and collective strength.” Amidst a series of roundtables, the government published a groundbreaking ‘Bletchley Declaration on AI Safety’ signed by 28 countries including the US, China, France, and India. The essence of the agreement is to give urgent priority to understanding and collectively managing potential risks, through a new joint global effort to ensure AI is developed and deployed safely and responsibly for the benefit of the global community.

Day two featured further roundtables between leaders. The Prime Minister delivered a speech championing the ability of AI to better the human experience, and he outlined details of a new UK AI Safety Institute. The institute’s mission is to minimise surprise to the UK and humanity from rapid and unexpected advances in AI. It will work towards this by developing the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance.

A plan for safety testing

Beyond this, senior government representatives from leading AI nations, and major AI organisations, agreed a plan for safety testing of frontier AI models. Some of the plan’s important features include the following:

Testing models before and after deployment

This includes a role for governments in testing, particularly for critical national security, safety, and societal harms.

Delivering a ‘state of the science’ report

Led by the ‘godfather of AI’ Yoshua Bengio, the state of the science report will help build a shared understanding of the capabilities and risks posed by frontier AI.

Reaching shared ambition

The plan will also involve governments reaching a shared ambition to invest in public sector capacity for testing and other safety research. One aim is to share outcomes of evaluations with other countries where relevant. Another is to work towards developing, at the right time, shared standards in this area. All in all, the idea is to lay the groundwork for future international progress on AI safety in years to come.

What’s next?

Overall, the summit represented a global first step towards greater awareness and proactivity on AI, following the technology’s undeniable strides over the past 12 months. South Korea will host a second, virtual summit in six months, with France hosting a third, in-person summit in a year’s time.

But leading governments and corporations will not be the only ones responding to the incredible growth of the technology. As a body incorporated by royal charter and charged with serving the public interest, we will be taking an increasingly close interest in, and proactive stance on, AI in the months and years ahead. I advise readers to simply ‘watch this space’!
