30/01/2024

Changing landscape of AI regulations

The IFoA’s Data Science, Sustainability and Climate Change Working Party discuss recent updates to AI regulations

In the ever-evolving landscape of artificial intelligence, regulation is coming to the fore and will shape the future of the AI industry and its responsible use.

There are wider concerns within the AI industry that regulatory requirements may stem future development, in both its speed and its breadth.

But the hope is that early focus on AI regulation by governments and regulatory bodies will lead to a more meaningful and positive framework for businesses, individual users, and developers to work within.

Increase in AI and the need for AI regulations

The take-up of AI solutions is increasing for both business and personal use, spurred, for example, by the launch of ChatGPT in 2022.

A report published by PwC in 2017 estimates that “UK GDP will be up to 10.3% higher in 2030 as a result of AI”. An article by Goldman Sachs in August 2023 forecasts global investment in AI to reach about 200 billion US dollars by 2025.

Within the actuarial profession, we are seeing an increase in demand across all specialisms for the use of AI to augment existing processes. The number of advertised roles combining actuarial and AI skills is also increasing.

An Oxford University talk in October 2023 discussed the development and regulation of AI. It said that the AI sector in the UK is thriving, but “...the need for good regulation was acknowledged by all.” For more details of the talk, see: Risks, regulation – but opportunities too: AI is thriving in the UK, say experts.

Similarly, Deloitte published an article in April 2023 discussing survey results in which “80% of consumers are worried about decisions being made by AI using inaccurate information”.

An area that regulation aims to address is potential unfair bias within AI algorithms. Reuters reported in October 2018 how Amazon scrapped a “secret AI recruiting tool that showed bias against women”.
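To make this concrete, one simple diagnostic sometimes used for this kind of bias is the ‘disparate impact’ ratio: the selection rate of one group divided by that of another. The sketch below is illustrative only, using made-up data and assumed column names; a real fairness assessment would be considerably more involved.

```python
import pandas as pd

# Hypothetical screening outcomes (illustrative data only):
# 1 = shortlisted by the model, 0 = rejected
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Selection rate for each group
rates = results.groupby("group")["selected"].mean()

# Disparate impact ratio: a value well below 1 suggests the model
# selects one group far less often than another
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 for this toy data
```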

Regulation will hopefully address other ethical challenges posed by new AI technology, such as its use to generate ‘deepfakes’. An article published by Bloomberg in November 2023 discusses problems of AI, including the issue of deepfakes within India.

Regulation will also help ensure that consumers can maintain confidence in the tools being developed. And it will protect both businesses and society from potential risks posed by AI.

Recent AI regulatory updates in the US, EU, and UK

In October 2023, the White House issued an executive order on the “safe, secure and trustworthy” use of AI. It reflects the US government’s commitment to addressing the ethical challenges posed by advanced AI technologies. For more information, see the accompanying fact sheet.

The executive order outlines principles and considerations for the responsible development and deployment of AI systems. That includes the need for consumer privacy, addressing ‘algorithmic discrimination’, and protecting “against the risk of using AI to engineer dangerous biological materials”.

On the other side of the Atlantic, in April 2021, the European Commission proposed its first regulatory framework for AI (unofficially referred to as the ‘EU AI Act’). Provisional agreement was reached in December 2023 between the Council of the EU and the European Parliament. (See: Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI.) For the agreement to become law, it must be formally adopted by both institutions, which may take a few months.

The aim of the EU regulation is to ensure “that AI systems used in the EU are safe, transparent, ethical, unbiased and under human control”. For more details, see: Excellence and trust in artificial intelligence. Some of the main points of note from the act include:

  • AI models categorised as posing an ‘unacceptable’ risk will be banned from wider use
  • those classified as ‘high’ or ‘limited’ risk may be subject to further scrutiny, as illustrated in the sketch below
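As a rough illustration of how an organisation might begin triaging its AI inventory against these tiers, consider the sketch below. The systems, their tier assignments, and the paraphrased obligations are assumptions made for the example, not legal classifications under the act.

```python
# Obligations paraphrased loosely from the tiered approach described above;
# the inventory and its tier assignments are purely hypothetical.
RISK_TIER_OBLIGATIONS = {
    "unacceptable": "banned from wider use",
    "high": "subject to further scrutiny (e.g. conformity assessments)",
    "limited": "subject to transparency obligations",
    "minimal": "no additional obligations",
}

ai_inventory = {
    "social-scoring tool": "unacceptable",
    "claims triage model": "high",
    "customer chatbot": "limited",
    "spam filter": "minimal",
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier} risk -> {RISK_TIER_OBLIGATIONS[tier]}")
```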

The ‘EU AI Act’ forms part of the EU’s wider strategic focus on digital technology, which is one of the six European Commission strategic priorities for 2019 to 2024. As part of these priorities, the Commission’s digital targets for 2030 include to:

  • empower businesses and people in a human-centred, sustainable, and more prosperous digital future
  • reach a point where 75% of EU companies are using cloud, AI, or big data

See: Europe’s Digital Decade: digital targets for 2030

As mentioned above, AI is an area that falls within the remit of digital technology for the purposes of this strategic review. For more details, see: The European Commission’s priorities.

Looking beyond the EU and US, the UK is following suit with its March 2023 white paper on AI regulation. The paper proposes a principles-based framework, which includes safety and security, accountability, and fairness. For commentary on this, see: UK’s approach to regulating the use of artificial intelligence.

Conclusion

On the whole, as individuals we have positive experiences of using AI, for example via ChatGPT, where AI tools enhance our day-to-day work and personal lives. However, the need for regulation is apparent in tackling potential misuse of the technology, such as the creation of malicious deepfakes or AI-enhanced cyber attacks.

Though we have focused on legislation in the EU, UK, and US, we expect discussion and adoption of AI regulation across other jurisdictions to grow over the next few years.

Share your views

What are your thoughts on the points raised in this article? We would love to hear your views in the comments via the IFoA’s LinkedIn Data Science Group. We will be looking to create a separate channel via IFoA communities for wider discussion as well. 
