United Kingdom: Government confirms principles-based approach to regulating AI

In brief

The UK government has published its long-awaited response to its AI White Paper, "A pro-innovation approach to AI regulation", published in March 2023. In this article, we distill the essential information you need to know.


1. Endorsement of principles-based approach

The UK is sticking with its principles-based regulatory framework for AI, continuing to take a very different path from its EU neighbors (as discussed in our previous alert). Unlike the EU, there will be no new AI regulator, new legislation or new penalties at this time. Indeed, the UK government states that in response to "widespread support" for its "pro-innovation" approach to regulating AI, it remains committed to "a context-based approach that avoids unnecessary blanket rules that apply to all AI technologies regardless of how they are used."

As a reminder, the UK government has identified the following five principles that it expects UK regulators to interpret and apply within their remit.

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The government’s view is that a non-statutory approach to AI allows for more flexibility, which means that these principles will not be binding. However, this approach will remain under review. The UK government is not ruling out targeted binding measures in the future (indeed, it suggests in the response that future regulation is inevitable): "We will legislate when we are confident it is the right thing to do."

Of course, this is the current administration's approach to AI regulation – the UK has an election coming up later in 2024 and it is possible that a new administration may reassess this position, with the Labour Party (currently ahead in the polls) indicating that it would implement a 'stronger regulatory framework' than that proposed by the current government.

2. The regulation of highly capable general-purpose AI systems

It remains the government’s view that for the large majority of AI systems, it will be more effective to focus on how AI is used within a specific context than to regulate specific technologies.

However, the government has recognized the risks of gaps when it comes to "highly capable general-purpose AI systems" (i.e., foundation models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models). The government’s response indicates that targeted mandatory interventions may be required but there are no immediate plans to propose such measures. This means that the voluntary measures adopted by industry will remain the only measures focused on foundation models for the time being.

However, to inform the government’s evaluation of how effectively these voluntary measures address AI risks and ensure AI safety, the new AI Safety Institute will lead the testing of next-generation AI models in the UK. Importantly, the government has made it clear that the goal of such evaluations will not be to deem a model "safe" and that the AI Safety Institute is not a regulatory body. However, the government has said that if the AI Safety Institute identifies a potentially dangerous capability through its evaluation of advanced AI systems, the Institute may address risks by engaging developers on suitable safety mitigations and collaborating with the government’s AI risk management and regulatory architecture.

The government will provide an update on its approach to highly capable general-purpose AI systems by the end of 2024 (i.e., after the next general election).

3. Known AI risks and upcoming legal and regulatory activity

The response touches on some known AI risks and related developments.

  • Intellectual property: The government has confirmed that a working group of rightsholders and AI developers set up by the Intellectual Property Office "will not be able to agree an effective voluntary code" on AI and copyright. The government's response indicates that it will explore "…mechanisms for providing greater transparency so that rights holders can better understand whether content they produce is used as an input into AI models" and that further proposals on the way forward will be set out "soon", but what those proposals will look like remains unclear.
  • Data protection: The government notes that the UK's data protection framework, which is being reformed through the Data Protection and Digital Information Bill (DPDI), will complement its current approach to regulating AI. The DPDI aims to expand on and simplify the current rules on automated decision-making, which are "confusing and complex."
  • Competition: The Digital Markets, Competition and Consumers Bill, which is currently progressing through Parliament, will give the CMA additional tools to identify and address any competition issues in AI markets and other digital markets affected by recent developments in AI.
  • Misinformation: The Online Safety Act 2023 places new responsibilities on online service providers and captures attempts by foreign state actors to manipulate information.
  • Security: The National Cyber Security Centre (NCSC) published guidelines for secure AI system development in November 2023. The government has indicated that it will release a call for views in spring 2024 to obtain further input into securing AI models, which will include a potential Code of Practice for cyber security of AI based on the NCSC's guidelines. In addition, the government's response highlights the security regime in the Product Security and Telecommunications Infrastructure Act (PSTI Act), which is scheduled to come into effect in 2024. The PSTI Act will require manufacturers of consumer connectable products (e.g., AI-enabled smart speakers) to comply with minimum security requirements.

4. Guidance for regulators

Alongside the White Paper response, the UK government has provided initial guidance for UK regulators on how to interpret and apply the AI principles. Further updated guidance will be issued by summer 2024. The guidance is not intended to be prescriptive, and how the principles are considered will ultimately be at each regulator's discretion. Regulators are encouraged to develop tools and guidance. The government notes that certain UK regulators have already published AI guidance, for example, the Information Commissioner's Office (ICO) and the Competition and Markets Authority (CMA). Remaining regulators have been asked to publish an update outlining their strategic approach to AI by 30 April 2024.

The government will continue to evaluate any potential gaps in existing regulatory powers and remits. It will also provide support to regulators through a new GBP 10 million fund for new tools and research projects, as well as the DRCF AI and Digital Hub. (Of course, GBP 10 million across all regulators is hardly generous, and it remains to be seen how this will be allocated.)

5. Centralized AI function within government

Recognizing that individual regulators cannot successfully address opportunities and risks presented by AI in isolation (concerns had been raised regarding a risk of regulatory overlaps, gaps and poor coordination across the various UK regulators), a central function will be established within the UK government to "support effective risk monitoring, regulator coordination, and knowledge exchange."

The government has started undertaking cross-sectoral risk monitoring and has committed to launching a targeted consultation on a cross-economy AI risk register during 2024. The aim of the register will be to provide a single source of truth on AI risks that regulators, government departments and external groups can use.

In addition, the government is considering the added value of developing a specific risk management framework for AI, similar to the one developed in the US by the National Institute of Standards and Technology.

Practical Impact

Organizations developing or deploying AI systems in the UK may be relieved that they won't have to comply with a prescriptive mandatory regime for AI. However, companies will need to ensure compliance with existing law governing AI development and deployment, and keep a close eye out for AI updates from all applicable UK regulators. Despite the government's best efforts, there is certainly scope for divergence in the approach taken by different regulators, which may prove challenging.

The ICO, for example, has already initiated enforcement action and investigations where the deployment of AI involves personal data processing (with particular interest in some areas, for example, biometrics and children's data). This is a notable warning sign: whilst the general approach in the UK will be one of principles as opposed to prescriptive AI legislation, this should not be taken to mean that AI won't attract scrutiny or enforcement risk.

Also, although the UK has intentionally taken a very different approach from the EU AI Act, if you are a UK-based organization with cross-border operations in the EU, you will still need to assess the potential impact of the EU AI Act on your business. For the latest on the progress of the EU AI Act, please see our latest EU AI Act update.

And global businesses will need to build a Responsible AI governance framework that takes into account an increasingly complex patchwork of local AI regulation in different jurisdictions and can be flexed to take account of new developments.

It’s only February and we have already had EU and UK updates on AI regulation. Who’s next? To keep up-to-date on all AI news, keep an eye on InsightPlus for further updates.

Contact Information
Karen Battersby
Director of Knowledge for Industries and Clients
London
karen.battersby@bakermckenzie.com
Kathy Harford
Lead Knowledge Lawyer – IP and Data & Technology
London
kathy.harford@bakermckenzie.com

Copyright © 2024 Baker & McKenzie. All rights reserved. Ownership: This documentation and content (Content) is a proprietary resource owned exclusively by Baker McKenzie (meaning Baker & McKenzie International and its member firms). The Content is protected under international copyright conventions. Use of this Content does not of itself create a contractual relationship, nor any attorney/client relationship, between Baker McKenzie and any person. Non-reliance and exclusion: All Content is for informational purposes only and may not reflect the most current legal and regulatory developments. All summaries of the laws, regulations and practice are subject to change. The Content is not offered as legal or professional advice for any specific matter. It is not intended to be a substitute for reference to (and compliance with) the detailed provisions of applicable laws, rules, regulations or forms. Legal advice should always be sought before taking any action or refraining from taking any action based on any Content. Baker McKenzie and the editors and the contributing authors do not guarantee the accuracy of the Content and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the Content. The Content may contain links to external websites and external websites may link to the Content. Baker McKenzie is not responsible for the content or operation of any such external sites and disclaims all liability, howsoever occurring, in respect of the content or operation of any such external websites. Attorney Advertising: This Content may qualify as “Attorney Advertising” requiring notice in some jurisdictions. To the extent that this Content may qualify as Attorney Advertising, PRIOR RESULTS DO NOT GUARANTEE A SIMILAR OUTCOME. 
Reproduction: Reproduction of reasonable portions of the Content is permitted provided that (i) such reproductions are made available free of charge and for non-commercial purposes, (ii) such reproductions are properly attributed to Baker McKenzie, (iii) the portion of the Content being reproduced is not altered or made available in a manner that modifies the Content or presents the Content being reproduced in a false light and (iv) notice is made to the disclaimers included on the Content. The permission to re-copy does not allow for incorporation of any substantial portion of the Content in any work or publication, whether in hard copy, electronic or any other form or for commercial purposes.