European Union: EU reaches landmark deal on AI regulation

In brief

8 December 2023 was a historic moment for AI regulation in Europe. Following three days of extensive debates, the EU Parliament, Council and Commission finally announced a provisional agreement on the EU AI Act, the bloc’s landmark legislation regulating the development and use of AI in Europe. It is one of the world’s first comprehensive attempts to regulate the use of AI.


Recommended actions

While awaiting the formal adoption of the EU AI Act, companies that have not yet done so should conduct risk assessments to identify the Act’s impact on their businesses.

Accordingly, we recommend that companies:

  • audit the development and use of AI within their organization and supply chains
  • decide what their AI principles and red lines should be (likely to include ethical considerations that go beyond the law, as well as the parameters set by the EU AI Act)
  • assess and augment existing risks and controls for AI where required (including to meet applicable EU AI Act requirements), both at an enterprise and product lifecycle level
  • identify relevant AI risk owners and internal governance team(s)
  • revisit existing vendor due diligence processes related to both (i) AI procurement and (ii) the procurement of third-party services, products and deliverables, which may be created using AI (in particular, generative AI systems)
  • assess existing contract templates and identify any updates required to mitigate AI risk, and
  • continue to monitor AI and AI adjacent laws, guidance and standards around the world to ensure that the company’s AI governance framework is updated in response to further global developments as they arise.

In more detail

The EU AI Act has been some time in the making, starting with the EU Commission’s Proposal for a Regulation on AI in 2021. Following the explosion of interest in large language models in 2023, the regulation has had to evolve rapidly to keep pace with technological advancements. Recent delays in passing the legislation stem from debates over whether and how the Act should regulate AI foundation models (the advanced generative AI models that are trained on large sets of data and able to learn and perform a wide variety of tasks), as well as over the use of AI in law enforcement.

The Act takes a prescriptive, risk-based approach to the regulation of AI products. AI is defined in line with the OECD’s approach, to distinguish it from simpler software systems. Obligations are imposed on technology producers and deployers based on the risk category into which their technology falls. Technologies that pose an "unacceptable" level of risk are prohibited, while "high-risk" technologies face heavy restrictions. The list of prohibited technologies includes biometric identification systems (subject to narrowly defined law enforcement exceptions), systems that use purposely manipulative techniques, social scoring, and certain predictive policing and emotion recognition systems. Untargeted scraping of facial images from the internet or CCTV footage is banned, and AI used to create manipulated images, such as "deepfakes", will need to make clear that the images are generated by AI.

Foundation models have been brought within the scope of the Act, which takes a similarly tiered, risk-based approach to the obligations imposed on these models. While details of the legislation are still to emerge, the EU has agreed on a two-tiered approach for these models, with "transparency requirements for all general-purpose AI models (such as ChatGPT)" and "stronger requirements for powerful models with systemic impacts." An AI Office within the European Commission will be set up to oversee the regulation of the most advanced AI models.

In terms of obligations under the Act, those looking to provide and deploy AI face specific transparency and safety requirements. To limit threats to areas such as health, safety, human rights and democracy, providers of high-risk AI must build in safeguards at stages such as design and testing. This entails assessing and mitigating risks, as well as registering models in an EU database. Certain users of high-risk AI systems that are public entities must also register in the EU database.

Penalties for prohibited practices are up to EUR 35 million or 7% of a company’s annual global turnover, while violations of the Act’s other obligations attract penalties of up to EUR 15 million or 3% of turnover, and the supply of incorrect information up to EUR 7.5 million or 1.5%. There is provision for more proportionate caps on administrative fines for SMEs and start-ups that breach the provisions of the AI Act. Exactly how the Act will be enforced remains to be seen.

The provisional agreement makes clear that the EU AI Act does not apply to areas outside the scope of EU law and does not affect member states’ competencies in national security. The Act nevertheless catches providers of AI systems placed on the EU market, irrespective of whether those providers are established in the EU. It also does not apply to AI systems used solely for research and innovation, or to people using AI for non-professional reasons. The Act will apply two years after it comes into force, with some exceptions for specific provisions.

Some technology groups and European companies have raised concerns about the legislation, fearing that it will stifle innovation in Europe, particularly with respect to foundation models. Technology groups have argued that the uses of AI, rather than the technology itself, should be regulated (an approach that more closely reflects the one currently being taken in many other parts of the world). However, EU representatives believe that their final negotiations have struck a better balance between enabling innovation and promoting responsible technology.

Companies should remember that compliance with the EU AI Act will form only part of a business’s Responsible AI governance program. While the EU AI Act may be heralded by the EU as the first comprehensive AI law, many AI-related developments are being introduced by lawmakers across the world and, of course, regulators are already scrutinizing organizations’ compliance with existing laws when it comes to AI (including with respect to data privacy, consumer protection and discrimination).

The EU AI Act now awaits formal adoption by both the European Parliament and the Council before it becomes EU law. Recent statements by both supporters and opponents of the provisional agreement indicate a risk that it will not receive final approval anytime soon. We will continue to monitor developments.

* * * * *

With thanks to Karen Battersby (Director of Knowledge for Industries and Clients) and Kathy Harford (LKL for IP and Data & Technology) for contributing to this alert.

Baker McKenzie’s recognized leaders in AI are supporting multinational companies with strategic guidance for responsible and compliant AI development and deployment. Our industry experts with experience in technology, data privacy, intellectual property, cybersecurity, trade compliance and employment can meet you at any stage of your Responsible AI journey to unpack the latest trends in legislative and regulatory proposals and the corresponding legal risks and considerations for your organization. Please contact a member of our team to learn more.


Copyright © 2024 Baker & McKenzie. All rights reserved.

Ownership: This documentation and content (Content) is a proprietary resource owned exclusively by Baker McKenzie (meaning Baker & McKenzie International and its member firms). The Content is protected under international copyright conventions. Use of this Content does not of itself create a contractual relationship, nor any attorney/client relationship, between Baker McKenzie and any person.

Non-reliance and exclusion: All Content is for informational purposes only and may not reflect the most current legal and regulatory developments. All summaries of the laws, regulations and practice are subject to change. The Content is not offered as legal or professional advice for any specific matter. It is not intended to be a substitute for reference to (and compliance with) the detailed provisions of applicable laws, rules, regulations or forms. Legal advice should always be sought before taking any action or refraining from taking any action based on any Content. Baker McKenzie and the editors and the contributing authors do not guarantee the accuracy of the Content and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the Content. The Content may contain links to external websites and external websites may link to the Content. Baker McKenzie is not responsible for the content or operation of any such external sites and disclaims all liability, howsoever occurring, in respect of the content or operation of any such external websites.

Attorney Advertising: This Content may qualify as “Attorney Advertising” requiring notice in some jurisdictions. To the extent that this Content may qualify as Attorney Advertising, PRIOR RESULTS DO NOT GUARANTEE A SIMILAR OUTCOME.

Reproduction: Reproduction of reasonable portions of the Content is permitted provided that (i) such reproductions are made available free of charge and for non-commercial purposes, (ii) such reproductions are properly attributed to Baker McKenzie, (iii) the portion of the Content being reproduced is not altered or made available in a manner that modifies the Content or presents the Content being reproduced in a false light and (iv) notice is made to the disclaimers included on the Content. The permission to re-copy does not allow for incorporation of any substantial portion of the Content in any work or publication, whether in hard copy, electronic or any other form or for commercial purposes.