Recommended actions
While awaiting the formal adoption of the EU AI Act, companies should conduct risk assessments, if they have not already done so, to identify the Act's impact on their businesses.
Accordingly, we recommend that companies:
- audit the development and use of AI within their organization and supply chains
- decide what the company's AI principles and redlines should be (these are likely to include ethical considerations that go beyond the law, as well as parameters set by the EU AI Act)
- assess and augment existing risks and controls for AI where required (including to meet applicable EU AI Act requirements), both at an enterprise and product lifecycle level
- identify relevant AI risk owners and internal governance team(s)
- revisit existing vendor due diligence processes related to both (i) AI procurement and (ii) the procurement of third-party services, products and deliverables, which may be created using AI (in particular, generative AI systems)
- assess existing contract templates and any updates required to mitigate AI risk, and
- continue to monitor AI and AI-adjacent laws, guidance and standards around the world to ensure that the company's AI governance framework is updated in response to further global developments as they arise.
In more detail
The EU AI Act has been some time in the making, starting with the EU Commission's Proposal for a Regulation on AI in 2021. Following the explosion of interest in large language models in 2023, the regulation has had to evolve rapidly to keep pace with technological advancements. Recent delays in the passing of the legislation relate to debates over whether and how the Act should regulate AI foundation models (the advanced generative AI models that are trained on large sets of data and able to learn and perform a variety of tasks), as well as over the use of AI in law enforcement.
The Act takes a prescriptive, risk-based approach to the regulation of AI products. AI is defined in line with the OECD's approach, to distinguish it from simpler software systems. Obligations are imposed on technology providers and deployers based on the risk category into which their technology falls. Technologies that pose "unacceptable" levels of risk are prohibited, while "high-risk" technologies face heavy restrictions. The list of prohibited technologies includes biometric identification systems (subject to narrowly defined law enforcement exceptions), systems that use purposely manipulative techniques, social scoring, predictive policing systems, and emotion recognition systems. Untargeted scraping of facial images from the internet and CCTV footage is banned, and AI used to create manipulated images, such as 'deep fakes', will need to make clear that the images are AI-generated.
Foundation models have been brought within the scope of the Act, which takes a similar tiered and risk-based approach to the obligations imposed on these models. While details of the legislation are still to emerge, the EU has agreed on a two-tiered approach for these models with "transparency requirements for all general-purpose AI models (such as ChatGPT)" and "stronger requirements for powerful models with systemic impacts." An AI Office within the European Commission will be set up to oversee the regulation of the most advanced AI models.
In terms of obligations under the Act, those looking to provide and deploy AI face specific transparency and safety requirements. To limit threats to areas such as health, safety, human rights and democracy, providers of high-risk AI must implement safeguards at stages such as design and testing. This entails assessing and mitigating risks, as well as registering models in an EU database. Certain users of high-risk AI systems that are public entities must also register in the EU database.
Penalties for engaging in prohibited practices are up to EUR 35 million or 7% of a company's annual global turnover. Violations of the Act's other obligations attract penalties of up to EUR 15 million or 3% of turnover, and the supply of incorrect information up to EUR 7.5 million or 1.5%. The Act provides for more proportionate caps on administrative fines for SMEs and start-ups in the case of a breach of its provisions. Exactly how the Act will be enforced remains to be seen.
The provisional agreement makes clear that the EU AI Act does not apply outside the scope of EU law (although it still catches providers of AI systems placed on the EU market, irrespective of whether they are established in the EU) and does not affect member states' competencies in national security. Nor does it apply to AI systems used solely for research and innovation, or to people using AI for non-professional purposes. The Act will apply two years after it enters into force, with some exceptions for specific provisions.
Some technology groups and European companies have raised concerns about the legislation, fearing that it will stifle innovation in Europe, particularly with respect to foundation models. Technology groups argued that the uses of AI, rather than the technology itself, should be regulated (an approach that more closely reflects the one currently being taken in many other parts of the world). However, EU representatives believe that their final negotiations struck a better balance between enabling innovation and promoting responsible technology.
Companies should remember that compliance with the EU AI Act will form only part of a business's Responsible AI governance program. While the EU AI Act may be heralded by the EU as the first comprehensive AI law, there are many AI-related developments being introduced by lawmakers across the world and, of course, regulators are already scrutinizing organizations' compliance with existing laws when it comes to AI (including with respect to data privacy, consumer protection, and discrimination).
Currently, the EU AI Act awaits formal adoption by both the European Parliament and Council before it will become EU law. Recent statements by both supporters and opponents of the provisional agreement indicate a risk that the provisional agreement will not receive its final approval anytime soon. We will continue to monitor this development.
* * * * *
With thanks to Karen Battersby (Director of Knowledge for Industries and Clients) and Kathy Harford (LKL for IP and Data & Technology) for contributing to this alert.
Baker McKenzie’s recognized leaders in AI are supporting multinational companies with strategic guidance for responsible and compliant AI development and deployment. Our industry experts with experience in technology, data privacy, intellectual property, cybersecurity, trade compliance and employment can meet you at any stage of your Responsible AI journey to unpack the latest trends in legislative and regulatory proposals and the corresponding legal risks and considerations for your organization. Please contact a member of our team to learn more.