European Union: EU AI Act published – Dates for action

In brief

The long-awaited EU AI Act ("Act") was published in the Official Journal of the European Union on 12 July 2024. The countdown for its implementation has now started for companies developing or deploying AI technologies: the Act entered into force on 1 August 2024, 20 days after its publication. The Act becomes generally applicable two years after that date, on 2 August 2026 – however, companies should be aware now of a number of provisions with different implementation deadlines, reflecting the risk-based categorization of AI systems.



Recommended actions

The Act regulates activities across the AI lifecycle. Developers and deployers of AI technologies who have not already conducted a risk assessment to identify the Act's impact on their businesses should start now: they should assess their AI systems to determine whether they will be subject to the Act once it becomes applicable, and identify the risk category into which each system will fall.

Read our previous post for specific recommendations on how to meet these obligations.

In more detail

The EU AI Act becomes generally applicable on 2 August 2026 – however, companies should be aware now of a number of provisions with different implementation deadlines, reflecting the risk-based categorization of AI systems:


1 August 2024: The EU AI Act enters into force.

2 February 2025: The ban on prohibited AI practices takes effect. These include:

  • the use of subliminal or manipulative techniques;
  • systems that exploit the vulnerabilities of specific groups;
  • biometric categorization systems that infer sensitive characteristics;
  • social scoring;
  • predicting the risk of an individual committing a criminal offence based solely on profiling;
  • the creation of facial recognition databases through untargeted scraping;
  • emotion recognition systems in workplaces and educational institutions; and
  • 'real-time' remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement.

For some of these use cases, qualifying thresholds must be met before the system is prohibited, and certain limited exceptions apply.

2 May 2025: The AI Office is to facilitate the development of codes of practice covering the obligations on providers of general-purpose AI (GPAI) models, with Member State and industry participation. A general-purpose AI model is defined under the Act as a model "trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are released on the market." If these codes of practice cannot be finalized by 2 August 2025, or if the AI Office does not consider them adequate, the European Commission may adopt common rules for the implementation of the obligations of GPAI providers.

2 August 2025:

  • GPAI governance obligations become applicable. While the obligations imposed on providers of GPAI models are generally less onerous than those for high-risk systems, providers must draw up technical documentation, put in place a policy to comply with EU copyright law, and make publicly available a "sufficiently detailed" summary of the content used to train the model. GPAI models deemed to present "systemic risk" are subject to additional requirements.
  • Provisions on notifying authorities and notified bodies become applicable, and Member States must have designated their competent authorities and laid down rules on penalties and administrative fines.

2 February 2026: The European Commission, after consulting the European Artificial Intelligence Board, is to provide guidelines on the practical implementation of the Act's high-risk classification rules, together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.

2 August 2026:

  • The Act becomes generally applicable. Specifically, obligations on the high-risk AI systems listed in Annex III (including AI systems used in biometrics, critical infrastructure, education, employment, access to essential public and private services, law enforcement, migration and border control, and the administration of justice) take effect. These obligations include pre-market conformity assessments, quality and risk management systems, and post-market monitoring.
  • Member States are required to have established at least one AI regulatory sandbox at national level.

2 August 2027: Obligations on high-risk AI systems apply to products already required to undergo third-party conformity assessment under existing EU product legislation, such as toys, radio equipment, in-vitro diagnostic medical devices, and agricultural vehicles. GPAI models placed on the market before 2 August 2025 must be brought into compliance with the Act by this date.

31 December 2030: AI systems that are components of the large-scale IT systems listed in Annex X that have been placed on the market or put into service before 2 August 2027 must be brought into compliance with the Act.

Baker McKenzie has a team of dedicated experts who can help you with all aspects of EU AI Act compliance, Responsible AI governance, and related policies and processes.

We would like to thank our colleagues Karen Battersby, Helen Davenport, Kathy Harford, and Megan McGleenon for their contributions to this article.
