Recommended actions
The Act regulates activities across the AI lifecycle. Developers and deployers of AI technologies who have not already conducted a risk assessment to identify the Act's impact on their businesses should start now: they should assess their AI systems to determine whether they will be subject to the Act once it becomes applicable, and identify the risk category into which those systems will fall.
Read our previous post for specific recommendations on how to meet these obligations.
In more detail
The EU AI Act becomes generally applicable on 2 August 2026. However, companies should be aware now of a number of provisions with earlier and later implementation deadlines, reflecting the risk-based categorization of AI systems:
1 August 2024: The EU AI Act enters into force.
2 February 2025: The ban on 'prohibited systems' takes effect. These include the use of subliminal techniques, systems that exploit vulnerable groups, biometric categorization, social scoring, individual predictive policing, facial recognition systems using untargeted scraping, emotion recognition systems in workplaces and educational institutions, and 'real-time' remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement. In some use cases, qualifying thresholds must be met before a system is prohibited, and certain limited exceptions apply.
2 May 2025: The AI Office to facilitate the development of codes of practice covering obligations on providers of general-purpose AI (GPAI) models, with Member State and industry participation. A general-purpose AI model is defined under the Act as a model "trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are released on the market." If these codes of practice cannot be finalized by 2 August 2025, or if the AI Office does not consider them adequate, the European Commission may adopt common rules for the implementation of the obligations of GPAI providers.
2 August 2025:
- GPAI governance obligations become applicable. While the obligations imposed on GPAI providers are generally less onerous than those for high-risk systems, providers are subject to requirements relating to technical documentation, having a policy in place to comply with copyright law, and making available a "sufficiently detailed" summary of the content of the training dataset. GPAI models deemed to present "systemic risk" are subject to additional requirements.
- Provisions on notifying authorities become applicable, and Member States must have appointed competent authorities and implemented rules on penalties and administrative fines.
2 February 2026: The European Commission, in consultation with the European Artificial Intelligence Board, to develop guidelines on the practical implementation of the Act, along with a comprehensive list of practical examples of high-risk and non-high-risk AI use cases.
2 August 2026:
- The Act becomes generally applicable. Specifically, obligations on high-risk AI systems listed in Annex III (including AI systems in biometrics, critical infrastructure, education, employment, access to essential public and defined private services, law enforcement, immigration, and administration of justice) come into effect. These include pre-market conformity assessments, quality and risk management systems, and post-market monitoring.
- Member States are required to have implemented at least one regulatory sandbox on AI at a national level.
2 August 2027: Obligations on high-risk systems apply to products already required to undergo third-party conformity assessments. This includes products such as toys, radio equipment, in vitro diagnostic medical devices, and agricultural vehicles. GPAI models placed on the market before 2 August 2025 must comply with the Act's obligations by this date.
31 December 2030: AI systems that are components of the large-scale IT systems listed in Annex X that have been placed on the market or put into service before 2 August 2027 must be brought into compliance with the Act.
Baker McKenzie has a team of dedicated experts who can help you with all aspects of EU AI Act compliance, Responsible AI governance, and related policies and processes.
We would like to thank our colleagues Karen Battersby, Helen Davenport, Kathy Harford, and Megan McGleenon for their contributions to this article.