Background
Workplace technology is already regulated by a patchwork of legislation at EU and local level. This includes the General Data Protection Regulation1, the AI Act2 and local legislation implementing directives on information and consultation requirements3 and transparent working conditions4. The Platform Work Directive5, which must be implemented by the Member States by the end of 2026, will also apply in some cases.
While the Committee draft report acknowledges the existing protections, it highlights a regulatory gap in addressing the broader impact of digital management tools on workers' rights, working conditions, and social dialogue. The AI Act, for example, is primarily a product safety regulation and does not apply to algorithmic management systems that fall outside the Act's definition of 'AI'. The Platform Work Directive regulates algorithmic management systems more broadly, but protects only those carrying out platform work rather than workers generally. The GDPR's general prohibition on automated decisions applies only to fully automated decision-making. Similarly, the Information and Consultation Directive sets out only a general framework for the collective right to information and consultation, while the Directive on Transparent and Predictable Working Conditions is not specific enough to address the complexity of algorithmic management systems and the information that should be provided about them.
Against this background, the draft report makes a number of recommendations and calls on the European Commission to initiate legislation that, for the first time, specifically and comprehensively regulates algorithmic management in the workplace with the goal of fostering trust, protecting workers' rights, and supporting fair digital transformation across the EU labour market.
The key provisions of the draft directive
Key provisions of the draft directive include:
- Defining 'algorithmic management' very broadly as 'the use of automated systems to monitor, supervise, evaluate, or make or support decisions—by electronic means—regarding the work performance and working conditions of workers, including systems that process personal data to oversee activities within the work environment, as well as systems that take or support decisions significantly affecting workers or solo self-employed persons, such as the organisation of work assignments, earnings, safety and health, working time, access to training, promotion, and contractual status'.
- Ensuring workers, their representatives, and solo self-employed individuals are informed in writing about algorithmic management systems affecting their work, including:
- A clear statement that algorithmic management systems are in use or intended to be implemented, including a general description of their purpose;
- Any data categories collected and processed, including behaviour and performance-related data, as well as the types of actions monitored; and
- Whether this data is being used to carry out automated decision-making, and if so, a description of the nature and scope of such decisions.
The information would need to be provided on or before the first working day, before implementing any changes to current systems that substantially affect working conditions, and at any time upon request.
- Mandating consultation with employee representatives before deploying or updating such systems, covering the objectives behind the deployment or update, the work processes and workers affected, impacts on workload, scheduling, working time, and health and safety, the types of data collected, protection measures, human oversight mechanisms, and training for those affected.
- Prohibiting the processing of certain sensitive personal data (e.g., emotional state, private conversations, off-duty behaviour).
- Requiring effective human oversight for critical decisions such as hiring, termination, and changes to compensation, and empowering workers and solo self-employed individuals with the right to request a review of the functioning of algorithmic systems.
- Introducing health and safety assessments for algorithmic management tools, including the evaluation of risks of work-related accidents, psychosocial and ergonomic risks, and undue work pressure, and the introduction of appropriate safeguards to mitigate those risks. National labour inspectorates would also be assigned oversight responsibilities, not only with respect to health and safety requirements but also regarding the non-discriminatory use of algorithmic systems.
Next steps
The Committee on Employment and Social Affairs must still approve the draft report before it is considered and voted on by the European Parliament. On current schedules, this will not take place until November this year. If the report is passed, the European Commission, as the effective 'gatekeeper' of most EU legislation, would then need to consider the European Parliament's request to initiate legislation. If it agrees, the usual legislative process would begin, involving the participation of, and negotiation between, the European Parliament and the Council of the EU.
Practical impact
The definition of 'algorithmic management' in the draft directive is extremely broad and would capture a wide range of existing workforce systems and future technological developments, including those that do not incorporate AI (as defined under the AI Act or otherwise).
That said, the continuing and rapid development of AI models and their use in all kinds of business processes, including people processes, is clearly one of the key drivers behind the Committee's recommendations and the proposed directive.
Further breakthroughs are to be expected, for example the development of Artificial General Intelligence (AGI), meaning that even complex management tasks will in future be performed, at least in part, by AI. The proposed directive appears to accept that such near-future working environments are, in principle, legally permissible. In this context, it is welcome that employee representatives would not, as is currently possible in some EU jurisdictions, generally be able to block such developments.
However, if enacted and incorporated into national laws, these rules would require companies to conduct comprehensive audits of existing systems to identify risks and assess the impact of algorithmic management on workers and solo self-employed individuals. Organizations would need to establish robust grievance mechanisms for employees affected by decisions made by algorithmic systems, and critically review, or develop anew, policies across key areas such as data protection, performance evaluation, and employee rights (potentially including a right to disconnect). Targeted training would also become essential, not only for those directly interacting with the relevant systems, but also for the managers and HR professionals who must understand, oversee, and navigate these technologies.
Although the implementation of any workforce-specific regulation is still some way off, given the potential effect of these proposals, it is worth paying close attention to their progress through the legislative process.
1 Regulation (EU) 2016/679 (General Data Protection Regulation), EUR-Lex
2 Regulation (EU) 2024/1689 (AI Act), EUR-Lex
3 Directive 2002/14/EC (Information and Consultation Directive), EUR-Lex
4 Directive (EU) 2019/1152 (Transparent and Predictable Working Conditions Directive), EUR-Lex
5 Directive (EU) 2024/2831 (Platform Work Directive), EUR-Lex