North America: The legal playbook for AI in HR - Five practical steps to help mitigate your risk

In brief

By and large, HR departments are proving to be ground zero for enterprise adoption of artificial intelligence technologies. AI can be used to collect and analyze applicant data, productivity, performance, engagement, and risk to company resources. However, with the recent explosion of interest in AI and the avalanche of new AI technologies, the use of these tools is garnering more scrutiny from regulators and, in some cases, employees. At the same time, organizations are eager to adopt more AI internally to capitalize on productivity and efficiency gains, and in-house attorneys are often under pressure from internal clients to quickly review and sign off on new tools, as well as new functionality within existing tools.



This is especially challenging given the onslaught of new regulations, the patchwork of existing data protection and discrimination laws, and heightened regulatory enforcement. For example, there has been a considerable uptick in European data protection authorities investigating how organizations deploy workforce AI tools in the monitoring space, including time and activity trackers, video surveillance, network and email monitoring, and GPS tracking. Authorities have issued substantial fines for alleged privacy law violations, including for "unlawfully excessive" or "disproportionate" collection. In one recent case, the French data protection authority imposed a USD 34 million fine related to a multinational e-commerce company's use of a workplace surveillance system.

The AI regulatory landscape is rapidly evolving, and in most places compliance is still voluntary. However, organizations should build their AI governance programs to include key privacy, data protection, intellectual property, anti-discrimination, and other concepts – and HR tools are a good place to start, given their widespread use and the increased scrutiny they attract. Legal departments should consider these five key actions:

(1) Understand current use of AI technologies

As a starting point, organizations should understand which AI tools are used, how they are deployed, what data is collected, and how that data is used. Organizations may be using these tools throughout the employment life cycle, including in recruitment, onboarding, and HR and performance management, and many have used these technologies for years without much legal oversight. Circulating a questionnaire or survey among HR professionals, hiring managers, and business leaders (including members of the company's IT and information security departments), or otherwise taking an inventory of all existing tools, is an essential first step toward mitigating risk.

(2) Review recent changes to the regulatory and enforcement landscape

The EU AI Act, which became effective 1 August 2024, is the first comprehensive AI regulatory framework. Enforcement of the Act is set to begin in 2026, and it aims to regulate AI systems that may impact people in the EU. Like the EU's General Data Protection Regulation, the Act applies to organizations around the world if their systems are used within the EU. Under the Act, systems are classified by risk level, which determines the applicable compliance requirements. Some systems are identified as posing an "unacceptable risk," and their use is prohibited; this includes the use of AI-based emotion-recognition systems in the workplace.

Recruitment and employment-management systems, including systems to place targeted job advertisements, to analyze and filter applications, to evaluate job candidates, to monitor and evaluate performance, or to make decisions about employment, are considered high-risk under the Act, and employers are required to explain and document their use of such systems. GDPR principles of lawful processing, transparency, accuracy, purpose limitation, data minimization, storage limitation, integrity, and confidentiality also still apply. In most cases, organizations using these tools will be required to conduct a data protection impact assessment and to provide for human intervention. In some jurisdictions, introducing new technology that impacts the workforce will trigger information and consultation obligations with worker representatives under local law; in some cases, their consent to implementation may be required.

In the US, while the AI regulatory landscape is still evolving, many of the laws already in place focus on AI tools in human resources. A patchwork of federal and state regulation in the employment and privacy space may apply to these technologies, enforced by the Federal Trade Commission, the Department of Labor, and state Attorneys General, among others. Recently, the US Consumer Financial Protection Bureau issued guidance reminding organizations that compliance with the Fair Credit Reporting Act is still required when using AI for employment decisions.

In August, Illinois became the second state, after Colorado, to target workplace algorithmic discrimination. H.B. 3773, effective 1 January 2026, makes it unlawful for organizations to use HR AI tools that could discriminate based on a protected class. As in Colorado, Illinois companies must also notify applicants and employees when using AI for various HR functions. (Read more about the IL and CO legislation here.) Additionally, in 2023, New York City began enforcing a law imposing strict requirements on employers that use automated employment decision tools to conduct or assist with hiring or promotion decisions in NYC. The law prohibits the use of such tools unless the company publishes on its website a summary of an independent bias audit of the tool. (Read more about NYC's AEDT ordinance here.)

(3) Data minimization is still paramount

Before deploying these technologies, employers should review the tool in detail and determine the legal basis and necessity for the collection and processing of personal data. Like the EU GDPR, the California Consumer Privacy Act includes data minimization principles that require all data processing activities to be assessed for necessity and proportionality. (Read more about obligations under the CCPA here.)

(4) Always keep a human in the loop

In October, the US Department of Labor published "Artificial Intelligence and Worker Well-Being: Principles and Best Practices for Developers and Employers." This non-binding guidance prioritizes the well-being of workers in the development and deployment of AI in the workplace. DOL urges employers to establish governance structures to oversee the implementation of AI systems and to keep a human in the loop for any employment decisions. Training HR staff and managers on the proper use of AI in hiring and other employment-related decisions is therefore critical. DOL's recommendations align with jurisdiction-specific AI laws and regulations in places like New York City, Colorado, and Illinois, and will likely dovetail with further regulation to come in this space.

(5) Assess and document risk

Organizations should review these technologies for legal, ethical, and reputational risk, including issues related to data privacy, cybersecurity, intellectual property, employment, and vendors and suppliers. Some laws already require such assessments, and many new laws are expected to include similar requirements.


Baker McKenzie has invested significant resources in developing our AI capabilities.

Wherever your organization is on its AI journey, our cross-disciplinary team of experts can help you capitalize on these new opportunities while mitigating risk.

For legal counsel responsible for mitigating risk related to AI in HR, we've developed a three-part framework:

  1. Know the law: Multijurisdictional Matrix, addressing all of the applicable regulations and compliance steps where you have headcount
  2. Know what to do: Practical AI in HR Implementation Step Lists and Compliance Checklists for use in certain key jurisdictions, including where and when it is necessary or advisable to negotiate with works councils or worker representatives
  3. Know how to document: Appropriate employment documentation (e.g., standard policies, notices and consents, etc.) and internal training programs.

Please contact us for more information.
