Austria: Artificial intelligence and HR compliance - What to consider?

In brief

Artificial intelligence (AI) systems can help improve work processes, yet they also carry risks of liability, penalties, and reputational damage. Companies deploying AI must understand their responsibilities and obligations under the current regulatory frameworks within the EU and the anticipated requirements of the EU AI Act. For HR departments in particular, it is prudent to implement the following basic principles regarding AI.

Introduction of standardized AI rules

Before implementing AI, companies need to carefully plan their AI strategy and introduce internal company guidelines on AI. These guidelines should be set out in a corporate-wide AI policy and should include, among others, the following rules:

  • The use of AI should only be permitted after employees have completed introductory AI training, and continued use should be conditional on completing any further AI training courses prescribed by the company.
  • Employees may only use AI systems approved by the company and only for business purposes.
  • AI may only be used for certain tasks, e.g., creating tables and texts, revising presentations, summarizing articles, and creating LinkedIn posts.
  • Employees should ensure that end products or work results produced by AI are clearly labeled as being generated by AI.
  • Employees should not input business secrets, confidential information, or personal data into the AI system.
  • Employees must check AI-generated content for accuracy and compliance with applicable laws. If it is unclear whether the content is compliant, employees must seek the opinion of the legal department.
  • Employees using AI are required to consult with the legal department before publishing AI-generated content to ensure compatibility with IP rights, personal rights, and trademark rights.

Compliance with data protection

AI should only be used in the workplace if the company implements a comprehensive data security strategy. To be compliant with data protection laws, companies must take the following steps before implementing AI systems:

  • The company must assess the types of data categories that AI can process and ensure that the processing is based on a specific legal basis. For non-sensitive data, processing can usually be justified on grounds of legitimate interest. However, when processing sensitive data (e.g., health data, data on trade union membership, information on ethnic origin or sexual orientation, etc.), the processing must be based on a regulation provided by law or a collective agreement (e.g., on the basis of payroll obligations or to fulfill reporting obligations to authorities) or alternatively be based on a works council agreement.
  • Furthermore, data processing via AI must serve a specific and legitimate purpose (e.g., organizing working timetables and shift schedules, organizing employee absences) and may only take place if necessary to achieve that purpose.
  • The company must also inform employees about the data that is being processed, the purpose of the data processing, the means used for the data processing, and the legal basis for the processing.
  • Depending on the technical capabilities of the AI system, the company must also carry out a data protection impact assessment. This is particularly necessary if the impact of AI on the workforce is difficult to assess.
  • When transferring employee data to third parties (e.g., the AI software is not hosted on the company's local servers or if the company allows group companies to access employee data), the company is also obliged to agree terms with third parties on protecting customer and employee-related data. There are further obligations to consider when transferring data to third countries.
  • If AI systems enable automated decision-making in HR management, the company must ensure that the automated decision-making process is only a preliminary process and that the final decision is made by a human being.
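By way of illustration only, the "human in the loop" requirement for automated decision-making can be sketched as a simple data model in which the AI output is recorded as preliminary and only a named human reviewer can produce a binding decision. All names below are hypothetical and not taken from any specific HR system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningResult:
    """Outcome of an AI-assisted HR screening step.

    The AI recommendation is stored separately from the final decision,
    which starts out empty and may only be set by a human reviewer.
    """
    candidate_id: str
    ai_recommendation: str                 # e.g., "reject" or "proceed" (preliminary only)
    final_decision: Optional[str] = None   # binding decision, set by a human
    decided_by: Optional[str] = None       # name of the human reviewer

def record_human_decision(result: ScreeningResult, reviewer: str, decision: str) -> ScreeningResult:
    """Record the binding decision. Only this step, performed by a named
    human reviewer, turns the preliminary AI output into a final decision."""
    result.final_decision = decision
    result.decided_by = reviewer
    return result
```

In such a design, any downstream process (e.g., sending a rejection letter) would read only `final_decision`, never `ai_recommendation`, so the AI result can never take effect without human sign-off.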

In addition, employees can submit requests to the company for information about their personal data being processed. If necessary, employees can escalate their concerns to the data protection authority.

Compliance with the co-determination rights of the workforce

  • A company must obtain the works council's consent via a works council agreement before implementing AI models.
  • Where there is no competent works council, a company needs to obtain the individual contractual consent of the employees if the AI system is invasive.
  • Where there is a competent works council, companies must proactively inform the works council about the following before using AI for the first time: the data categories of the processed employee data, the software used, the specific programs installed, any evaluation and processing procedures (e.g., the possibility of linking, duplicating, or changing data), and any recipients of this data.
  • Furthermore, the works council has a right to be consulted and advised on the health effects of the AI system on the workforce. The works council can also request access to the AI system.

Risks of non-compliance

Penalties: Employees can initiate a complaint against their employer for breaches of data protection obligations. If the data protection authority confirms the existence of violations, this may trigger administrative fines of up to EUR 20 million or up to 4% of the annual global turnover, whichever is higher.
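For illustration, the "whichever is higher" fine ceiling can be expressed as a simple calculation (a minimal sketch; the function name is hypothetical, and actual fines are set case by case by the data protection authority within this ceiling):

```python
def gdpr_fine_ceiling(annual_global_turnover_eur: int) -> int:
    """Upper limit of an administrative fine for the most serious GDPR
    breaches: EUR 20 million or 4% of annual global turnover,
    whichever is higher."""
    return max(20_000_000, annual_global_turnover_eur * 4 // 100)
```

For example, a group with EUR 1 billion in annual global turnover faces a ceiling of EUR 40 million (4% of turnover), while for a smaller company with EUR 100 million in turnover the flat EUR 20 million ceiling applies.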

Damages and discrimination cases: Employees have the right to sue for damages in the event of data protection violations or discriminatory AI outcomes, and to initiate discrimination proceedings before the authorities and courts.

Injunctions: In the event of non-compliance with co-determination rights, the works council can enforce the deactivation of the AI system via a court ruling, in some cases even via a preliminary injunction.

Reputational damage: Non-compliance exposes companies not only to financial and legal risks. Because proceedings before administrative authorities and courts are open to the public, it can also cause reputational damage.


Several steps must be taken before implementing any AI system in order to leverage its potential while ensuring HR compliance. These steps will also help organizations prepare for their obligations under the EU AI Act. Not only is it necessary to introduce clear AI guidelines, but it is also vital to create awareness of AI within the workforce. This is the only way to prevent, or at least reduce, liability, penalties, and reputational damage. In addition, early engagement with the works council on the potential implementation of AI will help accelerate the implementation process.


Copyright © 2024 Baker & McKenzie. All rights reserved. Ownership: This documentation and content (Content) is a proprietary resource owned exclusively by Baker McKenzie (meaning Baker & McKenzie International and its member firms). The Content is protected under international copyright conventions. Use of this Content does not of itself create a contractual relationship, nor any attorney/client relationship, between Baker McKenzie and any person. Non-reliance and exclusion: All Content is for informational purposes only and may not reflect the most current legal and regulatory developments. All summaries of the laws, regulations and practice are subject to change. The Content is not offered as legal or professional advice for any specific matter. It is not intended to be a substitute for reference to (and compliance with) the detailed provisions of applicable laws, rules, regulations or forms. Legal advice should always be sought before taking any action or refraining from taking any action based on any Content. Baker McKenzie and the editors and the contributing authors do not guarantee the accuracy of the Content and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the Content. The Content may contain links to external websites and external websites may link to the Content. Baker McKenzie is not responsible for the content or operation of any such external sites and disclaims all liability, howsoever occurring, in respect of the content or operation of any such external websites. Attorney Advertising: This Content may qualify as “Attorney Advertising” requiring notice in some jurisdictions. To the extent that this Content may qualify as Attorney Advertising, PRIOR RESULTS DO NOT GUARANTEE A SIMILAR OUTCOME. 
Reproduction: Reproduction of reasonable portions of the Content is permitted provided that (i) such reproductions are made available free of charge and for non-commercial purposes, (ii) such reproductions are properly attributed to Baker McKenzie, (iii) the portion of the Content being reproduced is not altered or made available in a manner that modifies the Content or presents the Content being reproduced in a false light and (iv) notice is made to the disclaimers included on the Content. The permission to re-copy does not allow for incorporation of any substantial portion of the Content in any work or publication, whether in hard copy, electronic or any other form or for commercial purposes.