Europe: Legal safeguarding against AI hallucination pitfalls

In brief

In our commitment to keep you informed of emerging legal challenges, we would like to draw your attention to the issue of AI hallucinations and their implications for the responsible use of AI in your business. This alert aims to provide an overview of the legal ramifications of AI hallucinations, as well as recent regulatory developments in this area.


Hallucinating law

AI hallucinations — that is, incorrect or misleading outputs produced by AI models — are often caused by inadequate or biased training data and flawed model assumptions. These factors can lead an AI model to learn erroneous patterns and, in turn, to hallucinate.

  • A newly released study by the Stanford Institute for Human-Centered Artificial Intelligence (HAI) documents the particular prevalence of AI hallucinations in response to legal queries. When confronted with questions on US case law, popular AI chatbots like ChatGPT were more likely to give false answers than correct ones.
  • The study’s findings highlight the risks of imprudent and unsupervised use of AI for complex legal questions. Users need to be aware of the risk of receiving incorrect or misleading answers.

Mitigation of AI hallucinations

The risks of AI hallucinations can be mitigated through careful model evaluation, establishing human oversight, and prioritizing transparent data sets to make AI outcomes understandable and traceable. While further innovation is needed to address the issue of AI hallucinations, sensible regulation can guide responsible progress.

The EU AI Act, which is set to be adopted soon, addresses challenges posed by AI hallucinations, emphasizing data transparency obligations.

  • All general-purpose AI models will have to meet data transparency obligations. Their results must be traceable and explainable. This may include providing explanations of how an AI system arrived at its decision, as well as information on the data used to train the system and the accuracy of the system.
  • High-risk AI systems face much stricter transparency obligations and must additionally be subject to appropriate human oversight.
  • The US Executive Order on AI includes similar provisions mandating transparency. 

Sensible use of AI in business

AI is already an important tool for many businesses, and it will become indispensable in the future. However, an awareness of AI's shortcomings is crucial. Unsupervised use carries a variety of risks, for instance:

  • Hidden biases in AI may lead to unwitting discrimination within a company, for example, in hiring decisions.
  • Entrusting AI with complex legal issues carries the risk of incorrect application of the law and could expose companies to liability.

Harnessing the power of AI

Companies need to pursue a structured approach to AI. With a robust AI risk management strategy in place, companies can ensure responsible and forward-thinking usage that maximizes the immense potential of AI.

We are here to assist you with the legal aspects of this journey. Please don't hesitate to reach out to our dedicated team at Baker McKenzie with any questions you may have.


Copyright © 2024 Baker & McKenzie. All rights reserved. Ownership: This documentation and content (Content) is a proprietary resource owned exclusively by Baker McKenzie (meaning Baker & McKenzie International and its member firms). The Content is protected under international copyright conventions. Use of this Content does not of itself create a contractual relationship, nor any attorney/client relationship, between Baker McKenzie and any person. Non-reliance and exclusion: All Content is for informational purposes only and may not reflect the most current legal and regulatory developments. All summaries of the laws, regulations and practice are subject to change. The Content is not offered as legal or professional advice for any specific matter. It is not intended to be a substitute for reference to (and compliance with) the detailed provisions of applicable laws, rules, regulations or forms. Legal advice should always be sought before taking any action or refraining from taking any action based on any Content. Baker McKenzie and the editors and the contributing authors do not guarantee the accuracy of the Content and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the Content. The Content may contain links to external websites and external websites may link to the Content. Baker McKenzie is not responsible for the content or operation of any such external sites and disclaims all liability, howsoever occurring, in respect of the content or operation of any such external websites. Attorney Advertising: This Content may qualify as “Attorney Advertising” requiring notice in some jurisdictions. To the extent that this Content may qualify as Attorney Advertising, PRIOR RESULTS DO NOT GUARANTEE A SIMILAR OUTCOME. 
Reproduction: Reproduction of reasonable portions of the Content is permitted provided that (i) such reproductions are made available free of charge and for non-commercial purposes, (ii) such reproductions are properly attributed to Baker McKenzie, (iii) the portion of the Content being reproduced is not altered or made available in a manner that modifies the Content or presents the Content being reproduced in a false light and (iv) notice is made to the disclaimers included on the Content. The permission to re-copy does not allow for incorporation of any substantial portion of the Content in any work or publication, whether in hard copy, electronic or any other form or for commercial purposes.