Hallucinating law
AI hallucinations, i.e., incorrect or misleading outputs from AI models, are often caused by inadequate or biased training data and by flawed model assumptions. These factors can lead a model to learn spurious patterns and to present fabricated content as fact.
- A newly released study by the Stanford Institute for Human-Centered Artificial Intelligence (HAI) documents how prevalent AI hallucinations are in response to legal queries. When confronted with questions on US case law, popular AI chatbots like ChatGPT were more likely to give false answers than correct ones.
- The study’s findings highlight the risks of imprudent and unsupervised use of AI for complex legal questions. Users must be aware that they may receive incorrect or misleading answers.
Mitigation of AI hallucinations
The risks of AI hallucinations can be mitigated through careful model evaluation, human oversight of AI-assisted decisions, and transparent, well-documented data sets that make AI outcomes understandable and traceable. While further innovation is needed to address the issue of AI hallucinations, sensible regulation can guide responsible progress.
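To illustrate what human oversight can look like in practice, the following is a minimal sketch of a review gate for AI-generated legal answers: any answer citing a case that cannot be verified against a trusted source is escalated to a human reviewer instead of being released. All names and data here (KNOWN_CASES, the citation pattern, the example answers) are hypothetical illustrations, not part of any specific product or regulation.

```python
# Minimal sketch of a human-oversight gate for AI-generated legal answers.
# All names and data are hypothetical; a real system would verify citations
# against an authoritative case-law database, not a hardcoded set.

import re

# Stand-in for a trusted database of verified case citations.
KNOWN_CASES = {
    "Marbury v. Madison",
    "Brown v. Board of Education",
}

# Deliberately simplistic "Party v. Party" pattern; real-world citation
# extraction is considerably harder than this.
CITATION_PATTERN = re.compile(
    r"[A-Z][\w.]*(?: [A-Z][\w.]*)* v\. [A-Z][\w.]*(?: (?:of|[A-Z][\w.]*))*"
)

def route_answer(answer: str) -> str:
    """Release an answer only if every cited case can be verified;
    otherwise escalate it to a human reviewer."""
    citations = CITATION_PATTERN.findall(answer)
    unverified = [c for c in citations if c not in KNOWN_CASES]
    if unverified:
        return f"ESCALATE to human review (unverified: {unverified})"
    return "RELEASE"

print(route_answer("The court in Marbury v. Madison established judicial review."))
print(route_answer("It was held in Smith v. Jonesville that robots may practice law."))
```

In a real deployment, the hardcoded set would be replaced by a lookup against an authoritative case-law database, and escalated answers would land in a lawyer's review queue rather than reaching the end user.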
The EU AI Act, which is set to be adopted soon, addresses challenges posed by AI hallucinations, emphasizing data transparency obligations.
- All general-purpose AI models will have to meet data transparency obligations. Their results must be traceable and explainable. This may include explaining how an AI system arrived at its decision, as well as providing information on the data used to train the system and on the system's accuracy (see the sketch after this list).
- High-risk AI systems face much stricter transparency obligations, as well as a requirement of appropriate human oversight.
- The US Executive Order on AI includes similar provisions mandating transparency.
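In practice, traceability of this kind is often implemented as an audit trail that records, for each AI-assisted decision, which model produced it, from what input, and on what data the model was trained. The sketch below shows one possible shape of such a record; the field names and values are assumptions chosen for illustration, not requirements taken from the EU AI Act or the Executive Order.

```python
# Hypothetical audit-log record for an AI-assisted decision, illustrating
# the kind of traceability a transparency obligation may call for.
# Field names are illustrative, not drawn from any statute.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_id: str           # which model produced the output
    model_version: str      # exact version, so results stay reproducible
    prompt: str             # the input the system received
    output: str             # the answer the system produced
    training_data_ref: str  # pointer to documentation of the training data
    timestamp: str          # when the decision was made (UTC)

def log_decision(record: DecisionRecord, path: str = "audit_log.jsonl") -> None:
    """Append one decision record to a JSON-lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    model_id="legal-assistant",
    model_version="2024.03",
    prompt="Summarize the holding of Marbury v. Madison.",
    output="The court established judicial review ...",
    training_data_ref="datasheet-v1.pdf",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```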
Sensible use of AI in business
AI is already an important tool for many businesses, and in the future it will become indispensable. However, an awareness of AI's shortcomings is crucial. Unsupervised use carries a variety of risks, for instance:
- Hidden biases in AI may lead to unwitting discrimination within a company, for example in hiring decisions (see the sketch after this list).
- Entrusting complex legal issues to AI carries the risk of misapplying the law and could expose companies to liability.
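One common way to surface hidden bias in hiring tools is a disparate-impact check such as the "four-fifths rule" from US EEOC guidance, under which a group's selection rate below 80% of the highest group's rate is a red flag warranting review. The sketch below applies this check to invented numbers; the group labels and figures are purely for demonstration.

```python
# Hypothetical disparate-impact check for AI-assisted hiring, using the
# "four-fifths rule" from US EEOC guidance: a selection rate for any group
# below 80% of the highest group's rate is a common red flag.

selections = {
    # group: (applicants, hired) -- illustrative numbers only
    "group_a": (100, 40),
    "group_b": (100, 24),
}

rates = {g: hired / applicants for g, (applicants, hired) in selections.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

A check like this does not prove or disprove discrimination; it only flags outcomes that merit closer human and legal scrutiny.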
Harnessing the power of AI
Companies need to pursue a structured approach to AI. With a robust AI risk management strategy in place, companies can ensure responsible and forward-thinking usage that maximizes the immense potential of AI.
We are happy to assist you with the legal aspects of this journey. Please don't hesitate to reach out with any questions you may have; our dedicated team at Baker McKenzie is here to help.