Australia: New safety measures introduced for AI

The Australian Government has introduced its first iteration of the 'Voluntary AI Safety Standard' and released a proposals paper on mandatory guardrails for AI in high-risk settings.

In brief

On 5 September 2024, the Australian Government (Department of Industry, Science and Resources) introduced the Voluntary AI Safety Standard (Voluntary Standard). The Voluntary Standard provides practical guidance for organisations involved in the AI supply chain through ten voluntary guardrails, which focus on testing, transparency, and accountability requirements.

In addition to this Voluntary Standard, ten mandatory guardrails for AI systems in "high-risk" settings have been proposed in the Australian Government's 'Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings' (Proposals Paper).

The Proposals Paper also outlines three potential regulatory options to mandate the proposed guardrails in high-risk AI settings:

  1. A domain-specific approach: adapting existing regulatory frameworks to include the guardrails through targeted review of existing legislation.
  2. A framework approach: adapting existing regulatory frameworks through framework legislation.
  3. A whole-of-economy approach: introducing a new AI-specific Act to implement the proposed mandatory guardrails for AI in high-risk settings.

The Australian Government is seeking submissions on the Proposals Paper as part of its consultation, which closes on Friday, 4 October 2024.

In more detail

The new safety measures are based on the Australian Government's interim response to the 'Safe and Responsible AI in Australia' discussion paper, which was released earlier this year and discussed in our client alert. That interim response committed to developing the Voluntary Standard and considering the introduction of mandatory safeguards for AI in high-risk settings.

The Guardrails under the Voluntary Standard

At a glance, the ten voluntary guardrails in the Voluntary Standard include: 

  1. Establish, implement, and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
  2. Establish and implement a risk management process to identify and mitigate risks;
  3. Protect AI systems, and implement data governance measures to manage data quality and provenance;
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed;
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight;
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
  7. Establish processes for people impacted by AI systems to challenge use or outcomes;
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
  9. Keep and maintain records to allow third parties to assess compliance with guardrails; and
  10. Engage stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

Proposed Mandatory Guardrails

The first nine proposed mandatory guardrails set out in the Proposals Paper are identical in form to the first nine voluntary guardrails under the Voluntary Standard. The tenth proposed mandatory guardrail differs as follows:

  10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails.

Both the voluntary guardrails and proposed mandatory guardrails are intended to align with national and international standards, including ISO/IEC 42001:2023 (Artificial Intelligence Management System) and the developments in AI regulation in jurisdictions such as the EU, Canada, and the UK. 

Application

Despite this alignment in form, the Voluntary Standard has a wider application than the proposed mandatory guardrails. The Voluntary Standard applies to all organisations, including:

  • AI developers: an organisation or entity that designs, develops, tests and provides AI technologies, such as AI models and components; and
  • AI deployers: an individual or organisation that supplies or uses an AI system to provide a product or service. Deployment can be internal to an organisation, or external and impacting others, such as customers or other people who are not deployers of the system. As most AI deployers rely on AI systems developed or provided by third parties, the Voluntary Standard also provides procurement guidance.

The proposed mandatory guardrails will apply to AI developers and AI deployers, but only in the context of "high-risk" AI settings. The use of AI may be "high-risk" based on:

  • its intended and foreseeable uses: for example, where there is a risk of adverse impacts on an individual's human rights, health and safety, or legal rights. High-risk use cases identified in other countries include AI used in biometrics, employment, law enforcement, and critical infrastructure; or
  • its classification as "general-purpose AI" (GPAI): the mandatory guardrails will apply to all GPAI models, which the Proposals Paper defines as "an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems".

Next steps

Businesses that develop or deploy AI should consider adopting the voluntary guardrails to ensure best practice. The Australian Government suggests starting with guardrail one to establish core foundations for their business's use of AI.

Businesses should also consider making a submission in response to the Proposals Paper for the mandatory guardrails, which is open until Friday, 4 October 2024.

This Voluntary Standard and Proposals Paper come after a flurry of AI regulatory developments in 2023, as discussed in our 'Year in Review' alert. The Australian Government has flagged that the Voluntary Standard is a first iteration, which it will update over the next six months.


Copyright © 2024 Baker & McKenzie. All rights reserved.