Australia: ASIC urges licensees to review governance frameworks in light of increased AI usage

ASIC has published its first report analysing AI adoption by licensed entities, urging them to exercise caution and ensure that their governance frameworks remain up to date in the face of increased AI usage.

In brief

On 29 October 2024, ASIC issued a media release warning financial services and credit licensees to be aware of potential gaps in governance policies in light of increased artificial intelligence (AI) adoption by these licensed entities. Following an analysis of AI use cases, ASIC has issued Report 798, which highlights its findings and provides recommendations to entities on strengthening their risk management and governance frameworks. ASIC Chair Joseph Longo has said that governance frameworks must account for the planned use of AI so that governance remains adequate for the potential surge in consumer-facing AI. ASIC has urged entities to be proactive in addressing the governance issues posed by AI to mitigate risks and ensure that it is used ethically and responsibly.


Background

In 2023, ASIC conducted an analysis of generative AI and advanced data analytics (ADA) model usage across 23 licensees in the banking, credit, insurance and financial advice sectors. ASIC identified 624 use cases which directly or indirectly impacted consumers. Using these findings, ASIC released Report 798, which offers insight into the most frequent uses of AI by licensed entities and identifies common deficiencies in the policy frameworks of these entities.

Findings from Report 798

Use of AI by licensed entities

Although the majority of the in-use cases examined in ASIC's analysis relied on traditional machine-learning techniques, ASIC identified a significant uptick in the use of generative AI among use cases reported to be under development. While generative AI accounted for only 5% of in-use cases, it made up 22% of use cases in development.

In both in-use and under-development use cases where generative AI was employed, most were internal facing and involved supporting staff and increasing operational efficiency. Generative AI was most commonly used to:

  • Generate first drafts of documents, such as marketing material or correspondence
  • Summarise call transcripts or consumer correspondence
  • Power chatbots for internal use and customer engagement
  • Provide internal assistance

ASIC also identified that, across these licensed entities, the majority of AI use was internal facing and served to assist human decision-making or increase efficiency.

Commonly identified gaps in governance frameworks related to AI use

Based on the information gained from its analysis, ASIC identified that although many licensees had documented policies and procedures for managing general risks, such as privacy and security, they did not have specific AI-related policies in place. Many licensees also lacked arrangements to manage challenges particular to AI, such as transparency and contestability.

ASIC found that some licensees were considering the risks of AI through a business-specific lens rather than from a consumer-focused perspective. These licensees failed to identify certain AI-specific risks, such as algorithmic bias, or to fully consider the impact of AI use on their regulatory obligations.

ASIC indicated that licensees whose AI governance frameworks and policies were spread across multiple documents risked difficulties in overseeing AI use and in complying with those frameworks, owing to the fragmented nature of the documents. Significantly, ASIC identified that 30% of use cases employed AI models developed by third parties, yet some of these licensees did not have adequate third-party risk management procedures in place.

Next steps for licensed entities

ASIC has urged licensees to review their existing regulatory obligations when using AI and to ensure that their corporate governance arrangements adequately align with those obligations. ASIC has released 11 questions for licensees to consider when assessing the robustness of their frameworks. These questions relate to:

  • Determining where AI is used within an organisation and ensuring an AI inventory exists and is being adequately maintained
  • Establishing a clear AI strategy
  • Considering the ethical implications of AI use
  • Establishing accountability for AI use and outcomes
  • Clarifying conduct and regulatory compliance risks from AI, particularly as it relates to consumers
  • Ensuring governance arrangements keep pace with current and planned AI usage
  • Confirming that AI policies and procedures are fit for purpose for current and anticipated future use
  • Ensuring adequate technological and human resourcing
  • Establishing clear human oversight for monitoring AI usage and procedures for when issues arise
  • Managing the challenges of relying on third-party AI models
  • Ensuring regular engagement with regulatory AI reform proposals

For any queries about Report 798, please contact our team.


Copyright © 2025 Baker & McKenzie. All rights reserved. Ownership: This documentation and content (Content) is a proprietary resource owned exclusively by Baker McKenzie (meaning Baker & McKenzie International and its member firms). The Content is protected under international copyright conventions. Use of this Content does not of itself create a contractual relationship, nor any attorney/client relationship, between Baker McKenzie and any person. Non-reliance and exclusion: All Content is for informational purposes only and may not reflect the most current legal and regulatory developments. All summaries of the laws, regulations and practice are subject to change. The Content is not offered as legal or professional advice for any specific matter. It is not intended to be a substitute for reference to (and compliance with) the detailed provisions of applicable laws, rules, regulations or forms. Legal advice should always be sought before taking any action or refraining from taking any action based on any Content. Baker McKenzie and the editors and the contributing authors do not guarantee the accuracy of the Content and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the Content. The Content may contain links to external websites and external websites may link to the Content. Baker McKenzie is not responsible for the content or operation of any such external sites and disclaims all liability, howsoever occurring, in respect of the content or operation of any such external websites. Attorney Advertising: This Content may qualify as “Attorney Advertising” requiring notice in some jurisdictions. To the extent that this Content may qualify as Attorney Advertising, PRIOR RESULTS DO NOT GUARANTEE A SIMILAR OUTCOME. 
Reproduction: Reproduction of reasonable portions of the Content is permitted provided that (i) such reproductions are made available free of charge and for non-commercial purposes, (ii) such reproductions are properly attributed to Baker McKenzie, (iii) the portion of the Content being reproduced is not altered or made available in a manner that modifies the Content or presents the Content being reproduced in a false light and (iv) notice is made to the disclaimers included on the Content. The permission to re-copy does not allow for incorporation of any substantial portion of the Content in any work or publication, whether in hard copy, electronic or any other form or for commercial purposes.