Background
In 2023, ASIC conducted an analysis of generative AI and advanced data analytics (ADA) model usage across 23 licensees in the banking, credit, insurance and financial advice sectors. ASIC identified 621 use cases that directly or indirectly impacted consumers. Drawing on these findings, ASIC released Report 798, which offers insight into the most frequent uses of AI by licensed entities and identifies common deficiencies in their policy frameworks.
Findings from Report 798
Use of AI by licensed entities
Although the majority of in-use cases examined in ASIC's analysis relied on traditional machine-learning techniques, ASIC identified a significant uptick in the use of generative AI among use cases reported to be under development. Generative AI accounted for only 5% of in-use cases but made up 22% of use cases in development.
In both in-use and under-development cases where generative AI was employed, most were internal facing and involved supporting staff or increasing operational efficiency. Generative AI was most commonly used to:
- Generate first drafts of documents, such as marketing material or correspondence
- Summarise call transcripts or consumer correspondence
- Power chatbots for internal use and customer engagement
- Provide internal assistance
ASIC also identified that, across these licensed entities, the majority of AI use was internal facing and served to assist human decision making or increase efficiency.
Commonly identified gaps in governance frameworks related to AI use
Based on the information gained from its analysis, ASIC identified that although many licensees had documented policies and procedures for managing general risks, such as privacy and security, they did not have AI-specific policies in place. Many licensees also lacked arrangements for managing challenges particular to AI, such as transparency and contestability.
ASIC identified that some licensees were considering the risks of AI through a business-specific lens rather than from a consumer-focused perspective. These licensees failed to identify some AI-specific risks, such as algorithmic bias, or to fully consider the impact of AI use on their regulatory obligations.
ASIC indicated that licensees whose AI governance frameworks and policies were spread across multiple documents risked difficulties in overseeing AI use and complying with their own frameworks because of this fragmentation. Significantly, ASIC identified that 30% of use cases employed AI models developed by third parties; however, some of these licensees did not have adequate third-party risk management procedures in place.
Next steps for licensed entities
ASIC has urged licensees to review their existing regulatory obligations when using AI and to review their corporate governance arrangements to ensure they adequately align with these obligations. ASIC has released 11 questions for licensees to consider when assessing the robustness of their frameworks. These questions relate to:
- Determining where AI is used within an organisation and ensuring an AI inventory exists and is being adequately maintained
- Establishing a clear AI strategy
- Considering the ethical implications of AI use
- Establishing accountability for AI use and outcomes
- Clarifying conduct and regulatory compliance risks from AI, particularly as it relates to consumers
- Ensuring governance arrangements keep pace with current and planned future AI usage
- Confirming that AI policies and procedures are fit for purpose for current and anticipated future use
- Ensuring adequate technological and human resourcing
- Establishing clear human oversight for monitoring AI usage and procedures for when issues arise
- Managing the challenges of relying on third-party AI models
- Ensuring regular engagement with regulatory AI reform proposals
For any queries about Report 798, please contact our team.