Canada: Amplified risks for financial institutions from AI, OSFI and FCAC Report

In brief

On 24 September 2024, following an in-depth consultation with industry participants, the Office of the Superintendent of Financial Institutions (OSFI) and the Financial Consumer Agency of Canada (FCAC)1 published their findings on the use and adoption of artificial intelligence (AI) by federally regulated financial institutions. The report highlighted that a significant majority of financial institutions expect to adopt AI by 2026, and set out a number of key risks that AI usage creates for financial institutions. OSFI and FCAC emphasized the need for financial institutions to adopt a dynamic and responsive risk management system for AI, and confirmed their commitment to work towards more specific best practices for industry participants.


In depth

On 24 September 2024, OSFI and FCAC published a report on AI uses and risks at federally regulated financial institutions (hereafter, the "AI Report"), which included findings from a voluntary questionnaire issued to financial institutions in December 2023 seeking feedback on their AI and quantum computing preparedness.

The results of the questionnaire revealed that the use of AI at financial institutions is increasing rapidly, with 70% of financial institutions expecting to use AI by 2026. The AI Report found that financial institutions are now using AI for more critical use cases, such as pricing, underwriting, claims management, trading, investment decisions and credit adjudication. In addition, financial institutions are facing competitive pressure to adopt AI, creating further potential business and strategic risks. As such, according to the AI Report, it is critical that financial institutions remain vigilant and maintain adaptable risk and control frameworks to address both internal and external risks from AI.

Amplified risks from AI adoption

The AI Report outlined key risks that AI use creates for financial institutions, which can arise both from internal AI adoption and from the use of AI by external actors.

  1. Data governance risks were identified as a top concern regarding AI usage. The AI Report noted that addressing AI data governance is crucial, whether through general data governance frameworks, AI-specific data governance frameworks or model risk management frameworks.
  2. Model risk and explainability were identified as key risks, as the risks associated with AI models are elevated by their complexity and opacity. The AI Report noted that financial institutions must ensure that all stakeholders – including users, developers and control functions – are involved in the design and implementation of AI models. In addition, financial institutions need to ensure an appropriate level of explainability, both to inform internal users and customers and for compliance and governance purposes.
  3. Legal, ethical and reputational risks are a challenge for financial institutions implementing AI systems. The AI Report recommended, among other things, that financial institutions take a comprehensive approach to managing the risks associated with AI, as narrow adherence to jurisdictional legal requirements could expose a financial institution to reputational risks. The report also noted that consumer privacy and consent should be prioritized.
  4. Third-party risks and reliance on third-party providers for AI models and systems were also noted to be formidable challenges, including when seeking to ensure that a third party complies with a financial institution's internal standards.
  5. Operational and cybersecurity risks can also be amplified through AI adoption. The AI Report noted that as financial institutions integrate AI into their processes, procedures and controls, operational risks will increase. In addition, cyber risks can stem from using AI tools internally, and can be elevated through complex relationships with third parties. Without proper security measures in place, the use of AI could increase the risk of cyber attacks. Accordingly, the AI Report warned that financial institutions must apply sufficiently robust safeguards around their AI systems to ensure resiliency.
  6. Business and financial risks were noted to include those arising from the financial and competitive pressures on financial institutions that do not adopt AI. Among other things, OSFI and FCAC warned that if AI begins to disrupt the financial industry, firms that lag in adopting AI may find it difficult to respond without in-house AI expertise and knowledge.
  7. Emerging credit, market and liquidity risks were also identified. The AI Report noted that AI could have macroeconomic impacts on areas such as unemployment levels, which in turn could lead to credit losses. In addition, as adoption increases, AI models could have significant impacts on asset price volatility and the movement of deposits between financial institutions.

Recommendations for financial institutions in AI risk management

In response to the risks identified in the AI Report, OSFI and FCAC made the following recommendations to help financial institutions manage or mitigate these risks within their organizations:

  1. Financial institutions need to conduct rigorous risk identification and assessment, and establish multidisciplinary, diverse teams to deal with AI use within their organizations.
  2. Financial institutions must be open, honest and transparent in dealing with their customers when it comes to both AI and data.
  3. Financial institutions should define and plan an AI strategy, even if they do not intend to adopt AI in the short term.
  4. As a transverse risk, AI adoption must be addressed comprehensively, with risk management standards that integrate all related risks. Boards and oversight bodies of financial institutions must be engaged to ensure that their organizations are properly prepared for AI outcomes, balancing both the benefits and risks of AI adoption.

Next steps

In the AI Report, OSFI and FCAC highlighted their plans to respond dynamically and proactively to the evolving risk environment surrounding AI, as the uncertain impacts of AI represent a challenge for regulators as well. OSFI and FCAC, in partnership with industry participants, will also aim to build on prior work on AI to establish more specific best practices.

On 2 October 2024, after issuing the AI Report, OSFI published a semi-annual update noting that, while the risks it had previously identified in its Annual Risk Outlook (Fiscal Year 2024-2025) persisted, integrity and security risks continue to "intensify and multiply", particularly due to, among other things, the risks of artificial intelligence, which has "risen in significance since the release of the Annual Risk Outlook". While its assessment of the impact of AI adoption on, and its interrelation with, the risk landscape remains ongoing, OSFI noted that it plans to strengthen existing guidelines to support the mitigation of AI-related risks. To that end, as a first step, it will issue an updated Model Risk Management guideline in the summer of 2025, which will provide greater clarity on expectations around AI models.

For more information on AI in financial services, please visit our landing page, participate in our events and talk with us.


1 OSFI and FCAC are federal regulators in the banking and financial services sector in Canada.

