In depth
On 24 September 2024, OSFI and FCAC published a report on AI uses and risks at federally regulated financial institutions (hereafter, the "AI Report"), which included findings from a voluntary questionnaire issued to financial institutions in December 2023 seeking feedback on their AI and quantum computing preparedness.
The results from the questionnaire revealed that the use of AI at financial institutions is increasing rapidly, with 70% of financial institutions expecting to use AI by 2026. In fact, the AI Report found that financial institutions are now using AI for more critical use cases, such as pricing, underwriting, claims management, trading, investment decisions and credit adjudication. In addition, financial institutions are facing competitive pressures to adopt AI, giving rise to further potential business or strategic risks. As such, according to the AI Report, it is critical that financial institutions remain vigilant and maintain adaptable risk and control frameworks to address both internal and external risks arising from AI.
Amplified risks from AI adoption
The AI Report outlined key risks that arise for financial institutions from the use of AI, which can stem from both internal AI adoption and the use of AI by external actors.
- Data governance risks were identified as a top concern associated with AI usage. The AI Report noted that addressing AI data governance is crucial, whether through general data governance frameworks, specific AI data governance, or model risk management frameworks.
- Model risk and explainability were identified as key risks, as risks associated with AI models are elevated due to their complexity and opacity. The AI Report noted that financial institutions must ensure that all stakeholders – including users, developers and control functions – are involved in the design and implementation of AI models. In addition, financial institutions need to ensure an appropriate level of explainability, both to inform internal users and customers and for compliance and governance purposes.
- Legal, ethical and reputational risks are a challenge for financial institutions implementing AI systems. The AI Report recommended, among other things, that financial institutions take a comprehensive approach to managing the risks associated with AI, as narrow adherence to jurisdictional legal requirements could expose a financial institution to reputational risks. The AI Report also noted that consumer privacy and consent should be prioritized.
- Third-party risks and reliance on third-party providers for AI models and systems were also noted to be formidable challenges, including when seeking to ensure that a third party complies with a financial institution's internal standards.
- Operational and cybersecurity risks can also be amplified through AI adoption. The AI Report noted that as financial institutions integrate AI into their processes, procedures and controls, operational risks will increase. In addition, cyber risks can stem from using AI tools internally, and can be elevated through complex relationships with third parties. Without proper security measures in place, the use of AI could increase the risk of cyber attacks. Accordingly, the AI Report warned that financial institutions must apply sufficiently robust safeguards around their AI systems to ensure resiliency.
- Business and financial risks were noted to include risks associated with financial and competitive pressures for financial institutions that do not adopt AI. Among other things, OSFI and FCAC warned that if AI begins to disrupt the financial industry, firms that lag in adopting AI may find it difficult to respond without having in-house AI expertise and knowledge.
- Emerging credit, market and liquidity risks. The AI Report noted that AI could have macroeconomic impacts, such as on unemployment levels, that could lead to credit losses. In addition, as adoption increases, AI models could have significant impacts on asset price volatility and the movement of deposits between financial institutions.
Recommendations for financial institutions in AI risk management
In response to the risks identified in the AI Report, OSFI and FCAC made a number of recommendations to help financial institutions manage or mitigate such risks within their organizations:
- Financial institutions need to conduct rigorous risk identification and assessment, and establish multidisciplinary and diverse teams to deal with AI use within their organizations.
- Financial institutions must be open, honest and transparent in dealing with their customers when it comes to both AI and data.
- Financial institutions should plan and define an AI strategy, even if they do not plan to adopt AI in the short term.
- As AI presents a transverse risk, its adoption must be addressed comprehensively, with risk management standards that integrate all related risks. Boards and oversight bodies of financial institutions must be engaged to ensure that their organizations are properly prepared for AI outcomes, balancing both the benefits and risks of AI adoption.
Next steps
In the AI Report, OSFI and FCAC highlighted their plans to respond dynamically and proactively to the evolving risk environment surrounding AI, as the uncertain impacts of AI represent a challenge for regulators as well. OSFI and FCAC, in partnership with other industry participants, will also aim to build upon prior work on AI to establish more specific best practices.
On 2 October 2024, after issuing the AI Report, OSFI published a semi-annual update noting that, while the risks it had previously identified in its Annual Risk Outlook (Fiscal Year 2024-2025) persisted, integrity and security risks continue to "intensify and multiply", particularly due to, among other things, the risk of artificial intelligence, which has "risen in significance since the release of the Annual Risk Outlook". While its assessment of the impact of AI adoption on the risk landscape, and its interrelation with other risks, remains ongoing, OSFI noted that it plans to strengthen existing guidelines to support the mitigation of AI-related risks. To that end, as a first step, it will issue an updated Model Risk Management guideline in the summer of 2025, which will include greater clarity on expectations around AI models.
For more information on AI in financial services, please visit our landing page, participate in our events and talk with us.
1 OSFI and FCAC are federal regulators in the banking and financial services sector in Canada.