Based on the Principles and referring to the rules and guidelines of the Bank for International Settlements (BIS), the International Organization of Securities Commissions (IOSCO), the EU, Singapore, and the US, the FSC released the draft "Guidelines for the Application of AI in the Financial Industry" ("Guidelines") on 28 December 2023 for a 60-day public consultation. After gathering opinions from the public and the industry, the FSC published the final version of the Guidelines on 20 June 2024.
Overview of the Guidelines
The aim of the Guidelines is to encourage the financial industry to adopt, use, and manage AI under controllable risk conditions. The Guidelines consist of a general section and six chapters. The general section addresses common issues, including the definitions of AI (AI systems and generative AI), the AI life cycle, risk assessment factors, risk-based implementation of the core principles, and third-party supervision. The six chapters mirror the six core principles in the Principles, providing guidance on implementing those principles and emphasizing the importance of risk control when a financial institution introduces and applies AI systems in its financial operations.
The FSC noted that the Guidelines constitute non-binding administrative guidance. The Guidelines also acknowledge that there may be various ways to achieve the goal of properly managing AI risks, and that financial institutions may adopt more cost-effective methods to achieve the same goal. If industry associations intend to establish self-regulatory rules for the use of AI, the Guidelines may serve as a reference. Until such self-regulatory rules are established, financial institutions are recommended to follow the Guidelines when applying AI. The Guidelines specifically provide that Taiwan branches of international groups may follow the group's existing rules if their AI systems are provided by the group.
Summary of the contents
In the general section, the life cycle of an AI system's application is categorized into four stages: (1) system planning and design; (2) data collection and input; (3) model building and validation; and (4) system deployment and monitoring. Throughout the Guidelines, stages one to three are collectively referred to as the "introduction" of AI, stage four alone is referred to as the "use" of AI, and the entire life cycle is referred to as the "application" of AI. Since AI systems may be developed by financial institutions themselves, commissioned from third parties, or acquired from other companies, not all financial institutions will go through all four stages. The Guidelines encourage financial institutions to identify the extent to which they can monitor the risks, and to allocate responsibility, via contracts or other means, to partner companies for monitoring the matters over which financial institutions have less control.
The general section also outlines the factors to be considered in risk assessment: (1) whether the AI system directly provides client services or has a material impact on operations; (2) the extent to which personal data is used; (3) the degree of AI autonomy; (4) the complexity of the AI system; (5) the extent and breadth of the impact on different stakeholders; and (6) the completeness of recourse options.
Furthermore, where AI systems are provided by third-party suppliers, the Guidelines suggest that financial institutions assess the knowledge, expertise, and experience of those suppliers and evaluate concentration risks (i.e., the risk arising from the financial institution's delegation of multiple matters to the same entity). Based on this assessment, appropriate oversight should be applied to third-party suppliers to prevent potential risks or issues. Financial institutions are also advised to enter into written contracts with third-party suppliers. If client data will be transmitted to third-party suppliers for processing, the contract with such suppliers should include data protection clauses that clearly cover encrypted data transmission, security of storage, and disposal of data after service termination. Moreover, financial institutions are advised to require third-party suppliers to retain written or digital records of their execution of the delegated matters. If the arrangement involves outsourcing of operations, financial institutions should comply with the outsourcing regulations applicable to their industry.
The general section is then followed by six chapters corresponding to the six core principles published earlier by the FSC, namely: (1) Establishment of governance and accountability mechanisms; (2) Emphasis on fairness and human-centric values; (3) Protection of privacy and customer rights; (4) Ensuring system robustness and security; (5) Implementation of transparency and explainability; and (6) Promotion of sustainable development. Each chapter elaborates on the principle at issue with guidelines on how to implement it, including across the four stages of the AI system life cycle.
Key changes in the final version
The final version clarifies throughout that the risk management methods in the Guidelines are merely "examples" for reference.
The final version adds more third-party supervision methods. For example, if client data will be transmitted to third-party suppliers for processing, it is advised that the contract include the data protection clauses mentioned above.
While the final version still suggests that financial institutions provide recourse options to consumers potentially affected by an unfavorable result, it adds that recourse options may be withheld where the AI system relates to anti-money laundering or fraud detection and is not suitable for providing recourse.
The final version also differentiates the methods applicable to AI systems developed by financial institutions themselves, developed by other companies commissioned by financial institutions, or simply acquired. For example, the explainability requirement is limited to AI systems developed by financial institutions themselves or by companies they commission, given that financial institutions may not be able to learn the operational details of acquired AI systems due to commercial confidentiality.
Future prospects
Even though the Guidelines are non-binding, they may become quasi-binding, as they are likely to be incorporated into the self-regulatory rules to be established by industry associations. The Bankers' Association of Taiwan published its self-regulatory rules on 6 May 2024.
It is therefore recommended that financial institutions begin considering how to incorporate the Guidelines into their AI governance models. For financial institutions operating in multiple jurisdictions, it is further recommended to check for potentially conflicting obligations between the Guidelines and the regulations of other jurisdictions to ensure global compliance.