On 5 November 2019, following feedback from the banking industry, the HKMA issued a set of guiding principles on consumer protection aspects relating to the use of big data analytics and artificial intelligence (BDAI).[1] The Monetary Authority of Singapore (MAS), having launched its Veritas initiative in November 2019,[2] has recently announced the commencement of the first phase of development of fairness metrics for credit risk scoring and customer marketing.[3] In implementing their AI strategies, financial institutions in both markets will continue to be held to a high standard and are expected to operate ethically and fairly. We consider each regulator's approach in more depth below.
Background
AI offers multiple potential benefits to banks including:
- improving the customer onboarding and service experience;
- increasing efficiency and accuracy in credit scoring, loan processing and risk management through task automation; and
- enhancing reputation and reducing costs through reduced delay.
To achieve these aims, AI needs access to significant amounts of data, including transaction history, market trends and customer credit history. Storing this data has become significantly cheaper and easier with increasing access to private and public cloud storage, which can be scaled up or down as demand fluctuates, making storage a variable rather than a fixed cost. The open API networks being developed in Hong Kong and Singapore will enable multiple parties to access the same information, innovate and compete whilst, depending on the specific regime, giving customers back ownership and control of their banking information.

An appropriate AI tool, allied to secure information sharing enabled by open APIs, can give a customer access to a more personalised combination of offerings from third-party service providers. Products can be matched to a customer based on their spending history, while credit scoring accuracy and risk management can be improved by tracking a customer's online presence for negative news and information. AI and biometric authentication tools relying on customers' iris, voice or facial scans are also being used as part of efforts to prevent fraud.

However, there is an inherent risk that if AI is not implemented and monitored appropriately, customers may be disenfranchised, unfairly locked out of financial services or offered unsuitable products, leading to financial loss and undermining existing protections. Personal data, particularly biometric information, can be misused if it is not protected appropriately, and the ever-expanding sophistication of AI may not be adequately captured or contemplated by existing consents. The HKMA and MAS have sought to strike a balance, enabling these newer technologies to be explored and implemented while enhancing or clarifying the application of their respective regulatory regimes.
Hong Kong - Clear Guidance
The HKMA guiding principles on consumer protection aspects relating to BDAI focus on governance and accountability, fairness, transparency and disclosure, and data privacy and protection. The HKMA expects banks to adopt a risk-based approach commensurate with the risks involved in their BDAI applications.
Governance and accountability
- The board and senior management of authorised institutions (AIs) should remain accountable for all BDAI-driven decisions and processes, including establishing an appropriate governance and accountability framework and ensuring that BDAI models (including any algorithms) are explainable, properly validated and understood by the AI.
- The AI's board and senior management should ensure that consumer protection principles set out in various regulatory codes and industry charters are adhered to and that BDAI applications are consistent with the AI's corporate values and ethical standards.
Fairness
- AIs should ensure BDAI models produce objective and fair outcomes for customers, including ensuring that customers' access to basic banking services is not unjustifiably denied and that their financial capabilities, situation and needs are taken into account.
- The models for BDAI-driven decisions should be robust, and manual intervention to mitigate irresponsible lending decisions should be possible, where necessary.
Transparency and disclosure
- AIs should provide appropriate transparency to customers regarding BDAI applications through proper disclosure, including when BDAI will be used, the associated risks, and explanations of the types of data used and the factors/models impacting BDAI-driven decisions.
- AIs should provide proper disclosure to customers to help them understand the AI's approach to using customer data, enable customers to request reviews of decisions made by BDAI applications, and ensure that any complaint handling and redress mechanism for BDAI-based products and services is accessible and fair.
- Appropriate consumer education should be carried out to enhance consumers' understanding of BDAI technology, ensuring that customer communications are clear and simple to understand.
Data privacy and protection
- AIs should implement effective protection measures to safeguard customer data, ensuring compliance with the Personal Data (Privacy) Ordinance (PDPO) and related principles and codes of practice approved by the Privacy Commissioner for Personal Data (PCPD), including good practices issued by the PCPD relating to BDAI and Fintech, as well as other applicable local and overseas statutory or regulatory requirements.
- AIs should consider embedding data protection in the design of a product or system from the outset and storing only the minimum amount of data for the minimum amount of time.
- AIs should ensure that any consent for the collection and use of personal data relating to a banking product or service powered by BDAI technology is as clear as possible in the interests of achieving informed consent.
Singapore - The Veritas Framework
In November 2019, the MAS announced that it would work with financial industry partners to create the Veritas framework. The framework aims to provide financial institutions with a means of verifying that the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT Principles) in the Use of Artificial Intelligence and Data Analytics (AIDA), released in November 2018,[4] have been incorporated into their AIDA solutions. A copy of our client alert on the FEAT Principles may be found here.[5] The Veritas framework will comprise open-source tools that can be applied to different business lines, such as retail banking and corporate finance, and in different markets. The aim of Veritas and the FEAT Principles is to help foster a trusted environment that enables progress and AI adoption within financial services.
The MAS has now confirmed that the first phase of development of the Veritas framework will commence with fairness metrics in credit risk scoring and customer marketing. The key questions for the metrics to address are:
- whether the credit scoring process creates any systemic disadvantage for particular individuals or groups; and
- whether customer data and products are analysed correctly so that the customer is recommended the right product at the right time.
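The first of these questions concerns group fairness in credit decisions. As an illustrative sketch only (the actual Veritas metrics had not been published at the time of writing, and the group labels and data below are hypothetical), one commonly used measure is demographic parity: comparing approval rates across customer groups and flagging large disparities:

```python
# Illustrative demographic-parity check on credit approval outcomes.
# This is NOT the Veritas methodology, merely one well-known fairness metric.

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns the approval rate for each group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of lowest to highest group approval rate (1.0 = parity)."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Hypothetical outcomes for two customer groups, "A" and "B".
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates)                               # group A approves 2/3, group B 1/3
print(round(disparity_ratio(rates), 2))    # 0.5 -- well short of parity
```

A regulator-facing metric would of course be far more sophisticated (controlling for legitimate risk factors, sample sizes and intersectional groups), but the basic shape, comparing outcomes across groups, is the same.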
Whilst the Veritas consortium now has 25 members, the initial development work will be undertaken by two core teams. The metrics will be documented in a white paper and subsequently distributed as open-source code by the end of 2020. It is envisaged that individual firms will be free to implement the metrics in their preferred way through integration with their existing IT environments. The APIX platform will also be used to facilitate access by financial institutions and Fintechs to providers that have validated their AIDA solutions.
Next Steps
Notwithstanding the benefits that AI may offer to financial institutions, regulators are clearly concerned to ensure that consumers suffer no detriment from the introduction of these solutions. The release of guidance by the HKMA and the commencement of the Veritas framework by the MAS represent important steps in facilitating technological growth whilst maintaining cornerstone protections for consumers. Financial institutions will continue to be held to high cybersecurity and data protection standards to ensure the information they gather cannot be misused. They will also need to demonstrate that the AI solutions they implement meet suitability requirements for advice, do not arbitrarily exclude customers from access to financial services, and do not inadvertently disclose client information to unapproved parties. Failure to meet these expectations will result in regulatory consequences and a loss of consumer trust.
Financial institutions will need to track ongoing developments in this area and consider:
- whether their proposed AI models are contemplated by existing client terms and conditions;
- to what extent biometric data, including voice and facial features, and other personal information is required, and with whom it needs to be shared;
- whether cybersecurity protections are sufficient; and, most importantly,
- how to continually monitor and test any AI systems to ensure they operate in a fair, ethical manner and deliver suitable and appropriate recommendations for customers.
[1] https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2019/20191105e1.pdf
[2] https://www.mas.gov.sg/news/media-releases/2019/mas-partners-financial-industry-to-create-framework-for-responsible-use-of-ai
[3] https://www.mas.gov.sg/news/media-releases/2020/fairness-metrics-to-aid-responsible-ai-adoption-in-financial-services
[4] https://www.mas.gov.sg/~/media/MAS/News%20and%20Publications/Monographs%20and%20Information%20Papers/FEAT%20Principles%20Final.pdf
[5] https://www.bakermckenzie.com/-/media/files/insight/publications/2019/02/al_consultationonsingaporemodelai_feb2019.pdf
***
Baker McKenzie Wong & Leow is a member firm of Baker & McKenzie International, a global law firm with member law firms around the world. In accordance with the common terminology used in professional service organizations, reference to a "partner" means a person who is a partner or equivalent in such a law firm. Similarly, reference to an “office” means an office of any such law firm. This may qualify as "Attorney Advertising" requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.