Singapore: MAS publishes consultation paper on proposed guidelines on AI risk management for financial institutions

In brief

On 13 November 2025, the Monetary Authority of Singapore (MAS) published the Consultation Paper on Guidelines on Artificial Intelligence Risk Management (“Consultation Paper”).

The Consultation Paper sets out the MAS’ proposed Guidelines on Artificial Intelligence (AI) Risk Management (“AIRM Guidelines”), which are intended to complement the existing principles on Fairness, Ethics, Accountability and Transparency (FEAT). The AIRM Guidelines outline the MAS’ high-level supervisory expectations relating to AI risk management in financial institutions (FIs).

The AIRM Guidelines provide guidance on four key areas:

  1. AI oversight
  2. Key AI risk management systems, policies and procedures
  3. AI life cycle controls
  4. AI capability and capacity

The AIRM Guidelines are intended to apply to all FIs, which are expected to implement them in a manner commensurate with the size and nature of their activities.

If you have any feedback or comments for the MAS on the proposed AIRM Guidelines, you may submit them to the MAS directly. The consultation closes on 31 January 2026.

Alternatively, if you have any questions on how this may impact your business or operations, please feel free to reach out to us.

Overview: scope and application of the AIRM Guidelines

Definition of AI

For the purpose of the AIRM Guidelines, AI may refer to an AI model, system or use case, defined as follows:

  1. Model: a method or approach that converts assumptions and input data into outputs such as estimates, decisions or recommendations
  2. System: may comprise one or more models and other machine-based components
  3. Use case: the specific real-world context in which the model or system is applied

Calculators or tools whose outputs are solely based on predefined programming logic or rules would not be regarded as AI.
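
For illustration only, the short Python sketch below contrasts the two sides of this boundary. The function names, figures and logic are hypothetical and are not drawn from the Consultation Paper: a rules-based calculator would fall outside the proposed definition, whereas a method that derives its parameters from input data is the kind of approach the proposed definition of a "model" could capture.

    from typing import Callable

    def rules_based_fee(principal: float) -> float:
        # Output follows solely from predefined programming logic, so a
        # tool like this would not be regarded as AI under the proposal.
        return 50.0 if principal < 10_000 else principal * 0.005

    def fit_linear_model(xs: list[float], ys: list[float]) -> Callable[[float], float]:
        # Converts assumptions and input data into estimates by deriving
        # parameters from the data (ordinary least squares on one feature),
        # which the proposed definition of an AI "model" could capture.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return lambda x: (my - slope * mx) + slope * x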

Application of the AIRM Guidelines

The AIRM Guidelines are designed to be applied in a proportionate manner across all FIs of different sizes and risk profiles. FIs should implement the AIRM Guidelines in a manner commensurate with the size and nature of their activities, and the extent to which their use of AI could pose material risks.

At a minimum, all FIs should institute basic policies for the use of AI commensurate with their level of AI adoption. FIs using AI as an integrated part of their business processes should, at a minimum, establish frameworks, policies and procedures to oversee their use of AI; clearly identify and robustly assess the risk materiality of AI use cases, systems or models; and maintain adequate AI inventories.

The application of AI life cycle standards and controls, as well as the establishment of capabilities and technology infrastructure for the use of AI, may be calibrated based on their relevance to the AI use cases, systems or models in the FI. Where these aspects are relevant, the FI may implement them based on risk materiality.

FIs are also expected to regularly review the adequacy of their AI risk management efforts against ongoing AI developments and to address any new or accentuated AI risks those developments may give rise to.

AI oversight

To establish and oversee robust frameworks, policies and procedures to support AI risk management across the FI, the Board and senior management should do the following:

  1. Maintain effective oversight of AI-related risks, foster the appropriate risk culture for AI use, and ensure that the use of AI does not conflict with the FI's ability to meet other supervisory expectations
  2. Ensure that existing risk management frameworks, policies and practices across the organisation adequately identify, assess and address risks posed by AI
  3. Ensure consistent standards, clear accountability and robust coordination across the FI to manage AI risks

Key AI risk management systems, policies and procedures

FIs are expected to ensure that their AI risk management framework encompasses key systems, policies and procedures for the following aspects:

  1. AI identification: Systems, policies and procedures should be established to ensure consistent identification of AI usage across all relevant business and functional areas. Clear roles and responsibilities for AI identification should be assigned.
  2. AI inventory: An accurate and up-to-date inventory of AI use cases, systems or models should be established and maintained across the FI. The AI inventory should capture key attributes and be regularly reviewed so that the attributes captured remain relevant as newer AI technologies emerge.
  3. AI risk materiality assessment: An assessment methodology should be established to evaluate the risk materiality of an AI use case, system or model based on the nature of the business. The assessment should consider (i) the inherent risk materiality of an AI use case, system or model before appropriate risk management controls are applied and (ii) the residual risk materiality after those controls are applied. Risk dimensions relevant to the FI's context should be considered, covering at a minimum impact, complexity and reliance (an illustrative sketch follows this list).
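
As a purely illustrative sketch, an AI inventory entry and its risk materiality assessment might be modelled along the lines below. The attribute names, the 1-5 scale and the averaging and discount rules are hypothetical; only the inherent/residual distinction and the impact, complexity and reliance dimensions come from the Consultation Paper.

    from dataclasses import dataclass

    @dataclass
    class AIInventoryEntry:
        use_case: str                   # specific real-world context
        system: str                     # system or model identifier
        owner: str                      # accountable business/functional area
        third_party: bool               # uses or depends on third-party AI
        impact: int                     # 1 (low) to 5 (high)
        complexity: int                 # 1 (low) to 5 (high)
        reliance: int                   # 1 (low) to 5 (high)
        controls_discount: float = 0.0  # 0.0-1.0 mitigation from controls

        def inherent_risk(self) -> float:
            # Risk materiality before risk management controls are applied.
            return (self.impact + self.complexity + self.reliance) / 3

        def residual_risk(self) -> float:
            # Risk materiality after risk management controls are applied.
            return self.inherent_risk() * (1 - self.controls_discount)

    entry = AIInventoryEntry(
        use_case="retail credit scoring", system="scorecard-v2",
        owner="consumer lending", third_party=False,
        impact=4, complexity=3, reliance=5, controls_discount=0.4,
    )
    print(f"inherent={entry.inherent_risk():.2f}, residual={entry.residual_risk():.2f}")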

AI life cycle controls

FIs should plan for and implement robust controls covering the entire AI life cycle in a proportionate manner, and develop contingency plans for high-risk AI use cases, systems or models.

Key areas include:

  1. Data management: Data management controls should be implemented to ensure data used across the AI life cycle is fit for purpose and representative, of high quality and subject to robust data governance.
  2. Transparency and explainability: The extent of transparency and explainability required of an AI use case, system or model according to its assessed risk materiality should be determined and the relevant controls established accordingly. Key considerations may include the extent of reliance on the AI for the final decision and the level of impact on customer or risk management outcomes.
  3. Fairness: What constitutes “fair” outcomes should be defined and there should be appropriate controls to identify and mitigate harmful biases and discriminatory outcomes across the AI life cycle.
  4. Human oversight: Controls should be implemented and regularly reviewed to ensure appropriate human oversight over an AI use case, system or model across its life cycle.
  5. Third-party AI management: FIs should ensure that onboarding, development and deployment controls for third-party AI are adequate for the risk materiality of the use case, system or model that uses or depends on the third-party AI.
  6. Selection: The objectives and risks of each AI use case, system or model should be considered when selecting the AI algorithms and data features to use.
  7. Evaluation and testing: Relevant evaluation and testing proportionate to the assessed risk materiality of the AI use case, system or model should be conducted, and appropriate guardrails implemented where risks and limitations associated with the AI are identified during development.
  8. Technology and cybersecurity risks: AI systems should be secure, well governed and supported by appropriate controls to manage technology and cybersecurity risks.
  9. Reproducibility and auditability: The AI development process should be documented to enable reproducibility and auditability.
  10. Pre-deployment reviews: AI use cases, systems or models should be subject to (i) independent reviews prior to deployment and (ii) technology and cybersecurity reviews, to ensure that AI can be deployed into the production environment securely.
  11. Post-deployment monitoring and review: Robust controls for the ongoing monitoring of all deployed AI (including third-party AI used in the FI) should be developed and implemented, and aggregate risks across all AI use cases, systems and models periodically reviewed (an illustrative monitoring sketch follows this list).
  12. Change management: (i) Comprehensive and robust controls for managing changes to deployed AI and (ii) clear controls for eventual retirement or decommissioning of AI when they are no longer needed or exceed risk tolerance should be developed and implemented.
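
By way of a hedged example of one possible post-deployment monitoring control: the Consultation Paper does not prescribe any particular metric, but a common industry convention is to flag drift in a model's inputs or scores with a population stability index (PSI). The bins, the 0.2 alert threshold and the data below are hypothetical and chosen purely for illustration.

    import math

    def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
        # Population stability index between a baseline ("expected") and a
        # live ("observed") sample of a model input or score.
        lo = min(min(expected), min(observed))
        hi = max(max(expected), max(observed))
        width = (hi - lo) / bins or 1.0  # guard against zero-width bins

        def share(data: list[float], b: int) -> float:
            left, right = lo + b * width, lo + (b + 1) * width
            count = sum(1 for x in data
                        if left <= x < right or (b == bins - 1 and x == hi))
            return max(count / len(data), 1e-6)  # floor avoids log(0)

        return sum((share(observed, b) - share(expected, b))
                   * math.log(share(observed, b) / share(expected, b))
                   for b in range(bins))

    # Flag the deployed model for review when its inputs have shifted.
    baseline = [0.10, 0.20, 0.25, 0.30, 0.35, 0.40, 0.50, 0.60, 0.70, 0.80]
    live     = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95]
    if psi(baseline, live) > 0.2:  # 0.2 is a commonly used alert threshold
        print("significant drift detected: escalate for review")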

AI capability and capacity

  1. AI risk management capabilities: An FI should determine and ensure the necessary competence and proper conduct of personnel involved in developing an AI use case, system or model, and conduct regular reviews to ensure the relevant personnel are equipped with adequate capabilities and capacity for effective AI risk management.
  2. Technology infrastructure for AI: An FI should ensure that its technology infrastructure is adequate for an AI use case, system or model, considering the relevant technology risk management guidelines, notices and industry frameworks.

* * * * *


© 2025 Baker & McKenzie. Wong & Leow. All rights reserved. Baker & McKenzie. Wong & Leow is incorporated with limited liability and is a member firm of Baker & McKenzie International, a global law firm with member law firms around the world. In accordance with the common terminology used in professional service organizations, reference to a "principal" means a person who is a partner, or equivalent, in such a law firm. Similarly, reference to an "office" means an office of any such law firm. This may qualify as "Attorney Advertising" requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.

