The AIRM Guidelines are intended to apply to all FIs, and FIs are expected to implement them in a manner commensurate with the size and nature of their activities.
If you have any feedback or comments on the proposals concerning the AIRM Guidelines, you may submit them to the MAS via this link. The consultation will close on 31 January 2026.
Alternatively, if you have any questions on how this may impact your business or operations, please feel free to reach out to us.
Overview: scope and application of the AIRM Guidelines
Definition of AI
For the purpose of the AIRM Guidelines, AI may refer to an AI model, system or use case, defined as follows:
- Model: method or approach that converts assumptions and input data into outputs such as estimates, decisions or recommendations
- System: can comprise one or more models and other machine-based components
- Use case: specific real-world context that the model or system is applied to
Calculators or tools whose outputs are solely based on predefined programming logic or rules would not be regarded as AI.
Application of the AIRM Guidelines
The AIRM Guidelines are designed to be applied in a proportionate manner across all FIs of different sizes and risk profiles. FIs should implement the AIRM Guidelines in a manner commensurate with the size and nature of their activities, and the extent to which their use of AI could pose material risks.
At a minimum, all FIs should institute basic policies for the use of AI commensurate with their level of AI adoption. FIs using AI as an integrated part of their business processes should, at a minimum, establish frameworks, policies and procedures to oversee their use of AI, apply clear identification and robust risk materiality assessments of AI use cases, systems or models, and maintain adequate AI inventories.
The application of AI life cycle standards and controls, as well as the establishment of capabilities and technology infrastructure for the use of AI, may be calibrated based on their relevance to the AI use cases, systems or models in the FI. Where these aspects are relevant to the AI use case, system or model, the FI may implement them based on risk materiality.
FIs are also expected to regularly review the adequacy of their AI risk management efforts against AI developments and address new or accentuated AI risks that may arise due to these developments.
AI oversight
To establish and oversee robust frameworks, policies and procedures to support AI risk management across the FI, the Board and senior management should do the following:
- Maintain effective oversight of AI-related risks, foster the appropriate risk culture for AI use, and ensure that the use of AI does not conflict with the ability to meet other supervisory expectations
- Ensure that existing risk management frameworks, policies and practices across the organisation adequately identify, assess and address risks posed by AI
- Ensure consistent standards, clear accountability and robust coordination across the FI to manage AI risks
Key AI risk management systems, policies and procedures
FIs are expected to ensure that their AI risk management framework encompasses key systems, policies and procedures for the following aspects:
- AI identification: Systems, policies and procedures should be established to ensure consistent identification of AI usage across all relevant business and functional areas. Clear roles and responsibilities for AI identification should be assigned.
- AI inventory: An accurate and up-to-date inventory of AI use cases, systems or models should be established and maintained across the FI. The AI inventory should capture key attributes and be regularly reviewed so that the attributes captured remain relevant as newer AI technologies emerge (a sketch of a possible inventory entry follows this list).
- AI risk materiality assessment: An assessment methodology to evaluate the risk materiality of an AI use case, system or model based on the nature of the business should be established. The assessment should consider (i) the inherent risk materiality of an AI use case, system or model before appropriate risk management controls are applied and (ii) the residual risk materiality after risk management controls are applied. The assessment should also consider risk dimensions relevant to the FI's context, covering at a minimum impact, complexity and reliance (an illustrative scoring sketch also follows this list).
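
For illustration only, the sketch below shows what a single AI inventory entry might capture, expressed as a Python data structure. The three forms of AI (model, system, use case) come from the Guidelines' definitions; every attribute name here is a hypothetical assumption, as the AIRM Guidelines do not prescribe an inventory schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class AIKind(Enum):
    """The three forms of AI recognised in the AIRM Guidelines."""
    MODEL = "model"        # converts assumptions and input data into outputs
    SYSTEM = "system"      # one or more models plus other machine-based components
    USE_CASE = "use_case"  # specific real-world context the model or system is applied to


@dataclass
class AIInventoryEntry:
    """One record in an FI's AI inventory (all attribute names are hypothetical)."""
    entry_id: str
    name: str
    kind: AIKind
    business_area: str          # owning business or functional area
    owner: str                  # accountable individual or team
    third_party: bool           # whether the AI is sourced from a third party
    risk_materiality: str       # e.g. "low" / "medium" / "high" after assessment
    deployed_on: date | None = None
    last_reviewed: date | None = None
    notes: list[str] = field(default_factory=list)
```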
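
Similarly, the inherent-versus-residual distinction could be read as a simple scoring exercise. The minimal sketch below assumes a 1 (low) to 5 (high) score per dimension, equal weighting of the three minimum dimensions named in the Guidelines, and a single control-effectiveness factor; none of these scales, weights or formulas are prescribed by the Guidelines.

```python
def materiality_score(impact: int, complexity: int, reliance: int) -> float:
    """Combine the three minimum risk dimensions named in the AIRM Guidelines.

    Each score is assumed to be on a 1 (low) to 5 (high) scale; the equal
    weighting is an illustrative assumption, not a prescription of the Guidelines.
    """
    return (impact + complexity + reliance) / 3


def residual_score(inherent: float, control_effectiveness: float) -> float:
    """Residual risk materiality after risk management controls are applied.

    control_effectiveness is a hypothetical 0.0-1.0 factor expressing how much
    of the inherent risk the FI's controls mitigate.
    """
    return inherent * (1 - control_effectiveness)


# Example: a high-impact, moderately complex use case the FI relies on heavily.
inherent = materiality_score(impact=5, complexity=3, reliance=4)  # 4.0
residual = residual_score(inherent, control_effectiveness=0.5)    # 2.0
print(f"inherent={inherent:.1f}, residual={residual:.1f}")
```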
AI life cycle controls
FIs should plan for and implement robust controls covering the entire AI life cycle in a proportionate manner, and develop contingency plans for high-risk AI use cases, systems or models.
Key areas include:
- Data management: Data management controls should be implemented to ensure data used across the AI life cycle is fit for purpose and representative, of high quality and subject to robust data governance.
- Transparency and explainability: The extent of transparency and explainability required of an AI use case, system or model according to its assessed risk materiality should be determined and the relevant controls established accordingly. Key considerations may include the degree of reliance on AI for the final decision and the level of impact on customer or risk management outcomes.
- Fairness: What constitutes “fair” outcomes should be defined and there should be appropriate controls to identify and mitigate harmful biases and discriminatory outcomes across the AI life cycle.
- Human oversight: Controls should be implemented and regularly reviewed to ensure appropriate human oversight over an AI use case, system or model across its life cycle.
- Third-party AI management: FIs should ensure that onboarding, development and deployment controls for third-party AI are adequate for the risk materiality of the use case, system or model that uses or depends on the third-party AI.
- Selection: The objectives and risks of each AI use case, system or model should be considered when selecting the AI algorithms to use and the data features to rely on.
- Evaluation and testing: Relevant evaluation and testing proportionate to the assessed risk materiality of the AI use case, system or model should be conducted, and appropriate guardrails implemented where risks and limitations associated with the AI are identified during development.
- Technology and cybersecurity risks: AI systems should be secure, well governed and supported by appropriate controls to manage technology and cybersecurity risks.
- Reproducibility and auditability: The AI development process should be documented to enable reproducibility and auditability.
- Pre-deployment reviews: AI use cases, systems or models should be subject to (i) independent reviews prior to deployment and (ii) technology and cybersecurity reviews, to ensure that AI can be deployed into the production environment securely.
- Post-deployment monitoring and review: Robust controls for the ongoing monitoring of all deployed AI (including third-party AI used in the FI) should be developed and implemented, and aggregate risks across all AI use cases, systems and models periodically reviewed.
- Change management: FIs should develop and implement (i) comprehensive and robust controls for managing changes to deployed AI and (ii) clear controls for the eventual retirement or decommissioning of AI that is no longer needed or exceeds the FI's risk tolerance.
AI capability and capacity
- AI risk management capabilities: An FI should determine and ensure the necessary competence and proper conduct of personnel involved in developing an AI use case, system or model, and conduct regular reviews to ensure the relevant personnel are equipped with adequate capabilities and capacity for effective AI risk management.
- Technology infrastructure for AI: An FI should ensure that its technology infrastructure is adequate for an AI use case, system or model, considering the relevant technology risk management guidelines, notices and industry frameworks.
* * * * *

© 2025 Baker & McKenzie. Wong & Leow. All rights reserved. Baker & McKenzie. Wong & Leow is incorporated with limited liability and is a member firm of Baker & McKenzie International, a global law firm with member law firms around the world. In accordance with the common terminology used in professional service organizations, reference to a "principal" means a person who is a partner, or equivalent, in such a law firm. Similarly, reference to an "office" means an office of any such law firm. This may qualify as "Attorney Advertising" requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.