Singapore: IMDA and AI Verify Foundation finalize Model AI Governance Framework

In brief

Following the publication of the proposed Framework in January 2024 (please refer to our earlier client alert here for more details) and the feedback received from various stakeholders, the finalized Model AI Governance Framework for Generative AI ("Framework") was released on 30 May 2024 by the Infocomm Media Development Authority (IMDA) and the AI Verify Foundation.

The Framework expands upon the Model AI Governance Framework last updated in 2020.



In more detail

The Framework serves as baseline guidance for key stakeholders, such as policymakers, industry, researchers, and the general public, to adopt best practices in relation to pertinent issues posed by generative AI.

The Framework draws from insights and conversations with key jurisdictions, international organizations, research institutions and leading AI companies. It proposes nine dimensions to be looked at as a whole to foster a trusted AI ecosystem, which we explain in further detail below.

The nine dimensions are as follows:

  • Accountability: Accountability is essential to ensure that organizations along the AI development chain remain responsible and accountable to end users. The Framework proposes a delegation of responsibilities that draws inspiration from cloud and software development processes, differentiating between responsibilities allocated upfront in the development process (ex ante) and the rectification of issues discovered thereafter (ex post).
  • Data: Given the importance of data as an element of AI model and application development, the Framework provides guidance on using data in AI model development. Data fed to models should be of good quality and originate from trusted sources. Policymakers should also foster open dialogue with relevant stakeholders to ensure that the rights of copyright owners are balanced with data accessibility concerns.
  • Trusted Development and Deployment: There should be meaningful transparency around the baseline safety and hygiene measures undertaken by companies. Organizations should strive to adopt best practices across the AI development lifecycle (i.e., development, disclosure and evaluation).
  • Incident Reporting: Incident reporting is an established practice that applies equally to AI. Organizations should adopt vulnerability reporting to proactively discover vulnerabilities in their software, and ensure that adequate internal processes exist to report incidents for timely notification and remediation.
  • Testing and Assurance: Organizations are encouraged to engage third-party testing and assurance, as well as external audits, to provide transparency and foster greater credibility with end users.
  • Security: Organizations should adopt "security-by-design" as a fundamental security concept for AI. This entails designing for security at every phase of the systems development life cycle (including development, evaluation, operations and maintenance). New tools and security safeguards may also be developed, including input moderation tools and filters, as well as digital forensics tools specifically for generative AI.
  • Content Provenance: Technical solutions are needed to reduce the potential for harm posed by realistic synthetic content created by generative AI. Examples include digital watermarking and cryptographic provenance to identify and flag content created with or modified by AI.
  • Safety and Alignment Research & Development (R&D): Businesses should accelerate research and development in model safety and alignment to drive research for the global good. To make R&D on safety and evaluation mechanisms more impactful, this should involve global cooperation to better leverage limited talent and resources.
  • AI for Public Good: Responsible AI also means empowering individuals and businesses to thrive in an AI-enabled world. To reap the exponential benefits of AI, the imperative is to democratize access to technology, support public sector AI adoption, upskill workforces, and develop sustainable technologies.

Key takeaways

  • Through its nine dimensions, the Framework highlights pressing and growing concerns in AI governance, and sets out a recommended approach to balancing those concerns with innovation through generative AI.
  • To foster a trusted ecosystem in which generative AI can be safely harnessed, all stakeholders should use the Framework as a reference point and adopt the best practices it contains.

* * * * *

If you wish to speak with us on any of the dimensions or issues raised above, please reach out to us.

Relevant links:

Model AI Governance Framework for Generative AI issued by AI Verify Foundation and Infocomm Media Development Authority on 30 May 2024.

* * * * *


© 2024 Baker & McKenzie.Wong & Leow. All rights reserved. Baker & McKenzie.Wong & Leow is incorporated with limited liability and is a member firm of Baker & McKenzie International, a global law firm with member law firms around the world. In accordance with the common terminology used in professional service organizations, reference to a "principal" means a person who is a partner, or equivalent, in such a law firm. Similarly, reference to an "office" means an office of any such law firm. This may qualify as "Attorney Advertising" requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.


Copyright © 2024 Baker & McKenzie. All rights reserved. Ownership: This documentation and content (Content) is a proprietary resource owned exclusively by Baker McKenzie (meaning Baker & McKenzie International and its member firms). The Content is protected under international copyright conventions. Use of this Content does not of itself create a contractual relationship, nor any attorney/client relationship, between Baker McKenzie and any person. Non-reliance and exclusion: All Content is for informational purposes only and may not reflect the most current legal and regulatory developments. All summaries of the laws, regulations and practice are subject to change. The Content is not offered as legal or professional advice for any specific matter. It is not intended to be a substitute for reference to (and compliance with) the detailed provisions of applicable laws, rules, regulations or forms. Legal advice should always be sought before taking any action or refraining from taking any action based on any Content. Baker McKenzie and the editors and the contributing authors do not guarantee the accuracy of the Content and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the Content. The Content may contain links to external websites and external websites may link to the Content. Baker McKenzie is not responsible for the content or operation of any such external sites and disclaims all liability, howsoever occurring, in respect of the content or operation of any such external websites. Attorney Advertising: This Content may qualify as “Attorney Advertising” requiring notice in some jurisdictions. To the extent that this Content may qualify as Attorney Advertising, PRIOR RESULTS DO NOT GUARANTEE A SIMILAR OUTCOME. 
Reproduction: Reproduction of reasonable portions of the Content is permitted provided that (i) such reproductions are made available free of charge and for non-commercial purposes, (ii) such reproductions are properly attributed to Baker McKenzie, (iii) the portion of the Content being reproduced is not altered or made available in a manner that modifies the Content or presents the Content being reproduced in a false light and (iv) notice is made to the disclaimers included on the Content. The permission to re-copy does not allow for incorporation of any substantial portion of the Content in any work or publication, whether in hard copy, electronic or any other form or for commercial purposes.