In more detail
The Framework serves as baseline guidance for key stakeholders — policymakers, industry, researchers, and the general public — to adopt best practices that address the pertinent issues posed by generative AI.
The Framework draws on insights and conversations with key jurisdictions, international organizations, research institutions and leading AI companies. It proposes nine dimensions that should be considered holistically to foster a trusted AI ecosystem, each of which we explain in further detail below.
| Dimension | Details |
| --- | --- |
| Accountability | Accountability is essential to ensure that organizations along the AI development chain remain responsible and accountable to end users. The Framework proposes a delegation of responsibilities that draws inspiration from cloud and software development processes, differentiating between responsibilities upfront in the development process (ex-ante) and rectification of issues discovered thereafter (ex-post). |
| Data | Given the importance of data as an element of AI model and application development, the Framework provides guidance on using data in AI model development. Data fed to models should be of good quality and originate from trusted sources. Policymakers should also foster open dialogue with relevant stakeholders to ensure that the rights of copyright owners are balanced with data accessibility concerns. |
| Trusted Development and Deployment | There should be meaningful transparency around baseline safety and hygiene measures undertaken by companies. Organizations should strive to adopt best practices across the AI development lifecycle (i.e., development, disclosure and evaluation). |
| Incident Reporting | Incident reporting is an established practice that similarly applies to AI. Organizations should adopt vulnerability reporting to proactively discover vulnerabilities in their software and ensure that there are adequate internal processes to report any incidents for timely notification and remediation. |
| Testing and Assurance | Organizations are encouraged to engage third-party testing and assurance, as well as external audits, to provide transparency and foster greater credibility with end users. |
| Security | Organizations should adopt "security-by-design" as a fundamental security concept for AI. This means designing security into every phase of the systems development life cycle (including development, evaluation, operations and maintenance). New tools and security safeguards may also be developed, including input moderation tools and filters as well as digital forensics tools specifically for generative AI. |
| Content Provenance | Technical solutions are needed to reduce the potential for harm posed by realistic synthetic content created by generative AI. Examples include digital watermarking and cryptographic provenance to identify and flag content created with or modified by AI. |
| Safety and Alignment Research & Development (R&D) | Businesses should accelerate R&D in model safety and alignment for the global good. To make the development of safety and evaluation mechanisms more impactful, this should involve global cooperation to better leverage limited talent and resources. |
| AI for Public Good | Responsible AI also means empowering individuals and businesses to thrive in an AI-enabled world. To reap the exponential benefits of AI, the imperative is to democratize access to technology, support public sector AI adoption, upskill workforces, and develop sustainable technologies. |
Key takeaways
- The Framework, through its nine dimensions, highlights relevant pressing and growing concerns related to AI governance, and aims to set out a recommended approach to balance such concerns with innovation through generative AI.
- To foster a trusted ecosystem that safely harnesses generative AI, it is essential for all stakeholders to use the Framework as a reference point and adopt the best practices contained therein.
* * * * *
If you wish to speak with us on any of the dimensions or issues raised above, please reach out to us.
Relevant links:
Model AI Governance Framework for Generative AI issued by AI Verify Foundation and Infocomm Media Development Authority on 30 May 2024.
* * * * *
© 2024 Baker & McKenzie.Wong & Leow. All rights reserved. Baker & McKenzie.Wong & Leow is incorporated with limited liability and is a member firm of Baker & McKenzie International, a global law firm with member law firms around the world. In accordance with the common terminology used in professional service organizations, reference to a "principal" means a person who is a partner, or equivalent, in such a law firm. Similarly, reference to an "office" means an office of any such law firm. This may qualify as "Attorney Advertising" requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.