In more detail
Nine dimensions to foster a trusted ecosystem
The Framework proposes nine dimensions to be considered together to foster a trusted AI ecosystem, namely:
- Accountability — Incentivising players along the AI development chain to be responsible for the protection of end users. Responsibility should be allocated based on the level of control each player has in the development chain.
- Data — Good-quality data is key to the development of GenAI. However, it is also crucial to consider how this need intersects with data protection laws, copyright laws and data governance.
- Trusted development and deployment — Best practices should be adopted to ensure meaningful transparency around model development and the implementation of baseline safety and hygiene measures.
- Incident reporting — Establishing incident reporting structures and processes for timely notification, remediation and continuous improvement of AI systems.
- Testing and assurance — Conducting external audits to ensure transparency and build greater credibility and trust with end users; adopting common standards for AI testing to ensure quality and consistency.
- Security — Adapting existing frameworks for information security, and developing new testing tools to address new threat vectors to GenAI models.
- Content provenance — Increasing transparency around how content is generated, so that end users can make informed decisions about the online content they consume, given the rapid creation of realistic synthetic content at scale.
- Safety and alignment research & development (R&D) — Accelerating investment in R&D to improve safety techniques and evaluation tools to address the potential risks that GenAI may bring.
- AI for public good — Increasing accessibility to GenAI by, for example, improving awareness and providing support to drive innovation and AI use among SMEs or partnering with communities with respect to digital literacy initiatives.
These nine dimensions seek to further the core principles of accountability, transparency, fairness, robustness and security. The goal is to develop a trusted AI ecosystem, where AI is harnessed for public good, and people embrace AI safely and confidently.
Public consultation
The IMDA and AI Verify Foundation welcome feedback and input to refine the proposed Framework. The consultation is open until 15 March 2024.
* * * * *
If you wish to speak with us on any of the dimensions or issues raised above, please reach out to us.
Relevant links:
Proposed Model AI Governance Framework for Generative AI issued by AI Verify Foundation on 16 January 2024
* * * * *
© 2024 Baker & McKenzie.Wong & Leow. All rights reserved. Baker & McKenzie.Wong & Leow is incorporated with limited liability and is a member firm of Baker & McKenzie International, a global law firm with member law firms around the world. In accordance with the common terminology used in professional service organizations, reference to a "principal" means a person who is a partner, or equivalent, in such a law firm. Similarly, reference to an "office" means an office of any such law firm. This may qualify as "Attorney Advertising" requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.