In more detail
The Guidelines are designed to help system owners secure AI throughout its lifecycle, addressing both traditional cybersecurity risks and emerging threats specific to AI, such as adversarial machine learning. They emphasize a proactive approach, advocating for AI systems to be secure by design and secure by default. This means integrating security measures from the outset, rather than as an afterthought.
In particular, the Guidelines highlight the following:
- Lifecycle approach to AI security, a comprehensive approach to securing AI systems that covers stages from planning and design, through development, deployment, and operations and maintenance, to end-of-life – This ensures that security is integrated at every phase, that potential risks are addressed early, and that defenses continuously adapt to new threats. System owners are encouraged to conduct regular risk assessments and implement security measures tailored to each stage of the AI lifecycle.
- Adversarial machine learning and supply chain security, which underlines the importance of protecting AI systems from adversarial machine learning attacks and securing the AI supply chain – This includes safeguarding training data, models and software libraries from manipulation and ensuring that all components adhere to stringent security standards. The Guidelines also recommend monitoring for adversarial activities and implementing robust defenses against data poisoning and model evasion attacks.
To complement the Guidelines, the CSA has also released the Companion Guide, which curates voluntary, practical treatment measures and controls that system owners may consider when securing their adoption of AI systems. Each measure/control is designed to be used independently, offering flexibility to customize which measures to evaluate and which mitigations to adopt, based on a particular organization's specific needs.
The Companion Guide emphasizes the following:
- Holistic risk assessment, which highlights the importance of conducting a comprehensive risk assessment tailored to AI systems – This involves identifying potential security risks at each stage of the AI lifecycle, from planning and design to deployment and maintenance. By systematically evaluating these risks, organizations can prioritize and implement appropriate security measures, ensuring robust protection against both traditional and AI-specific threats.
- Supply chain security, which involves securing the AI supply chain, including by verifying the integrity of data, models and software libraries used in AI systems – The Companion Guide recommends implementing secure coding practices, conducting regular vulnerability scans, and ensuring that all components are sourced from trusted providers. These measures help mitigate risks such as data poisoning and model backdoors, which can compromise the security and reliability of AI systems.
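The integrity-verification measure described above can be illustrated with a short sketch. This is not a control prescribed by the Companion Guide itself, but a minimal, assumed example of one common technique: pinning SHA-256 digests of model artifacts in a manifest at build time and re-checking them before use, so that tampering with weights or data files is detected. The file names and manifest format here are hypothetical.

```python
import hashlib
import tempfile
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifacts(manifest: dict[str, str], base_dir: Path) -> list[str]:
    """Return the names of artifacts whose digests do NOT match the manifest."""
    return [
        name
        for name, expected in manifest.items()
        if sha256_of(base_dir / name) != expected
    ]


# Demonstration with a temporary "model" file (hypothetical artifact name).
with tempfile.TemporaryDirectory() as tmp:
    base = Path(tmp)
    artifact = base / "model.bin"
    artifact.write_bytes(b"example model weights")

    manifest = {"model.bin": sha256_of(artifact)}  # pinned at build time
    assert verify_artifacts(manifest, base) == []  # untampered: check passes

    artifact.write_bytes(b"tampered weights")      # simulate supply-chain tampering
    assert verify_artifacts(manifest, base) == ["model.bin"]
```

In practice, a manifest like this would be distributed through a separate, trusted channel (or signed), so that an attacker who can alter an artifact cannot also alter its expected digest.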
The Guidelines and Companion Guide are intended to be living documents, updated regularly to reflect new developments in AI security. The CSA encourages feedback and suggestions (at aisecurity@csa.gov.sg) from the community to continuously improve these resources.
As the Guidelines and Companion Guide mainly address cybersecurity risks to AI systems, neither addresses AI safety or other related aspects, such as transparency and fairness, in any significant detail.
Key takeaways
As reported in our August 2024 client alert, these documents highlight the CSA's dedication to a cooperative and forward-thinking strategy in enhancing AI system security.
As Singapore strives to lead in AI innovation, these resources will be crucial in maintaining trust and ensuring that the nation's AI systems remain resilient against emerging threats.
* * * * *
For further information and to discuss what this development might mean for you, please get in touch with your usual Baker McKenzie contact.
* * * * *
© 2024 Baker & McKenzie.Wong & Leow. All rights reserved. Baker & McKenzie.Wong & Leow is incorporated with limited liability and is a member firm of Baker & McKenzie International, a global law firm with member law firms around the world. In accordance with the common terminology used in professional service organizations, reference to a "principal" means a person who is a partner, or equivalent, in such a law firm. Similarly, reference to an "office" means an office of any such law firm. This may qualify as "Attorney Advertising" requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.