The AI Ethics Principles are a significant step forward in the Kingdom's efforts to ensure the responsible and ethical development and use of AI technology. The principles are comprehensive and well aligned with global best practices, and they will be essential in ensuring that AI is used to benefit society and the environment while avoiding the potential harms AI can cause.
Key takeaways
The AI Ethics Principles to be taken into account when designing and developing AI systems are:
- Fairness

The fairness principle requires actions to eliminate bias, discrimination, or stigmatization of individuals, communities, or groups in the design, data, development, deployment, and use of AI systems. When designing AI systems, it is essential to apply fair, objective standards that are inclusive, diverse, and representative of all or targeted segments of society.
- Privacy & Security

The privacy and security principle represents overarching values that AI systems are required to uphold. AI systems must be built in a safe way that respects the privacy of the data collected and applies the highest standards of data security processes and procedures to keep data confidential and prevent breaches.
- Humanity

The humanity principle highlights that AI systems should be built using an ethical methodology so that they are just and ethically permissible. Grounded in intrinsic and fundamental human rights and cultural values, they should generate a beneficial impact on individual stakeholders and communities through the adoption of a more human-centric design approach.
- Social & Environmental Benefits
This principle embraces social and environmental priorities that benefit individuals and the broader community, with a focus on sustainable goals and objectives. AI systems should contribute to empowering and complementing social and ecological progress while addressing associated social and environmental ills.
- Reliability & Safety

The reliability and safety principle ensures that an AI system adheres to its specifications and behaves as its designers intended and anticipated. Reliability is a measure of consistency and provides confidence in how robust a system is, while safety ensures that the AI system does not pose a risk of harm or danger to society or individuals. As an illustration, AI systems such as autonomous vehicles can threaten people's lives if living organisms are not adequately recognized, specific scenarios are not trained for, or the system malfunctions.
- Transparency & Explainability
This principle is crucial for building and maintaining trust in AI systems and technologies. According to this, AI systems must be built with a high level of clarity and explainability, as well as features to track the stages of automated decision-making, particularly those that may lead to detrimental effects on individuals. Namely, the AI system should be designed to include an information section in the platform to give an overview of the AI model decisions as part of the overall transparency application of the technology.
- Accountability & Responsibility
This principle is closely related to the fairness principle. It holds designers, vendors, procurers, developers, owners, and assessors of AI systems, as well as the technology itself, ethically responsible and liable for decisions and actions that may pose risks or cause adverse effects to individuals and communities. Proper controls and oversight mechanisms must be in place to prevent harm and misuse of the technology.
Entities shall be responsible for ensuring that their AI documents are published in compliance with the AI Ethics Principles. The authority may measure their level of commitment, support them in evaluating their AI systems, and make recommendations on how to improve their compliance.
* * * * *
If you would like any assistance in any data and technology-related matters or issues generally, please feel free to contact our lawyers.
* Content prepared by Legal Advisors in association with Baker & McKenzie Limited.