Key takeaways
US laws increasingly refer to the NIST AI Risk Management Framework as a standard
For example, in October 2023 the White House issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order aims to define the trajectory of AI adoption, governance and use within the US Government, and it cites the AI RMF and other NIST resources approvingly. Similarly, in September 2023 California Governor Gavin Newsom issued an Executive Order on AI promoting responsible AI use across California public sector operations. Among other things, the order directs state agencies to develop guidelines for public sector procurement and use of generative AI based on the AI RMF. California has since published its public sector generative AI guidelines, which draw materially on the concepts and principles in the NIST AI RMF.
In the private sector, California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which Governor Newsom has until September 30 to sign into law or veto, would require AI developers and operators of computing clusters to consider NIST guidance. Likewise, Colorado's Consumer Protections for Artificial Intelligence will require deployers of high-risk AI systems to implement a risk management policy and program that considers the AI RMF. Notably, Colorado's law gives organizations that comply with the AI RMF an affirmative defense to regulatory enforcement. See our commentary on the California bill and Colorado law here and here. These developments highlight a growing trend of lawmakers looking to the NIST AI RMF as a standard for AI risk management.
But what does it mean to comply with the AI RMF? The sections below summarize key points from the AI RMF and offer thoughts for organizations to consider.
The AI RMF is about mitigating risks and maximizing AI trustworthiness
The AI RMF suggests that the main objectives for responsible AI actors (i.e., any entity that plays a role in the AI system lifecycle) are to manage risks and maximize trustworthiness when developing and using AI systems. Risk is a function of the magnitude of harm that would arise if an event occurs and the likelihood of that event occurring. The AI RMF describes different harms that may arise when developing and using AI (see Figure 1 of the AI RMF) and lists over a dozen reasons why AI risks differ from traditional software risks (see Appendix B of the AI RMF). The AI RMF also lists the following characteristics of trustworthy AI:
- Valid: The AI system fulfills the requirements for its intended use or application.
- Reliable: The AI system performs as required, without failure, under the conditions defined for it.
- Safe: The AI system does not endanger human life, health, property, or the environment.
- Secure: The AI system maintains its functions and structure (including confidentiality, integrity and availability) in the face of internal and external change.
- Resilient: The AI system is able to return to normal function after an unexpected adverse event.
- Accountable: Organizations are responsible for the consequences of AI systems.
- Transparent: Organizations should provide appropriate and tailored information about an AI system and its outputs to individuals interacting with the system.
- Explainable: Organizations should be able to describe the mechanisms underlying AI systems' operation.
- Interpretable: Organizations should be able to describe the meaning of AI systems' output in the context of their designed functional purposes.
- Privacy-enhanced: Organizations should align with norms and practices that help to safeguard human autonomy, identity, and dignity, including freedom from intrusion and individuals' agency to consent to the disclosure or control of facets of their identities.
- Fair with harmful bias managed: Organizations should align with values of equality and equity while addressing systemic, computational, statistical and human-cognitive biases.
Mitigating risks and maximizing trustworthiness go hand in hand. The more an AI system embodies the characteristics listed above, the greater the potential for AI actors to identify and mitigate its attendant risks. The AI RMF acknowledges that different organizations have different risk tolerances and priorities, but emphasizes that broader adoption of its principles will allow more of society to benefit from AI while also being protected from its potential harms.
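To make the AI RMF's definition of risk concrete, here is a minimal sketch that scores hypothetical AI harms as a function of likelihood and magnitude. The 1-5 scales, scoring function, and scenarios are our own illustrative assumptions; the AI RMF does not prescribe a particular risk formula.

```python
# Minimal sketch of the AI RMF's notion of risk: a function of the
# magnitude of harm if an event occurs and the likelihood of that event.
# The 1-5 scales and example scenarios are illustrative assumptions only.

def risk_score(likelihood: int, magnitude: int) -> int:
    """Composite risk, here simply likelihood x magnitude (each rated 1-5)."""
    return likelihood * magnitude

# Hypothetical harm scenarios for a deployed AI system
scenarios = {
    "chatbot discloses personal data": risk_score(likelihood=2, magnitude=5),
    "model drift degrades output accuracy": risk_score(likelihood=4, magnitude=2),
}

# Rank scenarios so the highest-risk events are addressed first
for event, score in sorted(scenarios.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{event}: risk score {score}")
```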
Governance is key because it enables organizations to map, measure and manage AI risks
NIST describes four functions to help organizations address the risks of AI systems. The first, "govern", is at the core of an effective risk management program because it enables the organization to carry out the other three: "map", "measure", and "manage". The "govern" function entails establishing and overseeing the policies, processes, and practices that guide how the organization manages AI risks. "Mapping" establishes the context, purpose, and potential risks of an AI system. "Measuring" involves assessing and monitoring AI system performance, risks and impacts to ensure they align with predefined objectives and to mitigate potential harms. And "managing" involves prioritizing, responding to, and mitigating identified risks throughout the AI system's lifecycle.
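As a rough illustration of how the four functions relate, the sketch below models a single pass through "map", "measure", and "manage" under policies set by "govern". All class names, fields and thresholds are hypothetical; the AI RMF does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the AI RMF's four functions expressed as a
# simple workflow. Names, fields and the tolerance threshold are hypothetical.

@dataclass
class GovernancePolicy:        # GOVERN: policies and oversight set the frame
    risk_tolerance: int        # maximum acceptable risk score
    review_cadence_days: int   # how often risks are reassessed

@dataclass
class AIRisk:
    description: str           # MAP: risk identified in the system's context
    score: int                 # MEASURE: risk assessed via testing/monitoring
    mitigations: list[str] = field(default_factory=list)

def manage(risks: list[AIRisk], policy: GovernancePolicy) -> list[AIRisk]:
    """MANAGE: prioritize risks exceeding tolerance, highest score first."""
    flagged = [r for r in risks if r.score > policy.risk_tolerance]
    return sorted(flagged, key=lambda r: r.score, reverse=True)

policy = GovernancePolicy(risk_tolerance=6, review_cadence_days=90)
risks = [AIRisk("biased loan-approval outputs", score=12),
         AIRisk("stale training data", score=4)]
for r in manage(risks, policy):
    print(f"Respond to: {r.description} (score {r.score})")
```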
As a companion to the AI RMF, NIST published a Playbook listing numerous suggested actions for advancing each of the four functions, along with the AI actors who should be involved in performing each action. NIST states that the "Playbook is neither a checklist nor set of steps to be followed in its entirety." But organizations seeking to comply with the AI RMF, or to consider the framework fully, should probably go through each of the actions the Playbook suggests, document whether the action is relevant to the organization and why or why not, and list the ways in which the organization addresses each relevant action. The organization might then holistically assess whether its measures sufficiently address each action in the Playbook, implement a plan to address any gaps, and repeat this process regularly. Organizations should take care not to waive attorney-client privilege or work-product protection when seeking legal advice on mitigating AI risks.
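One way to document that exercise is a simple record for each Playbook action capturing relevance, rationale, and existing measures, from which remaining gaps can be reported. The sketch below is hypothetical: the action IDs mirror the Playbook's function-and-subcategory numbering, and the example entries are invented.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Playbook gap assessment described above.
# The example entries and fields are illustrative assumptions.

@dataclass
class PlaybookAction:
    action_id: str        # e.g., a suggested action under GOVERN 1.1
    relevant: bool        # does this action apply to the organization?
    rationale: str        # why or why not
    measures: list[str] = field(default_factory=list)  # how we address it

    def is_gap(self) -> bool:
        """A relevant action with no documented measures is a gap."""
        return self.relevant and not self.measures

assessment = [
    PlaybookAction("GOVERN 1.1", True, "We develop and deploy AI internally",
                   measures=["AI acceptable-use policy adopted 2024"]),
    PlaybookAction("MAP 5.2", True, "External users are affected"),  # gap: no measures yet
    PlaybookAction("MEASURE 2.6", False, "No safety-critical use cases"),
]

for action in assessment:
    if action.is_gap():
        print(f"Gap to remediate: {action.action_id} ({action.rationale})")
```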
The AI RMF represents a solid starting point
The AI RMF can help organizations structure and organize their compliance journey. Its companion Playbook catalogs myriad actions that organizations might take to operationalize the AI RMF's guidance. However, the AI RMF and Playbook are intended to help organizations in nearly any sector with nearly any use case. The documents therefore include many general statements, some of which may not apply to your organization's proposed development or use of AI systems. Once your organization has reviewed the AI RMF and Playbook, consider how these tools apply specifically to your organization's culture, practices and risk management strategies. Other NIST publications may also offer more tailored, concrete recommendations on specific AI use cases. For instance, NIST's Generative AI Profile examines how the AI RMF might apply to the development and use of generative AI technologies in particular. In addition, NIST plans to update the publication regularly to enhance its relevance and usefulness. Organizations should therefore revisit the document periodically to ensure their AI governance programs reflect the latest NIST guidance.