Regulatory Concept

AI Risk Management Framework

A structured methodology published by the US National Institute of Standards and Technology (NIST) that provides organisations with a systematic approach to identifying, assessing, and mitigating risks associated with AI systems throughout their lifecycle. The NIST AI RMF (AI 100-1) is a voluntary, non-sector-specific framework applicable to all AI technologies.

Definition

The NIST AI Risk Management Framework (AI RMF), published in January 2023 as NIST AI 100-1, is a comprehensive guide for managing risks associated with AI systems. The framework is organised around four core functions: Govern (establishing AI risk management policies and culture), Map (understanding the AI system’s context, capabilities, and risks), Measure (assessing and tracking identified risks using quantitative and qualitative methods), and Manage (prioritising and acting on risk assessments through mitigation, monitoring, or acceptance). The AI RMF is designed to be used throughout the AI lifecycle — from design and development through deployment and decommissioning — and is intended to complement rather than replace existing risk management frameworks.

How It Relates to AI Threats

The NIST AI RMF is a foundational governance tool within both the Human-AI Control domain and the Security and Cyber Threats domain. It gives organisations a structured approach to identifying AI-specific risks — including bias, security vulnerabilities, privacy violations, and safety failures — before they materialise as incidents. The framework's Map function aligns with threat-modelling practices, its Measure function supports risk scoring and prioritisation, and its Manage function guides mitigation strategies. Unlike the EU AI Act, which is legally binding, the AI RMF is voluntary, but it has become a de facto standard for AI governance in the United States and is referenced in US government procurement requirements.
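To make the Map → Measure → Manage flow concrete, the sketch below models a minimal AI risk register in Python. This is purely illustrative: the AI RMF does not prescribe a data model or a scoring formula, and the class names, the 1-5 scales, the likelihood × impact score, and the example risks are all assumptions for demonstration.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the NIST AI RMF does not mandate any particular
# scoring scheme. Likelihood x impact is one common, simple convention.

@dataclass
class Risk:
    name: str            # identified during the Map function
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe)
    mitigation: str = "unassigned"  # assigned during the Manage function

    def score(self) -> int:
        # Measure: a simple quantitative rating of the mapped risk
        return self.likelihood * self.impact

def prioritise(risks: list[Risk]) -> list[Risk]:
    # Manage: address the highest-scoring risks first
    return sorted(risks, key=lambda r: r.score(), reverse=True)

# Example register with invented entries drawn from the risk types above
register = [
    Risk("training-data bias", likelihood=4, impact=4),
    Risk("prompt-injection vulnerability", likelihood=3, impact=5),
    Risk("privacy leakage via model outputs", likelihood=2, impact=5),
]

for risk in prioritise(register):
    print(f"{risk.score():>2}  {risk.name}")
```

In practice an organisation would replace the numeric score with whatever quantitative and qualitative measures its Measure activities produce; the point is only that Map outputs (identified risks) feed Measure (scoring), which in turn drives Manage (prioritised mitigation).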

Why It Occurs

  • The rapid deployment of AI systems across sectors created demand for standardised risk management guidance
  • Existing risk frameworks (ISO 31000, NIST CSF) did not adequately address AI-specific risks such as bias, hallucination, and emergent behaviour
  • US government agencies needed a framework for evaluating AI systems in federal procurement and deployment
  • Industry stakeholders needed a common vocabulary and methodology for discussing and managing AI risks
  • The voluntary, flexible design allows adoption across diverse AI applications without sector-specific regulatory constraints

Real-World Context

The NIST AI RMF has been adopted by federal agencies as part of Executive Order 14110 on Safe, Secure, and Trustworthy AI. Major technology companies reference the framework in their AI governance documentation. The framework is complemented by the NIST AI RMF Playbook, which provides practical implementation guidance, and the Generative AI Profile (NIST AI 600-1), which extends the framework to address generative AI-specific risks. The AI RMF is increasingly cited alongside the EU AI Act as part of the global AI governance landscape, and organisations operating internationally often map their risk management practices to both frameworks.

Last updated: 2026-04-03