
The NIST Artificial Intelligence Risk Management Framework

Written by Kip Boyle | May 13, 2024

The U.S. National Institute of Standards and Technology (NIST) has published the Artificial Intelligence Risk Management Framework (AI RMF). This is an important development as AI becomes increasingly pervasive in our lives and businesses.

The AI RMF is not a knee-jerk reaction to the sudden popularity of AI chatbots like ChatGPT. NIST has been working on this framework for some time, as directed by the National Artificial Intelligence Initiative Act of 2020. The goal is to provide guidance for managing the risks of AI systems throughout their lifecycle.

Like NIST's other frameworks, such as the Cybersecurity Framework and Privacy Framework, the AI RMF was developed through extensive industry collaboration. NIST went out and asked: What works? What doesn't work? What should we be thinking about? The resulting framework reflects the best current thinking on AI risk management.

The AI RMF has two main parts: 1) Foundational information; and 2) The AI RMF Core. The foundational information provides important context and background, including a discussion of AI risks and trustworthiness characteristics like security, privacy, fairness, and transparency.

But the real meat is in the AI RMF Core. Similar to NIST's other frameworks, the Core has functions, categories, and subcategories. The four main functions are:

1. Govern – Cultivating a risk management culture 
2. Map – Recognizing context and identifying risks
3. Measure – Assessing, analyzing, and tracking identified risks
4. Manage – Prioritizing risks and taking action based on impact
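
To make the Core's structure concrete, here's a minimal Python sketch of how an organization might model functions, categories, and subcategories in its own tooling. The four functions come straight from the framework; the sample category and subcategory under Map are paraphrased placeholders, not NIST's exact wording or numbering.

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    id: str
    outcome: str

@dataclass
class Category:
    id: str
    name: str
    subcategories: list[Subcategory] = field(default_factory=list)

@dataclass
class Function:
    name: str      # Govern, Map, Measure, or Manage
    purpose: str
    categories: list[Category] = field(default_factory=list)

# The four functions are from the AI RMF Core itself; the Map category and
# subcategory below are paraphrased placeholders, not NIST's exact text.
ai_rmf_core = [
    Function("Govern", "Cultivate a risk management culture"),
    Function("Map", "Recognize context and identify risks", [
        Category("MAP-1", "Context is established and understood", [
            Subcategory("MAP-1.1", "Intended purposes and uses are documented"),
        ]),
    ]),
    Function("Measure", "Assess, analyze, and track identified risks"),
    Function("Manage", "Prioritize risks and act based on impact"),
]

for fn in ai_rmf_core:
    print(f"{fn.name}: {fn.purpose}")
```

Representing the Core as data like this makes it easy to attach your own owners, evidence, and status fields as you work through it.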

The scope of the Govern function seems to be at the organizational level, while Map, Measure, and Manage are more focused on specific AI use cases and systems. Applying the framework will require establishing your context, identifying use cases, and then mapping, measuring, and managing the risks for each one.
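
As a rough illustration of that workflow, here's a Python sketch of a per-use-case risk register. Everything in it, the use cases, the risks, and the 1-to-5 scoring scale, is a hypothetical stand-in for whatever your own Map and Measure steps actually produce.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) to 5 (frequent); illustrative scale
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Hypothetical use cases and risks; real entries come from your Map step.
use_cases = {
    "customer-support-chatbot": [
        Risk("Chatbot gives customers false information", 4, 4),
        Risk("Prompt injection exposes internal data", 2, 5),
    ],
    "resume-screening-model": [
        Risk("Model screens out protected groups", 3, 5),
    ],
}

# Manage: prioritize every identified risk across all use cases by score.
all_risks = [(uc, r) for uc, risks in use_cases.items() for r in risks]
for use_case, risk in sorted(all_risks, key=lambda t: t[1].score, reverse=True):
    print(f"[{risk.score:>2}] {use_case}: {risk.description}")
```

The sort at the end mirrors the Manage function: the highest-scoring risks across all of your use cases surface first.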

This is not a trivial undertaking. The AI RMF is dense and introduces a lot of new terminology. Measuring AI risks, in particular, is a complex challenge that will require new processes, metrics, and testing methodologies. Quantifying AI risks won't be easy, but the effort is necessary and worthwhile.
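
To give a flavor of what "measuring" can look like in practice, here's a minimal sketch of one candidate metric: demographic parity difference for a binary classifier. This is just one possible measure among many, and the predictions, groups, and threshold below are illustrative assumptions, not guidance from the framework.

```python
def selection_rate(predictions: list[int]) -> float:
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions) if predictions else 0.0

def demographic_parity_difference(preds_by_group: dict[str, list[int]]) -> float:
    """Gap between the highest and lowest group selection rates."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs split by a demographic attribute.
preds = {
    "group_a": [1, 0, 1, 1, 0, 1],
    "group_b": [0, 0, 1, 0, 0, 1],
}

gap = demographic_parity_difference(preds)
print(f"Demographic parity difference: {gap:.2f}")

# Illustrative tracking rule: flag the risk for the Manage function
# if the gap exceeds a threshold your organization has chosen.
THRESHOLD = 0.20
if gap > THRESHOLD:
    print("Fairness risk exceeds threshold; escalate per Manage function.")
```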

AI is already seeping into our organizations, often under the radar in "shadow AI" initiatives by individual employees and departments. The risks are real—from unintentional bias and inexplicable decisions to privacy violations and security vulnerabilities.

Will there be AI disasters? Very likely. Chatbots giving customers false information. Algorithms discriminating in harmful ways. Generative AI producing toxic content. The legal and reputational risks are immense. At some point, the grown-ups are going to demand AI governance.


Every organization needs to establish governance around AI, including policies, roles and responsibilities, and a culture of AI risk management. A good starting point is an AI Acceptable Use Policy to provide guardrails for employees.
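
One way to keep such a policy enforceable rather than aspirational is to express its guardrails as data that tooling can check as well as people can read. Here's a hypothetical sketch; every rule in it is an example, not a recommended policy.

```python
# Hypothetical AI Acceptable Use Policy guardrails expressed as data.
ACCEPTABLE_USE_POLICY = {
    "approved_tools": ["internal-llm-gateway"],  # assumes a vetted gateway exists
    "prohibited_inputs": ["customer PII", "source code", "trade secrets"],
    "required_review": ["externally published AI-generated content"],
    "owner": "AI Governance Committee",          # a role, not a person
}

def summarize_policy(policy: dict) -> None:
    """Print each policy rule in a readable form."""
    for key, value in policy.items():
        items = value if isinstance(value, list) else [value]
        print(f"{key.replace('_', ' ').title()}: {', '.join(items)}")

summarize_policy(ACCEPTABLE_USE_POLICY)
```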

The AI RMF is a work in progress and will continue to evolve, just like AI itself. But it provides an essential foundation for managing the risks of this powerful technology. Smart organizations will embrace the framework now, before AI chaos gets out of control. Leaving these risks unmanaged is not an acceptable alternative in a business context.

Is the AI RMF perfect? Nope. But it's an important step in the right direction. We're in the early days of enterprise AI adoption and risk management. Expect the guidance to get more specific and actionable over time, perhaps even splitting into multiple tailored frameworks.

In the meantime, I encourage every organization to study the AI RMF and start applying it in practice. Identify your use cases, map out the risks, and put governance in place. Engage a broad set of stakeholders to develop your policies. And share your experiences with the community. We're all learning together.

If you want to take a deeper dive, check out this two-part podcast episode I made with Jake Bernstein, a privacy and cybersecurity attorney with the law firm K&L Gates:

https://cr-map.com/podcast/153
https://cr-map.com/podcast/154

AI is a powerful technology with immense potential for both good and harm. By proactively managing the risks with the help of the AI RMF, we can tip the scales towards the positive outcomes. It won't be easy, but it's vitally important. The future is already here. We need to make sure it's one we want to live in.