
NIST Mystery Science Theater

Katherine Forrest and Anna Gressel introduce you to NIST—the National Institute of Standards and Technology, an agency of the U.S. Department of Commerce—and its role in shaping the present and future of artificial intelligence in the United States.


Katherine Forrest: Good morning, everyone, and welcome to today's episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel.

Katherine Forrest: And Anna, it's fitting that neither of us are in New York City again. I'm in Delaware, and where are you?

Anna Gressel: So I'm actually in Napa, California right now for Ms. JD's LaddHer Up retreat. I used to be on the host committee for this event and on the board of Ms. JD. It's my absolute favorite event of the year; it brings together women General Counsels and women associates in their first six years of practice, and it is just life-changing. So I am really riding a high this morning as we're recording this.

Katherine Forrest: Okay, well that's fantastic. And I have to tell you that I'm riding high also down here in Delaware because I'm very excited about today's episode. I really truly am because I think it's going to be a mystery solver for some people.

Anna Gressel: Well everyone loves a solution to a mystery, Katherine.

Katherine Forrest: Right, you know, my father, he was a mystery writer and he thought up mysteries all the time and how to solve them. I think today's is, you know, not quite as exciting as some sort of locked room murder, but it's exciting, nonetheless.

Anna Gressel: So what is today's mystery, Katherine?

Katherine Forrest: So today's mystery is one that people have been thinking about and not really wanting to say aloud, which is what is this NIST thing, this N-I-S-T thing that everybody in the AI world keeps talking about? NIST this, NIST that, NIST the other thing.

Anna Gressel: So NIST, and by NIST you mean the U.S. Department of Commerce's National Institute of Standards and Technology.

Katherine Forrest: Right, we often hear statements about the NIST framework scattered randomly around the AI space. And first of all, people talk about it as if it's singular, when it's actually more than one thing. There are three different documents that we're going to talk about today. But what we're going to do is solve the mystery of the NIST.

Anna Gressel: Okay, I'm all for that. So let's do it. I'll kick us off with the first of the documents that I think you were going to lead us into. And that's called the Artificial Intelligence Risk Management Framework or AI RMF 1.0. There's actually a numbering system. It's numbered NIST AI 100-1.

Katherine Forrest: Right, and the NIST AI Risk Framework 1.0 came out in January of ‘23.

Anna Gressel: Yeah, and this is an important document. I mean, I think sometimes we think about standards as being, you know, industry documents for technical folks, but that is not always the case. Here, actually, the NIST AI Risk Management Framework is cited in certain regulations, and that gives it a lot of force and a lot of importance. And I'll just call out one of those, which is important: the Colorado AI Act. We've talked about that in other episodes, but Colorado actually creates a rebuttable presumption that a developer or deployer used reasonable care, and they can show that by showing they have an up-to-date risk management policy that leverages NIST's risk management framework.

Katherine Forrest: Right, and, you know, that is so important for people to understand because we think of AI as not having a lot of guidance in terms of regulatory frameworks. But here we've got Colorado pointing to NIST. And let's dig a little bit into some of what that NIST 1.0 does. It was, and is, aimed more at narrow AI or single-task AI, almost by virtue of when it came out, which was January of ‘23. And we'll get to something specific to generative AI called the NIST Generative AI Profile. But let's go a little bit more into 1.0 for now.

Anna Gressel: Yeah, we can't possibly go through all of it; it's actually a pretty meaty document. But here are the highlights. So it's voluntary, other than, as we mentioned, being cited in regulations, which might create a real reason for people to follow it. And it's intended for developers, deployers and users of AI systems.

Katherine Forrest: And that's really important because as we get to the next two NIST publications, one of which, the third one, is still under consideration, it's important to remember that all three parts of the chain you just mentioned, Anna, the developers, deployers and users, are covered by these NIST documents.

Anna Gressel: Yeah, and the AI RMF 1.0 tells us that it's intended to be practical and to be able to adapt to new technology.

Katherine Forrest: Yeah, in other words, it's trying to set some basic standards without dictating numerical boundaries or anything of that nature, and without making anything obligatory.

Anna Gressel: So importantly, the 1.0 risk management framework develops what it and subsequent NIST publications call the core of the framework. And that core is a description of four specific functions to help organizations address the risks of AI systems in practice. And those are Govern, Map, Measure and Manage.

Katherine Forrest: And I like to say those really fast. Govern, Map, Measure and Manage. Govern, Map, Measure, Manage as the NIST core. And these concepts are laid out in some detail in the 1.0 framework. But let's talk about some other things that are actually in that framework as well. The NIST 1.0 defines risk in an interesting and sensible way, and it's going to be a mouthful. And I'll just preview that in July of 2024 NIST released something that sort of shortens this.

But the way that risk is defined is “a composite measure of an event's probability of occurring and the magnitude or degree of the consequences of the corresponding event.” Let me do that again. So risk is defined as the “composite measure of an event's probability of occurring and the magnitude or degree of the consequences of the corresponding event.” So in other words, it's the probability measured against some sense of how bad the consequences of an outcome could be.
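That composite measure can be pictured as a simple product of likelihood and severity. Here is a minimal sketch in Python, assuming a 0-to-1 probability and a hypothetical 1-to-10 severity scale; NIST does not prescribe any particular formula or scale, so this is just an illustration of the concept:

```python
# Illustrative only: NIST does not prescribe a formula; the scales below are assumptions.
def composite_risk(probability: float, magnitude: float) -> float:
    """Composite measure of an event: likelihood of occurring (0 to 1)
    times the magnitude of its consequences (hypothetical 1-to-10 scale)."""
    return probability * magnitude

# Example: a fairly unlikely event (25% chance) with serious consequences (8 out of 10).
print(composite_risk(0.25, 8))  # 2.0
```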

Anna Gressel: Yeah, and that's actually a pretty typical construct, probability and consequence, that's used in all different kinds of risk management frameworks. But what's interesting, and I think what NIST 1.0 does well, is it recognizes that AI risks are neither well-defined nor easy to measure. This is a very hard process for a lot of organizations.

Katherine Forrest: And you see that across all three of the NIST documents we're going to talk about today: there really is a lack of specifics as to how to measure or quantify risk. NIST wants companies to come up with what best fits the company in terms of that kind of measurement and quantification.

Anna Gressel: And the Risk Management Framework 1.0 also points out that the type and level of risks that a developer might face could be very different from those of a deployer or user.

Katherine Forrest: Right, and it also mentions a couple of other things that I'll just tick through and then we can move on to the other two NIST documents. NIST 1.0 talks about risks that can be different at different points in time in the life cycle of a model, risks that in a laboratory can be different from those in real-life settings, and the kind of inscrutability of black box systems that can make them hard to evaluate for risk.

Anna Gressel: Definitely. And before we move on, let's mention that 1.0 also says that risk tolerances can be specific to an organization or person. An organization should prioritize risks and allocate resources according to those priorities. And that's really important, taking that kind of risk-based approach.
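A minimal sketch of what that risk-based prioritization might look like in practice, again using the simplified probability-times-magnitude score from above and a purely hypothetical risk register; NIST leaves the actual scoring, tolerances and thresholds to each organization:

```python
# Hypothetical risk register: names, probabilities and magnitudes are invented
# for illustration; an organization would supply its own assessments.
risks = [
    {"name": "hallucinated output reaches customers", "probability": 0.40, "magnitude": 6},
    {"name": "training data privacy breach", "probability": 0.05, "magnitude": 9},
    {"name": "model bias in screening decisions", "probability": 0.20, "magnitude": 8},
]

tolerance = 1.0  # organization-specific threshold; anything above it gets resources first

# Score each risk with the simplified composite measure, then rank highest first.
for risk in risks:
    risk["score"] = risk["probability"] * risk["magnitude"]

for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    flag = "prioritize" if risk["score"] > tolerance else "monitor"
    print(f'{risk["name"]}: score {risk["score"]:.2f} -> {flag}')
```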

Katherine Forrest: Okay. So let's go now to the next in line for the NIST mystery solution, which is a document that's numbered NIST AI 600-1, and it's called the Artificial Intelligence Risk Management Framework: Generative AI Profile.

Anna Gressel: I feel like we should title this episode, Katherine, NIST Mystery Science Theater. That would make me happy. So 600-1 came out in July of 2024, which makes it a lot more recent than the original risk management framework.

Katherine Forrest: Yeah, it really wasn't very long ago. So why don't you start us off on that one, Anna?

Anna Gressel: Yeah, so the GenAI Profile, or 600-1, says it was developed at least in part in response to the White House Executive Order, which I think we've talked about in some detail. And it's intended to define risks that are novel to or exacerbated by the use of generative AI. And then it uses those core concepts we talked about earlier from the initial risk management framework: Govern, Map, Measure and Manage.

Katherine Forrest: I love that. Govern, Map, Measure, Manage. Let's say it together. Govern, Map, Measure, Manage. No, you're not going to go there. Okay. All right. Well, this document uses the same definition of risk as the mouthful that I just said a few minutes ago, but now it's got a little bit of a pithier way of describing it. It defines risk as just the probability of occurrence with magnitude of consequence. So they've taken fewer words, same concept.

Anna Gressel: Mmhmm, it's much smoother.

Katherine Forrest: Right. And as a GenAI-specific document, NIST 600-1 references some risks that are not even known to the developer, but that can materialize abruptly or over time.

Anna Gressel: Mmhmm, that's a concept of emergent risk, but also risk where the harm becomes worse over time, like emotional harm or disinformation, both of which can worsen over some sort of time scale.

Katherine Forrest: Right, and let's quickly go over a few of the risks this 600-1 NIST document identifies. There's CBRN information or capabilities, which we've talked about before: chemical, biological, radiological and nuclear. There are hallucinations, or confabulations as the document also calls them; dangerous, violent or hateful content; data privacy; environmental impacts, which interestingly include impacts on the planet generally; bias; and then something I find interesting called human-AI configuration, which is set forth as its own independent risk. That's humans either feeling risk averse to AI or getting too emotionally entangled with it.

Anna Gressel: Yeah, that's fascinating that it uses the term entangled.

Katherine Forrest: Yeah, I find that really interesting.

Anna Gressel: And the last few risks it identifies are information integrity and information security, intellectual property and various issues with the value chain and third party component integrations into models, which would merit, I think, one or more deep dive discussions. Super interesting stuff.

Katherine Forrest: Right, it's a really useful document, and I recommend our audience take a look and dive into it as their business organizations might need.

Anna Gressel: So let's get to the final NIST mystery solution for the day. And that's the third of these documents. This one is called Managing Misuse Risk for Dual-Use Foundation Models. And that's NIST AI 800-1.

Katherine Forrest: And while this one came out in July of 2024, it's not done yet. Its comment period has just expired, and so we're still waiting for what might be a changed and final version.

Anna Gressel: And this came out in large part in response to the White House Executive Order, which also uses this concept of dual-use foundation models, which are really these highly capable frontier models. And that's also a concept we've seen in the EU AI Act.

Katherine Forrest: Right, and though the term frontier model is sometimes used a lot more colloquially in the United States to mean just models at the forefront, and so you hear a lot of loose use of the phrase, a frontier model here, or a dual-use model, is a truly highly capable model.

Anna Gressel: Yeah, and that concept of dual-use, I mean, traditionally dual-use means things that have both military and civilian applications. So here this concept of dual-use in the AI space really means models that are so highly capable that they could result in real and potentially catastrophic harm, or that could be weaponized if put into the wrong hands.

Katherine Forrest: Right, so this NIST 800-1 document warns us that these models can have, again, emergent or unknown capabilities.

Anna Gressel: Yeah, and it adds that it's really difficult to know how scaling a model, meaning making it bigger or using it in different ways, will impact its performance or its capabilities.

Katherine Forrest: And the relationship between size and potential for harm isn't really even fully understood.

Anna Gressel: Yeah, and the methods to safeguard the public are nascent. We're just figuring them out. But we have to be careful because testing models under controlled conditions may not really reveal how they behave under real world conditions. So it's a real challenge on the testing.

Katherine Forrest: And that's something that NIST 800-1 points out, this difference between laboratory conditions and real-world conditions. And it asks developers and deployers to ensure that, once risks are identified, they are managed before deploying a model.

Anna Gressel: And it says that developers or deployers should collect information about misuse even after deployment.

Katherine Forrest: But again, these are voluntary recommendations about disclosures and practices. Interestingly, though, if someone were to comply with them, there'd be a lot of information publicly available about these risks. But they are voluntary.

Anna Gressel: So that was a huge amount of information for our audience, Katherine, I think.

Katherine Forrest: Yeah, people may have to break this into two to listen to it. They're either going to have to do that, or they're going to have to take their dog for a whole long run instead of just the length of a dog walk. We try to make these podcasts the length of a dog walk, but this one may be more like a dog run. But I think we've solved the mystery of this thing called NIST that we hear about every day. And I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel. Thanks so much for joining us.


© 2024 Paul, Weiss, Rifkind, Wharton & Garrison LLP
