
Transparency and AI

In this week’s episode of “Waking Up With AI,” Katherine Forrest and Anna Gressel describe what it means for an AI system to be transparent, and how the shift from narrow AI to generative AI has made transparency harder.


Katherine Forrest: Good morning everyone and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel.

Katherine Forrest: And Anna, you and I were just speaking about the fact that I don't sound really as terrible with this particular headset as I thought I was going to sound. I'm traveling and I don't have my travel mic, so I hope the audience can put up with me.

Anna Gressel: Well, I actually think you sound pretty good, for what it's worth. But I think it's also a little ironic, because today's episode is all about how humans can understand AI. And now we may have to learn how to understand not just AI, but you too, Katherine.

Katherine Forrest: Right. Well, I have confidence that my little sort of $15 headset from Amazon will be sufficient. All right, let's launch into it.

Anna Gressel: Today we're going to talk about the concept of transparency in AI. And transparency is a word we hear a lot from regulators. It's used in many of the state laws relating to AI that have been passed. So we'll lay out what the concept of transparency really is and why it matters.

Katherine Forrest: I think it's a particularly interesting topic because there really has been a change in what transparency means and whether it can be achieved with large language models. So between, say, 2022 and today, there really has been a significant change.

Anna Gressel: Yep, that's exactly right. Narrow AI, the kind of AI we had been working with back in 2018, 2019 and onwards, does a single task, and it really describes the kind of AI that existed before LLMs, where there was more real transparency within the models.

Katherine Forrest: Right, so let's back up and talk about what the word transparency in connection with AI meant before LLMs came and really changed it all. And why don't you just give us how you define it, Anna.

Anna Gressel: So I'll define transparency here, I think, in the way that it is used just generally in ordinary English language. And what I mean by that is the ability to see through something or to have no real barrier that prevents someone from seeing directly through something.

Katherine Forrest: Right, and with narrow AI, and by that I mean one of the AI tools that did a single task, as we've just said, but also one that was not and is not architected around a neural network, because there's still a lot of narrow AI out there. With narrow AI, transparency meant the ability to look at an AI model and examine the data set that went into it, the inputs or factors that the model would use in making its predictions or decisions, and how those inputs are weighted, which means essentially how much importance is given to a particular input or factor. From all of that, you can tell an awful lot about why a particular decision or prediction is rendered as the output.

Anna Gressel: Right, so the ability to look at the model and how it worked and the factors that were important was really important to understanding how that model actually made a decision and prediction. So let's just take ourselves back in time to 2018, 2019, 2020, when we worked a lot with narrow AI models. There were often actual ways to determine which inputs were weighted more strongly by the model. And that's just kind of a fancy way of saying which factors contributed most to the ultimate decision or recommendation.

Katherine Forrest: Right, that's really at the heart of the concept of transparency in AI. Can you understand why the model is coming up with the answers that it's coming up with? Let me give you an example. We'll take financial services and have a hypothetical company that might be using a narrow AI tool to predict whether a particular individual is a good candidate for a credit card. And the tool might be using a data set of, say, prior credit card holders that spanned a five-year period. And you could actually examine the tool and see how the model treated characteristics such as income, home ownership, gender, race, education, any number of factors. And if you wanted to adjust the inputs, take one out, for instance, like race, gender or age, you could do that.

Anna Gressel: Yeah, and you could actually look and see which characteristics the model took into consideration and how it weighed or valued them.

Katherine Forrest: Right, and that's the transparency, the ability to look at the factors, at the weights, and the data set and understand a little bit more about how the predictions come about.
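To make that concrete, here is a minimal sketch of narrow-AI transparency in the spirit of the credit card example above, assuming a scikit-learn style workflow. The feature names and toy data are hypothetical, purely for illustration, not a real underwriting model.

```python
# A minimal sketch of narrow-AI transparency: a logistic regression
# whose learned weights can be read off directly. Features and data
# are hypothetical, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "home_owner", "years_of_credit", "prior_defaults"]

# Toy training data: one row per past applicant, columns match `features`.
X = np.array([
    [85.0, 1, 12, 0],
    [42.0, 0,  3, 1],
    [63.0, 1,  7, 0],
    [30.0, 0,  1, 2],
])
y = np.array([1, 0, 1, 0])  # 1 = was a good credit card candidate

model = LogisticRegression().fit(X, y)

# The transparency: each input carries one readable weight, so you can
# see how much importance a given factor gets in the decision.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```

Because each factor maps to a single inspectable weight, a sensitive input can be examined, removed and the model retrained, exactly the kind of adjustment described above.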

Anna Gressel: So let's pivot now, Katherine, and talk about large language models, which are really quite different on that front. Large language models are considered to be much more like the true black boxes we've always heard about with respect to AI.

Katherine Forrest: Okay, we're going to call that also the opposite of transparent.

Anna Gressel: Yeah, exactly. And that's really based on the architecture that undergirds these models. They're built using a neural network. And, you know, I've always drawn on my neuroscience background here, but that's the kind of AI architecture that was originally designed to try to mimic the structure of the human brain.

Katherine Forrest: And they give the different parts of the neural network different little names. There's the input layer, then, I love this one, what are literally called hidden layers, and then an output layer.

Anna Gressel: Yeah, and in a large language model, or any kind of large neural network actually, the inputs or data get related to each other with particular weights. But you can't really see those weights, or determine the relationships the model is making, or even why it's making them, in the way that you can pretty easily determine contribution or weighting with a narrow AI tool.
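For contrast, here is a minimal sketch, in plain NumPy, of why even a tiny neural network resists that kind of inspection. The layer sizes and weight values are arbitrary stand-ins, not any real model.

```python
# A tiny network with one hidden layer, to show the shape of the problem.
import numpy as np

rng = np.random.default_rng(0)

W_hidden = rng.normal(size=(4, 16))  # input layer -> hidden layer
W_output = rng.normal(size=(16, 1))  # hidden layer -> output layer

def predict(x):
    hidden = np.tanh(x @ W_hidden)   # 16 hidden values with no
    return hidden @ W_output         # human-readable meaning

x = np.array([85.0, 1.0, 12.0, 0.0])  # the same four inputs as before
print(predict(x))

# Unlike the regression sketch above, none of these 80 weights maps to
# a factor like income; every input flows through every hidden unit, and
# real LLMs have billions of such weights across many layers.
```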

Katherine Forrest: And it makes sense that regulators, and lawmakers as well, care a lot about transparency, because when models are making decisions that impact humans, they want there to be a way for the user, the deployer of the model and the impacted person to all understand the primary factors that go into making a particular decision.

And the concept of transparency is built really directly into most of the AI standards and principles that you now see around the world.

Anna Gressel: I mean, yeah, traditionally with issues like loan underwriting, for example, there are actually obligations to provide explanations to people about how loans are given and the particular factors that contribute to decisions. And the White House Executive Order really embeds the concept of transparency directly. And so does the EU AI Act. Many state laws are doing that with respect to issues like algorithmic bias. So transparency is really one of those core concepts we see in the AI regulatory space.

Katherine Forrest: But LLMs have really changed the game with regard to transparency.

Anna Gressel: Definitely. In short, with LLMs, you often can't understand why they make the decisions they do in the same way you could with some of the tools that were used in the narrow AI space.

Katherine Forrest: But importantly for our listeners, the regulators and lawmakers haven't dropped the concept of transparency or the requirement for it. What they've done is they've changed how it's implemented and how it's evaluated.

Anna Gressel: That's exactly right. And there are major questions about what transparency requirements mean or even should mean in the context of LLMs. For example, LLM transparency might come in the form of asking, what did you intend the model to do? What data was it fed? And what was the output?

Katherine Forrest: And when you examine the output, a big part of transparency today is ensuring, as we were also doing with narrow AI, that the model is not exhibiting bias. And one way of determining that is to test the model in a variety of situations, red teaming it to see whether bias shows up in the outputs.
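Here is a minimal sketch of one way that kind of red-team bias test can look: matched prompts that differ in a single attribute, with the answers compared across variants. The `query_model` function is a hypothetical stand-in for whatever interface the deployed model actually exposes, and the prompt template is invented for illustration.

```python
# Red-team sketch: probe a model with prompts that differ only in one
# attribute, then compare the answers for unexplained divergence.
TEMPLATE = ("Should this applicant, a {attr} with a stable income, "
            "be approved for a credit card? Answer yes or no.")

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for the real model API call.
    raise NotImplementedError

def red_team(attributes: list[str]) -> dict[str, str]:
    # Collect the model's answer for each matched prompt variant.
    return {attr: query_model(TEMPLATE.format(attr=attr)) for attr in attributes}

# Diverging answers across otherwise identical prompts would flag the
# model for closer bias review, e.g.:
# answers = red_team(["45-year-old man", "45-year-old woman"])
```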

Anna Gressel: In the generative AI context, there's also much more of a focus on system cards, almost like a nutrition label that helps explain the model's capabilities and limitations. That can be important, particularly in the context of general purpose AI models, which are often licensed by other actors and tuned for specific purposes. And the idea is that the licensee should have some transparency around how the model should or should not be used.

Katherine Forrest: Right, so that's another kind of transparency that we now have for LLMs.

So the bottom line for our listeners is that transparency in the generative AI context is now about looking at the beginning and the end of the process and really not so much trying to figure out how a particular factor or input is weighted.

All right, Anna, I think that's all we've got time for right now. Let's go ahead and sign off for the week. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel.

Well, thank you all for joining us. Bring your coffee, come on a walk and listen to your favorite AI podcast with us.

