
Accuracy vs. Fairness in AI

In this week’s episode, Katherine introduces the infamous accuracy versus fairness problem in AI, while pointing to some recent research developments that could show promise in identifying bias in models.


Katherine Forrest: Hey, good morning, everyone, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest, and today you've only got Katherine Forrest. We have sort of an unusual situation where Anna is actually in Abu Dhabi right now, doing what she likes to do best, as we all do, which is talking about AI. So, I'm going to be talking this morning about a really important area in machine learning and all forms of artificial intelligence, which is the trade-off between accuracy and fairness in AI. This is an area we introduced in one of our very first episodes: models can be accurate, doing exactly what they're supposed to do, and still produce some unfair outcomes. So, we're going to go through that a little bit today.

So, it's been a notorious problem. Originally, when models were developed, the initial push was to get predictive outcomes that accurately reflected the data set. That was an enormous effort, and it eventually succeeded. But researchers then realized that when they optimized for accuracy on a data set, and that data set had issues, there could be impacts on fairness.

Let's go through this a little bit. First, let me give you an example, and the example I want to use is a run-of-the-mill credit card application. I want to talk about the different kinds of characteristics that can go into that: where somebody lives, their marital status, their educational status and their gender. Racial categories might also find their way in indirectly, based on any number of differential characteristics such as zip code or educational institution. You might have a loan loss history. You might have all kinds of indicia of prior debt history, et cetera, et cetera, et cetera.
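
To make the example concrete, here is a minimal sketch of the kind of application record being described. All of the field names and values are hypothetical, chosen only to illustrate how a proxy like a zip code can sit alongside protected characteristics and credit history in a single record.

```python
from dataclasses import dataclass

# A hypothetical credit-application record illustrating the kinds of
# characteristics discussed above. Every field name here is illustrative only.
@dataclass
class CreditApplication:
    zip_code: str            # geographic feature that can act as a proxy for race
    marital_status: str      # e.g., "single", "married"
    education: str           # e.g., "high_school", "bachelors"
    gender: str              # a protected characteristic in many jurisdictions
    prior_defaults: int      # loan-loss / default history
    outstanding_debt: float  # prior debt load
    paid_on_time: bool       # label: did the applicant repay within the window?

# A single toy record, purely for illustration.
app = CreditApplication("10027", "single", "bachelors", "F", 0, 3200.0, True)
```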

So, you've got a variety of characteristics that are part of a data set. That data set is the normative world for the AI tool, the world it tries to make its predictions from. If that normative world has embedded biases in it, structural inequalities from the history of a particular organization, a particular set of loan files, or the history of the United States of America, for whatever reason, then that data set is still all the AI tool understands of the world. So, it will make predictions off of that data set.

So if you say to that tool, okay, we've got an architected AI tool and we want it to predict from that data set the applicant most likely to pay off their debts within x period of time, whether it's 60 days or 90 days or one year or every month, whatever it is, give us the profile, it will spit out results based on what it understands the profile to be from the data set it has. The problem is that that accuracy, the ability to take that model and make it work the way it's been architected to work, can result in unfairness. Unfairness because the data set may not be diverse and may not reflect our current society.
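
As an illustration of how a model can be accurate overall and still treat groups very differently, here is a short sketch on synthetic data using scikit-learn. Nothing here reflects real lending data; the historical disparity is deliberately baked into the toy labels to show how bias in a data set flows through to approval rates even when the protected attribute itself is never given to the model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic, illustrative data only: a historical disparity between two groups
# is baked into the labels, so a model that is accurate overall reproduces it.
rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                          # protected-attribute stand-in (never a model input)
income = rng.normal(50 + 10 * group, 15, n)            # correlates with group in the historical data
debt = rng.normal(20, 5, n)
repaid = (income - debt + rng.normal(0, 10, n)) > 25   # label reflects the same disparity

X = np.column_stack([income, debt])                    # only facially neutral features are used
model = LogisticRegression().fit(X, repaid)
pred = model.predict(X)

print("overall accuracy:", round(accuracy_score(repaid, pred), 3))
for g in (0, 1):
    print(f"group {g} approval rate:", round(pred[group == g].mean(), 3))
```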

For instance, maybe it's a data set that's several years old and doesn't reflect, in fact, the diversity of male versus female applicants. Maybe it's a data set that does not take certain zip codes into consideration in an appropriate way. Maybe it's a data set that doesn't reflect the diversity in terms of racial categories of different applicants. And so, you can have a variety of data set issues that can result in the model making a prediction that is accurate but not fair.

And there are ways today of tackling this fairness versus accuracy problem. One of them is looking hard, of course, at the data set and trying to ensure that it accurately reflects a diverse population. But there are other methods that model developers are applying right now, both before and after the models are built. And we recommend that listeners review a recent survey in this field called “Fairness issues, current approaches and challenges in machine learning models” for more information.
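
One family of methods applied after a model is trained can be sketched very simply. The function below is a purely illustrative post-processing example: it picks a separate approval threshold per group so that approval rates come out roughly equal. The group labels, scores and target rate are hypothetical stand-ins, and real mitigation choices involve legal and policy judgment well beyond this toy.

```python
import numpy as np

def equalize_approval_rates(scores, group, target_rate):
    """Approve applicants using a per-group threshold chosen to hit target_rate."""
    approved = np.zeros(len(scores), dtype=bool)
    for g in np.unique(group):
        mask = group == g
        # Threshold at the (1 - target_rate) quantile within each group.
        threshold = np.quantile(scores[mask], 1 - target_rate)
        approved[mask] = scores[mask] >= threshold
    return approved

# Toy usage with hypothetical model scores for two groups.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
scores = rng.beta(2 + group, 2, 1000)   # group 1 scores skew higher in this toy
approved = equalize_approval_rates(scores, group, target_rate=0.5)
for g in (0, 1):
    print(f"group {g} approval rate:", round(approved[group == g].mean(), 3))
```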

But it's really hard to avoid accuracy versus fairness issues unless you're really thinking about them, and it's something regulators are highly focused on. They're focused on the fact that you can have a model that's working accurately but producing unfair results. And a word to the wise: it is not an answer to regulators that your model is working accurately. With algorithmic bias, regulators today are saying you've got to do better. You've got to make your model not only accurate but also fair, particularly where it's producing unfair results for protected categories.

And we talked about this a little bit a few episodes back: there's also an overlap between fairness and interpretability, and that overlap is strong and getting stronger. What do I mean by that? Well, interpretability is a growing field of AI research right now that is trying to unpack exactly how the circuitry of a neural network leads to particular predictions from a model. And an interpretable model is one that lets developers go back into the model and try to understand why it's coming out with certain predictions.
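
Mechanistic interpretability of the kind described here digs into a network's internals, which is beyond a short sketch. As a much simpler stand-in for the same underlying question, which inputs is the model actually leaning on, here is a permutation-importance example on synthetic data. The feature names, including the zip-code-derived proxy, are hypothetical, and this is not how frontier-model interpretability research is actually done.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)

# Illustrative data: "zip_score" stands in for a zip-code-derived feature that
# acts as a proxy; we ask how heavily the trained model leans on it.
n = 3000
zip_score = rng.normal(0, 1, n)
income = rng.normal(0, 1, n)
repaid = (0.8 * zip_score + 0.4 * income + rng.normal(0, 1, n)) > 0

X = np.column_stack([zip_score, income])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, repaid)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(model, X, repaid, n_repeats=10, random_state=0)
for name, imp in zip(["zip_score", "income"], result.importances_mean):
    print(f"{name}: importance {imp:.3f}")
```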

Anthropic recently published a major breakthrough in interpretability, which allowed them to map certain features or patterns in Claude to human-interpretable concepts, and one of the features they looked at corresponded to the Golden Gate Bridge. And so, interpretability and additional research in interpretability may turn out to play a key supporting role in debiasing models.

So, what I would suggest to our audience today is, when you're looking at your models, think about both accuracy and fairness. Understand that fairness is actually a requirement for many regulators right now when it comes to decision making that impacts humans. And watch carefully the developments in the area of interpretability, and follow some of the very interesting research that Anthropic is doing.

All right, that's all we've got time for today. We'll have Anna, I hope, back with us from her far-flung travels next week. I'm Katherine Forrest and we'll talk to you again next week. Signing off.

