
Human-Centered AI

Katherine Forrest and Anna Gressel explore the concept of human-centered AI, discussing how AI systems can be designed with the fundamental goal of prioritizing human needs and enhancing our daily lives.

  • Guests & Resources
  • Transcript

Katherine Forrest: Good morning, everyone, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel.

Katherine Forrest: And Anna, we are finally on the same continent again.

Anna Gressel: We are! So exciting. Since the last episode, I headed from Rome to Abu Dhabi, where I was talking to a lot of folks about AI, actually, Katherine.

Katherine Forrest: So, tell us a little bit about what's happening in Abu Dhabi these days in terms of AI before we get into our topic for today.

Anna Gressel: I mean, I have to say it was amazing to be there. Just the sheer excitement about AI, the focus on it, the investment in it. It's a country that's really playing big in the AI space, and it was wonderful to meet people there and talk with them about these really important and exciting technologies.

Katherine Forrest: All right, there's so much interesting work going on around the world right now on AI. So let's turn today to an issue that isn't specific to any one part of the world, one that really does impact people everywhere: the phrase “human-centered AI,” and the concept behind it.

Anna Gressel: Yeah, definitely, Katherine. So human-centered AI is, at its core, all about designing AI systems that keep the interests of humans, whether end users or those otherwise affected by the system, top of mind. And that means building AI that helps us and doesn't hurt us.

Katherine Forrest: That's right. Some people would say that a human-centered AI system doesn't detract from, but instead contributes to, human flourishing. And there's a scholar named Asu Ozdaglar at MIT who's a notable voice in this regard.

Anna Gressel: Yeah, that's right, Katherine. She used that framing around human-centered AI in her recent presentation at MIT's EmTech Digital, which we recommend our listeners catch up on if they're able. And we should note that there are a ton of voices in academia all across the world with insights on this topic. Besides the work coming out of MIT and its Schwarzman College of Computing, we could also point to Stanford's Institute for Human-Centered AI, or any of a large number of labs in computer science, and increasingly in the humanities, that aim to build or specify human-aligned, human-centered AI systems and other technologies.

Katherine Forrest: And let's make this point about human flourishing more precise, because it's really at the core of human-centered AI. When folks like Ozdaglar say “flourishing” in the context of AI, they're thinking about AI augmenting and amplifying our creativity in our work. And here's a possible example of that.

Let me just give you a little scenario. There I am, sitting down to write something late at night, and let's just imagine for a moment that I don't really like the way a certain paragraph is coming along. In the past I might have sent it to a colleague for input and edits, but it's late at night. I mean, maybe really late at this point. So no one's going to see this until the morning. So I take the paragraph, I copy it and I put it into a chatbot — or I read it aloud into a chatbot — and I get immediate input from the AI. Maybe it's got great suggestions, maybe it doesn't. But in any case, I'm getting actual insights from the AI tool through the chatbot, whether written or spoken, right away. And that allows me to move on: to actually progress that piece of writing, or to turn to another task if I'm content with some of the suggestions it's given me. So that's really what we mean. We mean using the technology to assist humans in accomplishing tasks that allow them to flourish.

Anna Gressel: I think that's right, Katherine, and it's worth noting that there's an economic angle to all of this as well, since so much of flourishing can really be put in terms of labor. Is AI going to augment workers, or is it going to replace them? That question and others like it are not academic or ethical debates these days. Companies are actually thinking about these issues in the race to adopt AI, and to find the right use cases for their businesses.

And especially with the EU AI Act’s heavy focus on human-centered AI, designing and deploying systems along these lines is now also becoming a real legal concern for companies.

Katherine Forrest: Right, and to go a little further on that economic point: Stanford's Institute for Human-Centered AI has a lot to say on this. It publishes an annual AI Index Report, which this year runs about 500 pages, and it has a data-driven section that really looks at how AI can make workers more productive and lead to higher-quality work.

Anna Gressel: Katherine, I want to ask you a question. As an ethicist, a former judge, and a frequent writer on AI, I'm sure you've spent a lot of time thinking about our relationship as humans to AI and our role in steering AI systems. So can you tell our listeners a little more about that and your views on those issues?

Katherine Forrest: Well, there's a lot to say, but if I were to give a short snippet of a much longer talk, I'd say that AI has already shown its ability to save human lives through things like novel drug discovery and medical diagnostics — but it's also shown that it can be used to undermine democracy through things like misinformation and disinformation. And we humans are its progenitors. We're the architects of its models. We decide how and what to train them on. We are the origin of their data. And we have to proceed with great care, with our concepts of what's right and what's wrong, and what can help and what can hurt, always at the forefront.

Anna Gressel: So, it's safe to say that there is legal, ethical, and to some extent, economic interest in designing and deploying human-centered AI systems. Some of that work takes place in the development stage. Think about aligning AI with human values, which we've talked about before. But some work towards ensuring human-centered AI can only occur after an AI system has been deployed.

Katherine Forrest: Concretely, these systems should be monitored, overseen and governed in a way that focuses on their human-centric capabilities and on the ways in which they might be deviating from human-centric functionality.

Anna Gressel: And Katherine, I think there's this really interesting AI accountability policy report from the NTIA, an agency within the Department of Commerce, which highlights these kinds of accountability concerns. I think it's fantastic. I often include some of the figures from that report in my slides when I give talks, because it's so comprehensive in how it thinks about accountability, and also about some of the regulatory requirements, for example, for keeping a human in the loop when it comes to AI decision making.

Katherine Forrest: All right, we're going to return to those questions, particularly the human-in-the-loop, in some future episodes. But for now, that's all we've got time for today, Anna. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel. Before you hear from us again next week, make sure to like and share the podcast if you've been enjoying it.

