The Rise of Deepfakes

In this week’s episode of “Waking Up With AI,” Katherine and Anna discuss the rise of deepfakes, the risks they pose, and the regulatory and legal responses they are triggering.


Katherine Forrest: Good morning, everyone, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: And I am Anna Gressel.

Katherine Forrest: And hey, Anna, I'm back from a couple of weeks of traveling and talking to a number of different kinds of audiences on AI.

Anna Gressel: I know, it was a whirlwind. You were in like seven hotels and seven cities or something like that.

Katherine Forrest: Yeah, like nine hotels I think, I lost count. And as they say, it's all glamour. But actually, I had fun. It was exhausting, but I was speaking to a lot of different kinds of audiences, sometimes two in a day about AI. One topic that I ended up speaking about a lot, that I thought would be good for our podcast, was about deepfakes. And I was talking to a bunch of folks about how realistic they've become. You know, two years ago, you could really sort of spot them a mile away.

Anna Gressel: Yeah, it's a different world today. In some respects, it's what people have been predicting would happen, but now we're basically there. So, let's start at the beginning, get our audience up to speed on these issues. So first, let's get grounded on what a deepfake is. It's basically a kind of synthetic or fake media made to look or sound real.

Katherine Forrest: When you say media, what do you mean by that?

Anna Gressel: It's a good question. So, a deepfake could be a still image, a video, or an audio. It's essentially using AI to create fake videos, fake audios or video that has audio. Really, these capabilities are all bundled together now.

Katherine Forrest: You know, one thing that I think is really interesting is that deepfakes as a concept, and actually as a technology, started off as using a short clip of either a video or an audio of a real person and then using AI to be able to extract how that particular person would look while saying or doing something that he or she never said or did. Just taking a real image and changing it in some fundamental way.

And I remember talking to audiences about this in 2019 and describing a time when I thought that candidates for public office would be targets of fake videos and audios, having them saying and doing things that they didn't ever do. And that could really impact their electability and actually impact the democratic process. But it's all happened so much more quickly than I even had thought and really in a much more sophisticated way.

Anna Gressel: I mean, the election issue is a real one and we are really seeing deepfakes come up in that space, but we're also seeing a number of other use cases. So deepfakes are used to create pornographic scenes that never actually occurred. Sometimes even those involve well-known movie stars and they're fighting back against that right now actually. And sometimes ordinary people, you know, just in life are maliciously targeted.

Katherine Forrest: There are instances that I was talking about with some of my audiences of images or audio in the law enforcement context that are starting to create some real issues when people are added to or taken out of a particular actual real video. And so, it can change the nature of criminal prosecutions.

And there are also instances where people have actually passed away and their voices have been recreated or their images have been recreated to say or to do something. And we've actually seen a couple of examples of that in some movies and a documentary not too long ago.

But there are a number of deepfakes of political figures, and there's a pretty well-known one of Barack Obama and Angela Merkel enjoying a day at the beach together, which never happened, probably much to people's disappointment, because it was an amusing one.

Anna Gressel: Deepfakes are really an important issue right now because they can present real-time hazards to how people understand what information is true in the real world. They can distort our shared concept of reality, what actually happened on a particular day, who did what, who said X, Y, and Z. And that's a massive problem and potentially a threat to democracy, particularly around events that can have significant consequences.

Katherine Forrest: One of the things that I was talking to some audiences about, particularly at judicial conferences, was, as I just mentioned a little earlier, the issues in the criminal prosecution context: audio and video that you expect to reveal a true event gets manipulated, so that you end up with an event that is partially true but partially completely fake, all in one.

And when these things are challenged, it can actually undermine a jury's belief in the reliability of video and audio evidence. And that could have real implications in terms of how we try cases.

Anna Gressel: I think that's exactly right. And it's something that some scholars have called the “liar's dividend.” The idea is that in a world of fake evidence, who actually knows what is true becomes a really big question. I think that's a big question for juries. And it's also easy to imagine how the right deepfake at the right time could cause really irreparable damage. Actually, increasingly, we don't have to imagine it. So, take the New Hampshire deepfake case, for example, where voters in the state received a totally artificial, AI-generated robocall, mimicking President Biden's likeness and actually discouraging people from showing up to the polls.

Katherine Forrest: I mean, part of the real problem here, but also, for certain kinds of media, one of the benefits, is that these things are really easy to make. It's a problem when they're used maliciously and in a way that's really undermining the democratic process or doing targeted negative behavior. But there are a number of apps, different kinds of apps, that allow you to make a really very photorealistic-looking and realistic-sounding deepfake.

I actually made a couple of them for showing at some of these conferences and I made one of a guy who was in a suit, and he looks completely real and has a real sounding voice. His mouth and his voice move at the same time. I had him sort of announcing an event about a company that was supposed to impact the stock price just to demonstrate that someone could actually release one of these kinds of deepfakes purporting to have real information about a company that could in fact be fake.

Anna Gressel: So, Katherine, what are the questions to ask to understand the legal risks that are incurred with deepfakes or AI-generated media?

Katherine Forrest: Well, there are really two questions. First, does the generated media resemble a real, existing person? Does it look like a real person and sound like a real person? Because if so, there might be a variety of state laws that are implicated, such as the right of publicity. There could be harm in the form of reputational risk. And then a second question is, is the generated media being used in a misleading or false way? And is that misleading or false use actually generating harm? Then we can start thinking about fraud, we can talk about stock manipulation, we can also talk about things like defamation claims. Right now, the FTC has announced that it would use its Section 5 powers to pursue some of these uses.

Anna Gressel: And on that note, I mean, we've seen bills regulating deepfakes crop up kind of across state legislatures and overseas. And in the US, we're seeing that happen actually in a very domain-specific way. So, a lot of states have woken up to the dangers that deepfakes pose to the electoral process and election integrity. And they've drafted bills regulating the use of AI-generated media in that area. And then some really try to target this issue of how to provide redress for victims of harmful deepfakes, those are like the pornographic deepfakes that we were talking about earlier.

Katherine Forrest: Right, so you've got a variety now of state laws. We've got the federal government in the form of the FTC using its Section 5 powers. And then also now the FCC has declared that the use of AI-generated voices in robocalls is illegal. So, we're starting to see some real federal and state-level crackdown on deepfakes.

Anna Gressel: Yeah, and on the international side, the EU AI Act, which is in the process of being finally adopted, has certain requirements for deployers of AI systems that can generate deepfakes. And so those deployers would actually have to disclose that content has been artificially generated or manipulated. But this is a really big area of regulatory interest. I'm sure, Katherine, we'll come back to this in the future.

Katherine Forrest: We will, but for now, Anna, that's all the time we've got for today. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel and that's it for this week's episode of “Waking Up With AI.” Like and subscribe and do all those great things if you're enjoying the podcast.


© 2024 Paul, Weiss, Rifkind, Wharton & Garrison LLP
