
Deepfakes After Death

In this week’s episode of “Waking Up With AI,” Anna Gressel considers audio deepfakes and the complex legal and regulatory questions that arise when a deepfake uses the likeness of the deceased.


Anna Gressel: Hey, everyone. Good morning, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Anna Gressel, and I don't have Katherine with me today, but we have a lot of interesting things to talk about, starting with a few big policy developments. I don't think we'll cover them in detail, but I wanted to mention we're sitting here at the end of January 2025 and it's a real moment from the AI policy perspective.

The Biden administration's executive order is officially out; it's been rescinded. And now we have a Trump administration executive order focused on US innovation and competition. And under that executive order, the leadership of the new administration has 180 days to essentially decide what to keep and what to get rid of from the last administration. So we're going to be in a period of flux for a while. And this tells us that it's possible we'll see some major policy developments even before that 180-day period elapses. But it essentially sets the clock running. So for folks doing their 2025 AI calendar planning — and I know that's a lot of our audience — set your reminders that this summer will likely see some really interesting developments on the US side of the pond in addition, of course, to some major milestones with the EU AI Act. 2025 is definitely off to an interesting start.

But putting that aside for the moment, we're turning back, today, to an important legal issue, that is, likeness rights related to people who are deceased. And this isn't like bringing people back from the dead, although I know we could have done a whole Halloween episode on that. But today we're going to talk a little bit about audio deepfakes. I mean, these could be any kind of deepfakes, but let's just take audio as an example for today, which are really not new technologies. And I know Katherine wrote about some particularly thorny scenarios on audio deepfakes all the way back in 2021 for her column in the New York Law Journal. And specifically, she was focused on scenarios where someone's voice was made to say something they didn't say after they had passed away.

So that was before Katherine and I worked together, but we were friends and going to conferences at the time. And I remember that Roadrunner, that 2021 documentary about the late Anthony Bourdain's life, had just come out and the director revealed that a few of the clips of Bourdain speaking were AI-generated. The clips of Bourdain speaking happened to be vocalizations of text he'd already written. That's not quite as egregious as making up new content, but it is still fake, and the audience didn't know ahead of time that there were any AI-generated clips. More recently, a Mexican beer brand came out and started offering customers the chance to create deepfake short videos of their deceased loved ones in honor of Mexico's Day of the Dead. So that’s been prompting all kinds of questions about whether there should be safeguards to prevent bad actors from inappropriately using those services to impersonate and defraud people.

Stories like these raise so many interesting questions from a legal and regulatory perspective. Historically, US states have had varying degrees of protections for what they call publicity rights: basically the right to use someone's voice, image or likeness, usually in a commercial context. A lot of those laws typically come up when you need a celebrity's permission, for instance, before you show them endorsing a product. And the exact nature of those protections isn't the same across states. This is like a real 50-state patchwork. But the right of publicity doctrine came into being way before any of these deepfakes hit the scene. And it's worth noting that some states have specific right of publicity laws covering deceased persons, which can differ in really material ways from their general right of publicity laws. We're also starting to see a number of states like Tennessee, through its ELVIS Act, creating specific digital replica laws that try to make clear that AI recreations of someone's voice or likeness are still covered under the right of publicity protections I just mentioned.

So that is just one piece of the puzzle governing how we use things like voice, and the kinds of legal issues that can come up in that context. But it's worth noting that many states have laws governing how companies can handle biometric data. And sometimes, not always, but sometimes, biometric data is actually used to generate deepfakes and recreate people's voices. So it's an important piece of that legal puzzle. Many people in our audience know Illinois's BIPA very, very well. But just for the folks who don't, individuals can actually sue under that state's biometric law for the use of their biometric information without their permission in certain situations. And if the person you're making a deepfake of is deceased, you'd have to think about how those rights, like right of publicity laws or data privacy laws, apply differently.

So these state laws and other laws raise really tricky and interesting questions about when the act of analyzing someone's face, voice or likeness in order to generate a deepfake could raise concern. Imagine if someone used a deceased person's voice to confess to crimes they had nothing to do with. That would raise all sorts of questions, not just about whether the facts alleged in the confession are true. I mean, that's a big bucket right there. But also about whether the estates of the deceased can seek damages for defaming or putting the deceased in a false light. That even assumes it's clear to the court that it's a deepfake. You could easily imagine a future where both sides can't even agree on whether or not the confession from the deceased is a deepfake or is real, and the court might lack the tools to tell the difference. We actually did a whole podcast episode on this, on AI-generated evidence. So I would definitely go back and listen to that if you’re interested in this question of: How do you even tell in a court if something is true or false, and what does the law say today about that?

So the law is going to have to adapt here, because the tech is changing really quickly. Anthony Bourdain was, of course, someone for whom there existed hours and hours of video content with his voice. But it's not just celebrities or politicians who are at risk from audio deepfakes. Modern methods can match voices given just a few seconds of sample audio. I mean, let's be honest, I now have hours of audio of myself out there. So, you know, I'm definitely more in the Bourdain category these days. But we're also seeing companies that promise to use AI to create online accounts where the responses are supposed to sound just like your deceased loved one based on interviews they provided prior to their passing. And if adult children tried to do that for a deceased parent who wouldn't have wanted it, it's really unclear who would have the standing to step in and object.

You know, I think this makes us all think back to Sam Warren and Justice Brandeis, who really forced Americans to think about privacy when they wrote about the right to be let alone in 1890 in the Harvard Law Review. And more recently, European privacy regulators talk a lot about the right to be forgotten. But in 2025, we aren't just struggling with how to shape the right to be forgotten, we're wrestling with how we'll want to be remembered.

I think that's all the time we have for today. I'm Anna Gressel. Make sure to like and share the podcast if you've been enjoying it.


© 2025 Paul, Weiss, Rifkind, Wharton & Garrison LLP
