
The Challenges of AI-Generated Evidence

This week, Katherine Forrest and Anna Gressel discuss how courts are grappling with the evidentiary issues raised by AI-generated or manipulated content, such as deepfakes, and how the federal judiciary is considering new rules to address them.


Katherine Forrest: Good morning, everyone, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel.

Katherine Forrest: And Anna, actually, we saw each other in the flesh today because we were at Columbia Law School doing a little AI tech thing.

Anna Gressel: It was so fun; I feel like I haven't seen you in ages. I'm sure we have recently, but it was fun to get together and be on the upper, Upper West Side. And there's a chill in the air from the nice fall weather we're having these days.

Katherine Forrest: It was great, although I had to wander many blocks between 114th Street and 116th Street looking for the entrance. But here we are, and the biggest sign of fall for me is that I teach a class at NYU Law School on quantitative methods as an adjunct. I do it in the fall, and I have only two more classes left, which is a sure sign that we're getting ready for the wintery season. But speaking of change, I thought that today we could talk about a couple of evidentiary issues that courts are starting to run into with AI: the real deal of how courts are actually now encountering certain kinds of evidentiary issues, and how judicial rulemaking is becoming more active.

Anna Gressel: Yeah, that's right, Katherine. Just last week, the advisory committee that proposes rules of evidence for federal courts decided to go ahead with new rules to address AI-based content. And that's super interesting because it tells us that AI-generated content has really taken sufficient hold in our society that the court system views it as appropriate and useful to propose new rules.

Katherine Forrest: Right, and judicial rulemaking doesn't typically move quickly. So you know that a technology has actually had an impact, and that the impact has been recognized, when we're getting ready for some rulemaking.

So let's spend some time today discussing just that: identifying some of the potential issues with what we'll call AI-generated evidence in the courtroom, why these issues matter today to judges, and then actually getting into a little bit of the proposed rulemaking. Maybe we should start with some background, talking about the kinds of content that can be generated by AI and how that content can become evidence. Because not all of it's problematic.

There's a lot of AI-generated content that comes from generative AI. For instance, a contract might have a generated provision that's perfectly clear and perfectly enforceable; that's technically AI-generated content, and it doesn't present any novel or extraordinary issues. Then we might have a transcript of a conversation, created, for instance, by one of the programs these days with web interfaces that transcribe conversations. The conversation actually occurred, presumably, and the transcript can be an accurate record of it.

So those kinds of AI-generated content in the courtroom really aren't any different from any other kind of evidence you might use or introduce. There'll be some general authenticity issues, and some general hearsay issues in terms of out-of-court statements, if they're being introduced for the truth of what was said. But that's not the kind of AI-generated evidence that's prompting the additional rulemaking we're going to talk about. What we want to talk about today, and this is a really long wind-up, are deepfakes: AI-manipulated evidence, manipulated audio or video content.

Anna Gressel: Yeah, that's right, Katherine. I know we have a pretty great listener base, and I actually love it when people reach out to us; now I have a sense of at least some of the folks in our audience. So I know a lot of you know what a deepfake is, but let's break it down a little for the purposes of our discussion, because there are some categories within it that matter. Let's get a little tighter on the vocabulary we're using here.

So we're going to talk today about deepfakes, which have elements of audio or visual fabrication. An entire photo or video might not be fake, but a person or a thing could have been added to or subtracted from an image, and there are actually a bunch of tools out there that do that these days. For example, you could have a video of a traffic accident where the video accurately captures the accident itself, except the traffic light has been changed from red to green by an AI program. Or a photograph of a person who was actually at the event in question instead shows them in front of a castle in Italy, complete with a time stamp from the time the event occurred. I would like to be in front of a castle in Italy, Katherine.

Katherine Forrest: I knew you were going to say that, because I think you're going to talk about bookstores in Italy very soon. I can hear it coming; we're on the verge of it. But what you're saying, and I want to use an image from childhood and from game playing, is that Colonel Mustard was in fact in the library with the wrench at the time of the murder, but is instead shown drinking wine in front of an Italian castle, which happens to be in front of an Italian bookstore.

Anna Gressel: Exactly, and I will say I was a huge Clue fan as a kid, so I love that that's your example, Katherine.

Katherine Forrest: Right. And as AI technology becomes more advanced, there's a real risk that content like deepfakes or AI-manipulated media will make it even more difficult for courts to discern what's true and accurate. Often you want to go to the video evidence for the actual, real evidence of what happened. But you also want jurors to find reliability in evidence; you don't want the jury to suddenly start questioning whether the things they're being presented with are somehow not actually evidence at all, and to start questioning the judicial process.

Anna Gressel: Right, and I actually think, Katherine, there's a term some folks have coined for that: the “liar's dividend,” the idea that in a world where anything could be real or fake, it's sometimes the liars who benefit. So let's pivot back to the rules of evidence. For our non-lawyer listeners, whom we totally love (we're so thrilled that you listen to us), the rules of evidence govern how a case is established in court, and those rules focus on different types of proof, from testimony and documents to images and video.

Katherine Forrest: Right. And in our judicial system, judges play a primary role in deciding whether, and what types of, evidence can be presented, either to themselves as fact finder or to the jury as the fact finders. That's their gatekeeping function, as it's called.

Anna Gressel: So Katherine, do you want to break down some of the issues that are really arising at the crossroads of AI and evidence?

Katherine Forrest: Yeah, so let me just throw out a few examples. Of course, evidence is only an issue if it's relevant to a fact at the heart of the dispute. There has to be a question about where Colonel Mustard was at the time in question for his whereabouts to be a relevant question, or a question about whether the traffic light was green or red for that fact to matter to the issue in dispute. If those issues aren't in dispute in a case, then a photo of Colonel Mustard in Italy, or Colonel Mustard being removed from the library with the wrench, or the color of the traffic light would be totally irrelevant. It really should be excluded on that basis alone; we'd never get to the harder questions, and we could stop right there.

But let's assume those issues are in dispute in the two cases: you've got an individual who you think was supposed to have been in one place, and you're being shown that they were in another; or a light that you thought was supposed to have been one color, and you're being shown another. So then we move on to questions of admissibility, and one of the key issues for admissibility is: is the thing what it purports to be? Is it a reliable rendition of the thing it purports to be?

So is it in fact a true and correct photograph of Colonel Mustard in Italy? Or is it in fact a true and correct photograph or video of the traffic accident? Is it reliable? This brings questions of authenticity and admissibility to the forefront. And as manipulated evidence becomes more of an issue for courts, we can expect parties to bring these questions to the court in a variety of ways. Certainly they're going to question whether evidence has in fact been manipulated, and potentially bring in experts who will testify that yes, it was manipulated, or no, it was not.

Anna Gressel: Yeah, and we know that not everything that is authenticated is necessarily admissible.

Katherine Forrest: Right. When I was a judge, we dealt with admissibility and authenticity questions all the time; that's what judges do. In a trial I'd be presented with hundreds of questions about the admissibility of evidence, and it can get pretty complicated. But let's take the example of the AI transcription tool we mentioned earlier, and assume an AI tool makes a transcript of a conversation, and that transcript is in fact a true and accurate transcript of that conversation. If someone wants to use that transcript for the truth of what was said, hearsay comes into play (and there are some complicated questions about whether it's a party admission). Hearsay concerns statements made outside of court, and there are really complicated rules around when out-of-court statements can be admitted, which is to say, when a transcript can be admitted for its truth. The point is that you cannot always admit a statement somebody made outside of court, even when you've got an accurate transcript of it. You have to actually go through the hearsay rules.

Anna Gressel: Yeah, and in the context of AI-manipulated evidence, there's something developing in the courts, which I know you and I both talk about sometimes with the judiciary, related to the authenticity point you just made. It's called the deepfake defense, or at least that's what some people are calling it today. Basically, it's a claim by a party to a case that incriminating evidence is fake or false.

Katherine Forrest: And that would be the claim that Colonel Mustard was not in fact in Italy; that he was in the library with the wrench, but has been put into a timestamped photograph in Italy that makes it look like he was not. That would be an example of an AI-manipulated piece of evidence. We're using this funny example from a game, but seriously, there are real examples starting to happen now in courts where manipulated evidence is having a real impact.

Anna Gressel: Yeah, absolutely. We're seeing this start in some family law cases, where parties have fabricated evidence against their family members, and in some criminal cases, like the January 6 cases, where folks have effectively argued that footage of them was a deepfake.

Katherine Forrest: And in addition to the authentication issue, we're already seeing judges face the relevance issues. For example, a Washington state court judge in March excluded AI-manipulated video footage in a murder case, reasoning in part that it would result in a very time-consuming trial within a trial.

Anna Gressel: Right, and part of the challenge courts have with AI-generated or manipulated evidence is that the courts themselves can't tell what's real, fake or changed using AI. So they need to rely on experts for that. And sometimes the experts need to rely on state-of-the-art tools to detect what is real and what is not.

Katherine Forrest: And there's a bit of a whack-a-mole aspect to it, where you're trying to stay one step ahead: the technology is trying to detect deepfake evidence while the deepfake evidence becomes ever more sophisticated. And there aren't standard tools yet. That means there's no one tool used by the industry, or accepted by the legal community or the government, not to mention parties in a civil lawsuit, that's agreed to carry the Good Housekeeping seal of approval, so to speak, for detecting AI-generated or manipulated evidence.

So this means that courts have to hold hearings to determine the reliability of certain kinds of evidence, where they take in evidence about different techniques for determining whether or not something's been manipulated. Sometimes it's a one-day hearing, sometimes a couple of hours, sometimes a couple of days, all so the court can execute its gatekeeping function. And there was actually a New York State Surrogate's Court decision recently that highlighted how important that function is in trying to figure out what's real and what's not.
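(A quick aside for the technically curious: below is a minimal, hypothetical Python sketch of the kind of first-pass metadata triage an examiner might run before reaching for heavier forensic tooling. It assumes the Pillow imaging library is installed, and the exhibit file name is made up for illustration. It is emphatically not a deepfake detector; metadata can be forged or stripped, and real detection work relies on the specialized expert tools discussed above.)

```python
# A first-pass EXIF triage sketch, assuming Pillow ("pip install Pillow").
# It only surfaces surface-level red flags of the kind an expert would
# examine far more rigorously; it proves nothing on its own.
from PIL import Image

def triage_image(path: str) -> list[str]:
    """Return surface-level red flags found in an image's EXIF metadata."""
    flags = []
    exif = Image.open(path).getexif()
    if not exif:
        # Editing and AI tools often strip metadata entirely on export.
        return ["no EXIF metadata (commonly stripped when a file is re-exported)"]
    software = str(exif.get(305, ""))   # tag 305 = Software; cameras also write
    if software:                        # firmware names here, so this is
        flags.append(f"written by software: {software!r}")  # context, not proof
    saved = exif.get(306)               # tag 306 = DateTime (last file save)
    captured = exif.get_ifd(0x8769).get(36867)  # Exif sub-IFD: DateTimeOriginal
    if saved and captured and saved != captured:
        # A save time differing from capture time is consistent with
        # (though by no means proof of) post-capture editing.
        flags.append(f"saved at {saved} but captured at {captured}")
    return flags

if __name__ == "__main__":
    for flag in triage_image("exhibit_photo.jpg"):  # hypothetical exhibit file
        print("red flag:", flag)
```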

Anna Gressel: That's right. And before we discuss the proposed new Federal Rules of Evidence, which, as the name suggests, apply to evidentiary issues in federal courts, Katherine, can you tell our listeners briefly: what is the process for creating or amending rules of evidence at the federal level?

Katherine Forrest: Well, it's a lengthy and very careful process. You want to get a lot of stakeholders together and make sure that when you amend a rule of evidence, you're doing it in a way that's really thoughtful and looks at things from all angles. It starts with the Judicial Conference, which is the main policy-making institution within the federal judiciary. The Judicial Conference is composed of federal judges, and among its other committees it has one known as the Standing Committee. The Standing Committee has five advisory committees, composed of judges, lawyers and different kinds of thought leaders, and one of those five advisory committees deals with the rules of evidence.

Anna Gressel: And what happens within that advisory committee for the Federal Rules of Evidence?

Katherine Forrest: Well, the advisory committee studies proposals to change the rules, which include actual language and concepts, and it debates them. If the advisory committee agrees to take up a proposal, it can then seek public comment, and often there is a lot of public comment. After that, if the advisory committee approves the proposal, the Standing Committee reviews it. If approved there, the proposal may then go up to the U.S. Supreme Court. And if the Supreme Court approves it, the proposal eventually takes effect unless Congress intervenes.

Anna Gressel: That is like a lot of approvals, Katherine.

Katherine Forrest: It's definitely bureaucratic. And of course, you really want it to be a thoughtful process that goes through multiple steps, but it's not fast. So when you've got rules actually being generated now with regard to deepfakes, it's going to take some time, but it also tells you how important these issues are becoming to the court system.

Anna Gressel: So should we talk a little bit, Katherine, about some of the proposals that are on the table these days? One of the proposed rules would modify Federal Rule of Evidence 901, which concerns authentication. Under the proposal, authenticating evidence generated by an AI system would require evidence describing the training data and software used, and showing that they produced reliable results.

Katherine Forrest: Right, and another proposal would address deepfakes in particular. This proposal would also fall under the same rule, Federal Rule of Evidence, or FRE, 901. Under the proposed rule, if a party challenging the authenticity of computer-generated evidence is able to demonstrate that the evidence has been completely or partially altered by AI, the evidence would be admissible only if it's shown to be more likely than not authentic. A previous version of this proposed rule used a different standard: whether the evidence was more likely than not fake.

Anna Gressel: And my favorite of the ones on the table is a proposed new Federal Rule of Evidence 707, which would apply certain expert-testimony requirements to machine-generated evidence. I just personally find expert issues really fascinating, so for me this one is really interesting. It would be connected with Rule 702, which, as many of our lawyer listeners know, governs expert testimony.

Katherine Forrest: Right, and for those interested in a deep dive on proposals related to AI and evidence, including some of the proposed language itself, we really recommend Judge Paul Grimm and Professor Maura Grossman's papers and articles on the subject, which you can find online.

Anna Gressel: Yeah, and a lot of people think there shouldn't be any changes; there are certainly folks with that view. But now that the committee is moving forward, when can we expect changes, if ever?

Katherine Forrest: Well, even though the proposed rules are moving forward, they're by no means guaranteed. As you saw from the bureaucratic process, there are lots of off-ramps for some of these proposed rules. Next, the proposed rules will be presented at another meeting in May, so that's May of ‘25. If approved, there'll be a public comment period lasting a few months. And then, as I mentioned, the Supreme Court would ultimately have to approve any rules. So we're really talking about quite a lengthy process; I'd be very surprised if we get anything out before the very end of 2025, or even into 2026.

Anna Gressel: Yeah, and in the meantime, we're going to see pretty substantial changes to AI technology and AI detection technology. I mean, we see changes to that and advancements all the time.

Katherine Forrest: That's right. The deliberative process, this whole bureaucratic process, if you will, is useful in that it provides input from stakeholders, but because it's so slow, technological change can actually overtake it. So there are pros and cons: it can be quite careful, but it can also miss the boat. We'll have to see which side these rules, coming right in the middle of generative AI and manipulated AI content, fall on.

But Anna, hey, I think we're out of time. That's about all we've got time for today. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel. If you like the podcast, give us a rating. You know I like those stars. So just my own personal request to y'all.

Katherine Forrest: All right. Thanks, everyone.

Anna Gressel: All right, thanks.
