
Navigating AI-Generated Disinformation

In this week’s episode Katherine Forrest and Anna Gressel explore the nuances between misinformation and disinformation, the role of AI in spreading falsehoods and the innovative tools being developed to combat these threats.


Katherine Forrest: Hey, hello folks, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I’m Katherine Forrest.

Anna Gressel: And I’m Anna Gressel.

Katherine Forrest: And so Anna, this is the first podcast that we've recorded after you've been made a partner at Paul, Weiss. And I want to congratulate you, and I'm sure all of the members of our audience would want to congratulate you. You only joined our firm just over a year ago, but you have absolutely blown us away with your knowledge and your judgment in the AI field, as well as complex commercial issues. And so I am so honored and proud to be able to call you my partner now.

Anna Gressel: Thanks, Katherine. It's still surreal, because we're actually recording this before it's public, so it's kind of exciting. I just feel so lucky. We have the best clients, hands down the best clients, and it's just a pleasure to work with them and with you every day. And so I couldn't be more thrilled.

Katherine Forrest: Well, now that we've done that, we'll just get back to work.

Anna Gressel: Should we go to something that's a little bit of the opposite of good news and talk a little bit about the prevalence of AI-generated disinformation today?

Katherine Forrest: An important topic, yeah, for sure. It's safe to say that AI-generated disinformation has been such an important topic in 2024, which was just a huge election year worldwide. Not just in the United States, where we were all very aware of it, but there have really been elections all over the world. More than two billion people were eligible to vote in dozens and dozens of elections in many different countries, and that's almost half of the world's population.

Anna Gressel: Yeah, I mean, it's such an amazing thing to have that happen in one year. And it makes you realize that there's been this opportunity to vote as part of a political process that has been happening worldwide. And it's so different from where the world was, for example, at the beginning of last century or centuries ago.

Katherine Forrest: It's really extraordinary, and today's topic is the impact of AI on the electoral process, in particular how information is disseminated and how the truth or accuracy of that information can really affect the electoral process. People have been wondering whether AI could turbocharge the spread of fake news or even undermine democracy itself. It's really a fear of the fraud that AI can generate when it's put into the wrong hands, or used by people who want to misuse it and turn it toward information. So both regulators and the industry have been getting nervous about this, and because of that I want to digress for a brief moment into one of my many other favorite topics, this one called discursive democracy.

Anna Gressel: Ooh, I love it. I can't wait to hear about it. That's a big word, do you want to break it down for folks?

Katherine Forrest: It's a big concept, but it's really at the fundamental basis of our country. Discursive democracy is the idea that dialogue, conversation and communication inform the electorate. In a representative democracy, where a few elected representatives are actually representing the interests of a much larger population, it's critical that those representatives understand the population's real concerns, and that the population can understand where the person running for office stands on those issues, so that an election properly and appropriately matches the two. So you have a discourse, if you will, a discourse that informs democracy. In essence, if we don't have accurate information about the events around us or about the positions people are really taking, then we are undermining the concept of discursive democracy at its core.

Anna Gressel: Yeah, and we've talked about deepfakes in prior episodes, but that was before we lived through this election cycle, in which deepfakes and all kinds of AI-powered electoral shenanigans have been at the forefront.

Katherine Forrest: Right, and let's talk about two different words that have been used, I think, over the course of the election cycle. One is the word disinformation, and the other is misinformation, and we'll separate the two. Misinformation is information that's just inaccurate or wrong. For instance, when the person who sat next to me on a train once explained that there's a secret cabal of aliens who control the UN. And that's a true story.

Anna Gressel: That is why I like the quiet car in Amtrak.

Katherine Forrest: You do like the quiet car, although I'm not sure that you're always 100% quiet. I think you're the person who gets shushed, but we live and we learn. The point here is that that person who was giving me information about the secret cabal of aliens controlling the UN was giving me misinformation because they didn't mean or intend to spread falsehoods. They just had their facts wrong.

Anna Gressel: Yeah, and disinformation, on the other hand, is the intentional spread of falsehoods. It's when you mean to crowd out the truth with a lot of fake information, and it becomes particularly effective when the falsehood is constructed to grab folks' imagination and feed into their old assumptions. So when we talk about disinformation, we're often talking about motivated individuals or groups spreading information that is false with an intent to deceive.

Katherine Forrest: Exactly. So misinformation is simply mixing up, for instance, the model of a car that's speeding away, or whether the light was red or green. That's misinformation: there is a truth there, and the truth has been gotten wrong, but not intentionally. Disinformation has a kind of maliciousness to it, a kind of intentionality, and it has the potential to really coarsen a country's politics and divide people over issues that aren't necessarily as divisive as that disinformation makes them seem. So it's one thing to leave people without the true facts unintentionally, but it's something else entirely when you're intentionally trying to mislead people, and that puts democracy at stake.

Anna Gressel: Yeah, we've talked a lot about AI creating potential for advancements in science and all of these things, but it also creates new risk surfaces. And one of those is a new risk surface for fraud or deception. AI makes it easier for bad or ill-intentioned folks to widely disseminate false facts, including by using AI to alter images or audio so that people like politicians, or people close to them, appear to say embarrassing or discrediting things. And what is the difference with AI? I think we always try to index on what really is different about AI compared to any other software, or Photoshop. Where before it took hours to Photoshop a politician to look like they're accepting a bribe, now it's just one AI prompt away from happening.

Katherine Forrest: Right, and the ease of it is one thing, and that's very concerning. But we do know that there are a lot of really highly skilled engineers working hard to get ahead of this problem and to find ways to, if not solve it, then at least soften it. They're using different kinds of AI tools to detect fake accounts and to detect doctored audio and doctored video, otherwise known as deepfakes. But in some ways it's like a game of whack-a-mole: you're trying to stay one step ahead, but sometimes you end up a step behind, because there's a sort of leapfrogging of the technology that allows the disinformation to jump ahead of you. So the tools are sometimes adequate, and sometimes you're just a little bit behind.

Anna Gressel: Yeah, and one thing that researchers haven't really figured out with any rigor quite yet is whether AI-generated disinformation hits people differently than just the same old rumors that bad actors have always used to spread disinformation or propaganda without AI.

Katherine Forrest: I personally think that the more realistic that disinformation is, the more damage it can do. And as AI capabilities allow disinformation to become increasingly photorealistic, audio-realistic, generally realistic, then you've really got, I think, a new vector for disinformation to take hold on our hands. There's a paper from Stanford indicating that people are oriented to believe what they already want to believe, and that they're also getting increasingly comfortable relying on people they can't see face-to-face or don't even know. In my mind, that makes sense, because our world has become so used to different forms of digital media mediating our communications, whether it's text, social media, Discord groups or online groups that play games together. We're used to being distanced from a lot of the people we communicate with.

Anna Gressel: Yeah, and I think one thing that tells us is that efforts to engineer more sophisticated tools to detect sources of disinformation are really, really important today. That's particularly true when disinformation or propaganda campaigns are sponsored by state actors or foreign adversaries, because it means those campaigns can happen at a really wide scale and can be quite sophisticated. And in fact, some of the efforts you might make to trace disinformation are harder when it's really foreign governments conducting those campaigns. So we want to make sure that we can unleash innovation powering all of these good uses of AI, but without letting the bad actors have a field day with this new technology.

Katherine Forrest: And that brings to mind a phrase, dual-use, that's been used by regulators in connection with AI, particularly in connection with the White House Executive Order, which now, under the forthcoming Trump Administration, some people are saying is going to be withdrawn. But putting that aside for the moment, a dual-use foundation model is a foundation model where you've really got two sides to the coin. You've got the good, and then you've got at least the potential for bad. And that's really what we've got here. We've got great AI tools that allow people to make videos and movies of novel content, which can be very exciting and purely positive as entertainment, but that can also be used for the kind of disinformation we're talking about.

Anna Gressel: Yeah, I mean, a really, really simple way to understand that is a hammer. That's an often-used example: a hammer can build houses, but it can also be used to hurt someone. So that's just a really simple way of understanding the dual-use capability of a tool. And models are tools, right? So that's just a place, Katherine, where I think we can bring some of those developers into the conversation.

Katherine Forrest: Right. And companies across the tech ecosystem have been working, as we said a few minutes ago, to combat this problem. In February of 2024, a whole bunch of tech companies signed on to something called the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, and that was at the Munich Security Conference. The accord set out an agreed set of commitments to detect and then to counter harmful AI content related to elections, and that was a really important moment.

Anna Gressel: But I think one important thing to remark on here is that it wasn't just model developers signing on to the accord. It was also social media companies, which have platforms where disinformation could spread. That really underscores the recognition that this is a larger problem, and that a lot of people are coming together to try to figure out how to solve it.

Katherine Forrest: And that makes me really think about watermarking. There was watermarking that was done with digital files in the sound recording area when the internet was in its youthful phase. But watermarking is something that can also be used here in connection with trying to combat disinformation.

Anna Gressel: Yeah, and watermarking is a way of putting an identifier into a digital file. You can think of it as an authorized digital indicator that a particular file with particular characteristics is either legitimate, or that it's been manipulated, or that it's been fabricated altogether.

Katherine Forrest: Right, and a number of developers have signed on to another one of these long names, the Coalition for Content Provenance and Authenticity (C2PA). They're making an effort to build content credentials that will tell you about the origins of the content you see online, and that could include identifying whether a piece of digital content is AI-generated or whether it comes from a trusted human source. Depending on how the watermark works, some watermarks are visual, ones you could spot yourself, and others are only machine readable. So some are going to be human readable and some only machine readable. And these are some of the tools that are being used, and will be used, to try to detect this disinformation.

Anna Gressel: Yeah, and I think it's important to talk a little bit about the different kinds of technologies here. We have things like content provenance indicators, and we have things that are like watermarks or labels. Katherine, do you want to break down the difference between those two things?

Katherine Forrest: Well, they're really two sides of the same coin. So content provenance is like a stamp on an envelope where somebody adds it on there to tell you where the letter came from without ever seeing or changing the letter that's inside the envelope itself. But watermarking is like secretly signing the paper that's been used to write the letter so that whoever reads it can see where it came from and can do that without even seeing the envelope. So one is sort of on the inside if you will and one is sort of on the outside.
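To make that envelope-versus-paper distinction concrete, here is a minimal Python sketch. It is purely illustrative: the provenance record format, the zero-width-character watermark and the function names are assumptions made for this example, and none of it follows the actual C2PA specification or any production watermarking scheme.

```python
import hashlib
import json

# Illustrative sketch contrasting the two ideas above: content provenance is
# metadata that travels with the file ("the stamp on the envelope"), while a
# watermark is embedded in the content itself ("signing the paper").
# Record fields, marker character and function names are all hypothetical.

def attach_provenance(file_bytes: bytes, source: str) -> dict:
    """Provenance: build a record about the file without changing its bytes."""
    return {
        "sha256": hashlib.sha256(file_bytes).hexdigest(),
        "source": source,
        "declared_generator": "human-or-ai-tool",
    }

def embed_watermark(text: str, marker: str = "\u200b") -> str:
    """Watermark (toy version): hide a zero-width character after every tenth
    word. Real watermarks are far more robust and usually machine-readable only."""
    words = text.split()
    return " ".join(
        word + marker if i % 10 == 9 else word
        for i, word in enumerate(words)
    )

def has_watermark(text: str, marker: str = "\u200b") -> bool:
    return marker in text

if __name__ == "__main__":
    record = attach_provenance(b"...image bytes...", source="newsroom camera")
    print(json.dumps(record, indent=2))

    caption = "a perfectly ordinary caption " * 5
    print("watermark found:", has_watermark(embed_watermark(caption)))
```

The point of the sketch is only the location of the signal: the provenance record can be deleted without touching the file, while the watermark rides inside the content until someone deliberately strips it out, which is the whack-a-mole problem discussed next.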

Anna Gressel: Katherine, this makes me feel like you'd be really into it if I gave you invisible ink for the holidays, like an invisible ink pen.

Katherine Forrest: Well, did you ever read any of those detective books when you were a kid, like Encyclopedia Brown? No? You were probably too young for that. But yeah, I had a secret desire to be a detective.

Anna Gressel: I can tell. So let's get back to this whack-a-mole problem of bad actors finding ways to strip watermarks off content or remove that kind of authenticity stamp from the envelope.

Katherine Forrest: Right, and it's been easier than we'd hoped for bad actors to remove this content provenance metadata from truly authentic content. But we've got experts and engineers spending all day trying to think up ways to prevent either content provenance data or watermarks from being stripped out. One promising approach these days is to put what I'm going to call a tell into the text or the image, something that only AI would really do. For instance, in response to a text-based prompt, a tell could be the inclusion of a particular word that only AI would use in that context, or that certain tools recognize as an indicator of AI. The same goes for images, where you might have something within the image that an AI tool can pick up on. That can be used, just like the distinctive style of a human writer or a human photographer, to tell what the provenance is. So if you want to be sure something is genuine, you'd have one kind of tell, and you might also have a tell, if you will, for something that's fake.

Anna Gressel: Yeah, going back to our detective analogy, that's almost like handwriting analysis, right? Like you have little loops consistently over your letters, and that means you're Katherine B. Forrest. So let's go back to the AI context. An example of this would be an AI output that uses the word marmalade way more often than the word jam. That might make no difference to the reader; they might not even notice it. But the fact that the word marmalade is popping up more often than it otherwise would can help a deepfake-spotting tool tell that the output came from a machine and not from a human being, who would probably just use the word jam. But nothing's perfect. Even there, a bad actor could take AI-generated text and swap out words for their closest synonyms, and suddenly that deepfake detection tool might not work so well, or might not detect that it's a deepfake at all.
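As a rough illustration of the marmalade/jam idea, here is a minimal Python sketch of a frequency-based "tell" detector and the synonym-swapping evasion described above. The word pair, threshold and function names are made up for this example; real AI-text detectors rely on much subtler statistical signals than a single vocabulary choice.

```python
from collections import Counter

# Hypothetical word pairs: (word an AI model over-uses, word a human would pick).
TELL_PAIRS = [("marmalade", "jam")]

def looks_ai_generated(text: str, ratio: float = 2.0) -> bool:
    """Flag text if the AI-favored word appears at least `ratio` times as
    often as its human-favored synonym. Purely illustrative."""
    counts = Counter(text.lower().split())
    for ai_word, human_word in TELL_PAIRS:
        if counts[ai_word] >= ratio * max(counts[human_word], 1):
            return True
    return False

def synonym_swap(text: str) -> str:
    """The evasion described above: replace the tell word with its synonym,
    and the detector stops firing."""
    return text.replace("marmalade", "jam")

if __name__ == "__main__":
    sample = "spread the marmalade then add more marmalade to the toast"
    print(looks_ai_generated(sample))                # True
    print(looks_ai_generated(synonym_swap(sample)))  # False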

Katherine Forrest: Well, that's right, and there's going to be a lot of continuing work on all of these tools. So let's talk about whether or not this actually made an impact on this election cycle. There's a place called the Alan Turing Institute, named of course after Alan Turing, who was one of the fathers of the modern computer, and certainly of the Turing test for determining certain states of mind, if you will, of computers, if there is such a thing. The Alan Turing Institute found that, in this election cycle, AI-enabled disinformation had not had a meaningful effect on election results. And that's encouraging.

Anna Gressel: The Turing Institute also found that on a number of occasions, real, human-generated content had been flagged by humans as AI-generated. And that's its own kind of problem, because it can undermine public confidence in online information generally.

Katherine Forrest: Right, when fake content starts cropping up everywhere, you end up possibly with some false positives where people think, well, something that's actually genuine is fake and that actually undermines confidence, you're right.

Anna Gressel: Yeah, and the Turing Institute says we can't afford to stand still on dealing with these issues. So just because AI disinformation doesn't swing elections today doesn't mean that it won't in the future, particularly as models and tools start to evolve even further.

Katherine Forrest: Right, so the best way that we can make sure that we've got our elections protected from disinformation is to continue to work on these tools.

Anna Gressel: Regulators are also thinking deeply about these issues, and that comes in a bunch of different forms, whether it's requirements to watermark text or requirements to make watermark detectors publicly available. There are a lot of different ways that regulators and lawmakers are trying to slice and dice this, and that's a whole other conversation we can have another time, Katherine. But just to say, everyone is leaning into these issues, and we don't expect that to disappear even in a new administration or globally, as these issues continue to evolve.

Katherine Forrest: Great, and I look forward to returning to this topic because I think it's going to get increasingly interesting as time goes on. But that's all we've got time for today. I'm Katherine Forrest, and I am very proud to be here with my partner, Anna Gressel.

Anna Gressel: Thanks, Katherine, and thanks to all of our listeners and supporters. We're so thrilled to have you with us, and we're hoping that you guys are having a great start to your holiday season.


© 2024 Paul, Weiss, Rifkind, Wharton & Garrison LLP
