
Responsible AI: The Need to Avoid Misuse

In this week’s episode, Katherine Forrest and Anna Gressel delve into the pressing issues surrounding responsible AI and the potential for its misuse, including the darker side of AI.


Katherine Forrest: Good morning, everyone, and welcome to today’s episode of “Waking Up With AI,” a Paul, Weiss podcast. I’m Katherine Forrest.

Anna Gressel: And I’m Anna Gressel.

Katherine Forrest: And Anna, actually, while I say “waking up with AI,” we're in different time zones and you've been awake for some time.

Anna Gressel: That is true. I'm actually in Rome this week. It's a great city with a long history and it's super interesting to think about where we are today with AI against that kind of ancient historical backdrop. And in particular, the Vatican City is sitting right here in Rome across the river from me right now. And the Pope has added to the voices calling for attention to AI ethics in partnership with several major tech companies.

Katherine Forrest: So, you know, there are so many calls and papers and committees and task forces for AI ethics and AI safety. Many governmental agencies have made such calls, and all kinds of NGOs have as well, so we really hope the word is getting out. And in that vein, Anna, let's pick up and go back to some of what we've talked about in prior episodes and look at some of the terms that have really framed the discussion around AI. Today, I thought we could talk about responsible AI and its flip side, which is misuse of AI; they're really two sides of the same coin. We've mentioned in the past that all of these papers and task forces and calls for AI ethics and safety use similar words: transparent, responsible, accurate and fair, robust, human-centered, accountable, all in the context of AI. And since we've already dealt a little bit with some of the transparency issues, I thought we could move on to responsibility.

Anna Gressel: I think that's right, and this is a big issue, particularly with generative AI, which is inherently susceptible to so many different kinds of uses. And determining what responsible AI means and also who bears the burden of ensuring responsible use is really a huge topic.

Katherine Forrest: Absolutely right, widespread use capabilities come with their corollary: the possibility of widespread misuse. And misuse of a model can really amount to an irresponsible use of that model. So, let's talk first about what we mean by responsible AI. Responsibility is a concept that we learn as kids, and I think of it really as bearing ownership, or bearing the personal burden, for the uses that have been set in motion, either by the developer or the deployer of a model.

Anna Gressel: So, let's peel that back a little, Katherine. When a model is used, there are many people already in the chain who have been part of setting that model into motion, even if they weren't the particular person who asked the model to undertake a particular task. We talked about this in one of our earlier episodes on the supply chain, or the value chain, for AI.

Katherine Forrest: Right, there's the model developer, and they can architect a model for responsible use. They can do that in a number of ways; in a really simplified way, they can put restrictions around a model and what it can or cannot do. And those restrictions can be part of what falls into the concept of responsible use. So that's the developer side, but there's also a deployer side: how the company or person is using the model. And the deployer, as the user, can actually be either a good actor or a bad actor. We hope that most, or almost all, of the users of AI models are good actors, but that's a concept that's now related to responsible AI: who is the actor who's actually using the model? And if they use it in a bad way, then we fall into this category of misuse. The dictionary defines misuse as taking something and using it in a way that's unsuitable or in a way that it was not intended to be used. And with GenAI models, many of the misuse scenarios that we're now encountering are focused on things like security, privacy and manipulation.

Anna Gressel: Yeah, and I think we can further bucket this concept of misuse into two general categories, which I think is a helpful framework. First is the inadvertent use of an AI model in an incorrect way. That might mean using it for a purpose that goes beyond the way in which it was designed, in a manner that was not really intended by the developer or by the deployer who's creating a specific application. We might call that innocent misuse. And the second is purposeful misuse of the model by a user. Again, this is use in a manner that wasn't intended, but here the user knows they're leveraging the model for an improper or even malicious purpose.

Katherine Forrest: Right, and when a model is used for an improper or malicious purpose, the outcome can actually now carry criminal penalties. That's relatively new news. For example, we've seen recent actual prosecutions of individuals who have used AI models in an unlawful way. One of the most widespread examples of that involves child sexual abuse material, known as CSAM. This past May, just a month and a half ago, a federal grand jury in Wisconsin indicted a software engineer on four counts of producing, distributing and knowingly possessing obscene visual depictions of minors engaged in sexually explicit conduct using GenAI. So, the fact that it was AI-created was not and is not a defense to this particular category of criminal activity.

Anna Gressel: That's right, Katherine. And interestingly, in that case, the defendant, a software developer, allegedly used a text-to-image GenAI model to create and distribute thousands of realistic images of minors based on very specific and explicit text prompts, including by directing the model not to produce images of adults. And the DOJ specifically commented that this was a use he had discussed with other users; he actually told them how he evaded detection by the model controls that were otherwise intended to censor the production of sexually inappropriate content. It's worth noting that the Department of Justice is increasingly focused on the use of AI tools to commit crimes.

Katherine Forrest: That's right. I mean, now we're talking about the flip side of responsible AI: this misuse of AI. In February 2024, Deputy Attorney General Lisa Monaco announced that the DOJ is actually going to be seeking higher sentences for offenders who have used AI to enhance their crimes and to enhance the danger of their crimes. She also announced Justice AI, a six-month initiative to convene experts from a variety of different areas, including civil society, academia, science and industry, to advise the DOJ on criminal misuse of AI and to have discussions with foreign counterparts. So, this is all part of the Biden administration's October 30, 2023 White House Executive Order on AI.

Anna Gressel: We're also seeing attempts at the state level to pass legislation expanding existing criminal and civil laws to cover AI-generated content that might arguably not fall within the scope of current laws. Washington, for example, has passed a bill that expands criminal penalties under current child pornography laws so that they expressly cover digitally fabricated CSAM, and Idaho and Utah are doing the same thing. We've seen the same trend with AI-generated non-consensual deepfake pornography, which is now banned in several states, and we're seeing proposals in the EU and the UK addressing that kind of content.

Katherine Forrest: There's actually a recent paper from Google that does a really nice job of putting together a list of different types of generative AI misuse risks. It's called “Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data.” The paper focuses on intentional misuse scenarios, not the types of inadvertent misuse risks that can also happen. It's not a long paper, and it's very accessible. I really recommend it.

Anna Gressel: It's really interesting too, the authors concluded that the majority of reported cases of misuse don't actually consist of particularly sophisticated uses of GenAI or particularly sophisticated attacks, but rather relatively simple ways of exploiting GenAI systems that require little to no technical expertise.

Katherine Forrest: And we've talked about this before: GenAI being used to create realistic depictions of humans in the form of deepfakes. That can actually impact democracy. We can have misinformation or disinformation that can move markets or disseminate all kinds of problematic data into the public domain.

Anna Gressel: And we discussed these deepfake risks in detail in one of our earlier episodes; we won't go back into all of that here, but people should listen to it. But I think what's really interesting is this second bucket, which the authors describe as realistic falsification, not falsification of humans, but of things like documents or counterfeit goods.

Katherine Forrest: Right, exactly, like faking a birth certificate or government record, which is made easier by certain GenAI models.

Anna Gressel: Exactly, and the idea is that certain models that can generate highly realistic images will actually be able to produce fake records, not only in English but in other languages, and those can be used for fraud schemes or even worse. Finally, the authors' last bucket of misuse risks is engagement with AI-generated content. That includes things like hyper-personalization: tailoring disinformation for a particular person, or using AI to craft very effective phishing emails.

Katherine Forrest: The authors really note that these misuses are actually in play today. This is not a tomorrow situation; it's a today situation. And so, companies really need to be aware of these potential misuses, which brings us to our practical pointer. Anna, do you want to go through the practical pointer that we've got for folks today?

Anna Gressel: Definitely. I mean, I think from our perspective, companies that use GenAI models should really consider focusing on some of these misuse risks in their employee education, so employees don't fall victim to the kinds of AI-based scam attempts that are already being leveraged by bad actors. And companies may also want to consider whether, and under what circumstances, it would be appropriate to monitor their own AI systems for potential misuse by insiders or by threat actors.

Katherine Forrest: Right, and that actually overlaps a bit with another very broad topic relating to cybersecurity risks with AI systems, and we'll save that for another day. But right now, we're signing off for the week. I'm Katherine Forrest.

Anna Gressel: I'm Anna Gressel and we'll see you all again next week.
