Indemnification Issues in the AI Space
In this week’s episode of “Waking Up With AI,” Katherine and Anna discuss the role of indemnification provisions as a clarification tool in the complex AI landscape – and all of the questions that they raise.
Katherine Forrest: Hey, good morning, everyone. I'm Katherine Forrest.
Anna Gressel: And I'm Anna Gressel. We're your hosts for “Waking Up With AI,” a Paul, Weiss podcast.
Katherine Forrest: And Anna, I have another scintillating topic for today's podcast, but it's actually one that I am really interested in.
Anna Gressel: I can't wait to hear about it, Katherine.
Katherine Forrest: It's the scintillating topic of indemnification.
Anna Gressel: Yep, indemnification issues are coming up all the time in our AI world these days.
Katherine Forrest: They really are, and so I thought that since indemnification is being talked about all the time right now in terms of AI, it would be really useful for us to do just a short piece on why.
Anna Gressel: Yeah, let's dive in. I think for our non-legal listeners, let's start with what we're really talking about with indemnification. So, I tend to think of indemnification as the allocation of responsibility between and among various potential actors, usually one person or entity agreeing to be responsible if something happens to you or because of what another has done. Here's an example: I own an AI tool and I license it to you, Katherine. As part of our contract, I might indemnify you if the tool causes a specific kind of harm. And that indemnification means I would pay, agree to pay, damages to you.
Katherine Forrest: Or you could also indemnify me by, for instance, taking over if there was a claim made against you, handling that claim, et cetera, et cetera. Sometimes legal fees are involved, and indemnification can involve a shifting of legal fees. What an indemnification actually covers depends on the specific provision in the contract. The world is your oyster. You can make an indemnification provision have anything in it that you want as a creative lawyer.
But Anna, tell us why indemnification provisions are so important in the AI space today. They've been in contracts forever, but why are we seeing so many questions about indemnification in the AI area?
Anna Gressel: Yeah, I think in order to understand that we have to take a step back and actually look at what's happening in the industry, not just in the legal industry, but in the technology industry. So, I think there are two main drivers here. The first is the fact that we've all seen this huge boom in generative AI technology. And what that is really doing at a practical level is driving the creation of a whole new technology stack. So, at the bottom of that stack, we have foundation models and foundation model providers, which are usually technology companies.
And then on top of that, I mean, we've seen a huge explosion of different kinds of companies that are tuning models, turning them into products, and then selling them into further businesses or actually turning them directly into consumer-facing models. Then we have the companies, kind of think of them as even further downstream, that are licensing in those models. Some of them are using those for internal stakeholders, their employees, or some of those companies are further tuning those models and packaging them into consumer-facing products and services. So, this is what we call, kind of at a high level, the AI value chain, all of these different actors who are involved in AI development and deployment. And the risks and liabilities have to be allocated among all of those actors and across all of those actors.
But the second factor that I think is really important is the fact that this whole AI value chain exists against a quickly evolving AI legal and regulatory backdrop, as many of our lawyer listeners know. And there's a lot of uncertainty there. So, in light of that uncertainty, including in areas like copyright, companies are using indemnification to provide some additional clarity and actually manage risks contractually.
So, Katherine, with all that background said, what are some of the specific issues with indemnification you're seeing right now around AI technologies?
Katherine Forrest: What we're seeing right now are a lot of open questions about how much responsibility one entity in that chain that you described, whether it be the designer or the developer or a licensor or a licensee, how much responsibility that entity, person, company should take or will take, what kinds of risks they want to bear and why.
And some of the issues have to do with things that occurred, for instance, during the training process. I've seen indemnification provisions in which companies allocate who bears the risk that a tool has been trained on material that may be copyrighted, or may have certain other characteristics with some legal risk associated with it, at least in their minds, and whether or not they want to actually shift any responsibility for that to the other party.
And I've also seen indemnification provisions, and I should say requests, because sometimes people won't agree to these things. They're all still actively being negotiated. There's no real market, if you will, about how allocation of responsibility applies on the output side. So, if the model, for instance, generates output that duplicates some of the inputted training material, who bears the risk of that, if there is any legal risk?
Anna Gressel: I mean, there are tons of ways we're also seeing indemnification issues come up. For example, what if a tool generates inaccurate results? Or what if a tool generates results that reflect potential bias or result in an investigation? This is an issue in highly regulated industries where folks may be concerned that investigations or examinations could come up. What about a tool in the robotics space that creates some sort of physical harm to humans? These are all kind of complicated issues around indemnification that we're seeing start to play out today. And Katherine, what are some of the more complicated issues you're seeing?
Katherine Forrest: Well, we're seeing these indemnification issues come up with just large language models and a variety of bespoke models for particular domains, but they're also coming up when you're layering models on top of models. It's sort of the Lego bricks example that I gave way back when during an earlier episode. But when you've got, for instance, a large language model with a fine-tuned model on top, you might have differences in the indemnification requests, the indemnification obligations and where indemnifications start and stop. And so, when you're layering models, you really have to ask yourself, do you have the right indemnification provisions in place?
Anna Gressel: Right, and I mean the ecosystem becomes complicated there too. You know, the more models you have, the more actors you have. You might have different developers, licensors, licensees, and you have to think about all of the indemnifications and how they carry through across that whole set of actors.
Katherine Forrest: Right, the data and the ownership and the privacy issues relating to the data set can also be different.
Anna Gressel: Definitely.
Katherine Forrest: So, each step of the way, there are going to be considerations about the risk level that the company wants to assume. And there's also the risk that a company believes it has obtained indemnification for a model when the protections it's received only relate to one portion of the model, as we've mentioned.
Anna Gressel: Yep.
Katherine Forrest: And so, we're seeing how companies address these issues right now. And they're all open for negotiation.
Anna Gressel: Yeah, I mean, there's no market standard right now. I think that's important to keep in mind. It depends on the risks of the model, the specific industries it's being deployed in, whether they're consumer or business facing models. We'll get into the weeds on that on a future episode. It's really interesting stuff.
Katherine Forrest: Right, we absolutely will. All right, that's all we've got for today. But this is a critical issue for companies, and we'll continue talking about it in future episodes. And with that, I'm Katherine Forrest.
Anna Gressel: And I'm Anna Gressel, and we hope you've enjoyed our most recent episode of “Waking Up With AI,” a Paul, Weiss podcast.