Biometric Data and AI
In this week’s episode of “Waking Up With AI,” Katherine and Anna explore the intersection of AI with the complex world of biometric data, breaking down the tangled web of regulations impacting its usage around the world.
Katherine Forrest: All right. Good morning, everybody, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.
Anna Gressel: And I'm Anna Gressel, and today we're going to dig into the subject of biometric data and AI.
Katherine Forrest: Okay, and so for this, Anna, for this biometric data and AI podcast, I need a big cup of coffee. The audience can't see it, but you and I do these things on Zoom with each other. And so, I've actually got a really huge, extra huge cup of coffee, and I'm ready to dig in.
And the first thing that we want to do, and we seem to do this a lot with our episodes, is to baseline people with some definitions. And I think that's really important in the biometric data area because a lot of people, when I'm talking to them, think that biometric data is really limited to things like fingerprints or iris scans.
Anna Gressel: It's actually a really broad category of data. Biometric data could be, among other things, a fingerprint, a capture of your face or voice – we call those face prints or voice prints – a retina or an iris scan that you mentioned before, Katherine. Hand geometry, ear characteristics, all different ways of authenticating an individual's identity based on some characteristic of their biology. But it can also include behavioral characteristics.
Katherine Forrest: You know, I think that the ear characteristics one—literally E-A-R, for those of you who are wondering if you're hearing that correctly—is really very interesting. But also, these behavioral characteristics are interesting as biometric data. It can include how a person walks, his or her gait, the way they move their head or body. So, Anna, let's talk about how biometric data and AI intersect.
Anna Gressel: Yeah, so AI can be used to analyze biometric data and make predictions or classifications relating to a person. And a real concern for regulators has been about the retention and the use of that biometric data for purposes other than something like a personal security identifier.
So, what do I really mean by that? I use, for example, my face sometimes to unlock my phone. And that is a pretty accepted use of AI facial scanning technology to unlock a device. Or you can use your eyes or facial characteristics for different things at the airport. There's the CLEAR service and Global Entry and now even TSA PreCheck.
Katherine Forrest: And Anna, there are also a number of uses that people encounter in their everyday lives in terms of things like the use in a retail store where in some pretty high-tech stores, you can actually make a purchase now with a handprint. And concerns have been arising from regulators about whether or not people have knowingly consented to the use of their biometric data and to the retention of their biometric data. So, it's both the consent for utilization and the consent for retention that's really gathering attention.
Anna Gressel: Yeah, and another place that we've seen these concerns arise is really specifically in the law enforcement context. Katherine, do you want to say something about that?
Katherine Forrest: Well, you know, it's a real topic of debate, Anna, because there are people on both sides of this question. Some argue that the use of facial and behavioral recognition technology can lead to a kind of surveillance state. On the other hand, law enforcement has been able to utilize it in very positive ways to actually solve crimes or prevent harm. So, it's a genuinely contested question.
Anna Gressel: Yeah, I mean, when you talk about that debate, I can't think of a place where it's come to the fore more clearly than with the EU AI Act. I mean, that was one of the major sticking points at the end of the regulatory process, kind of reaching a political agreement on this. And that was a huge point of contention between different groups thinking about how that act should be crafted.
And what do we end up with as a result? Well, we have the EU AI Act, which regulates biometric information in a large number of ways with a number of exceptions because of those kinds of public interest concerns. So, I mean, let's just talk about that for one second. Biometric categorization systems—those are systems that infer sensitive attributes like race from gait analysis or, you know, faces. Those are actually prohibited by the Act in almost all instances. And certain surveillance uses are prohibited and sometimes there are exceptions that actually allow for those uses for things like preventing terrorist attacks or other threats to life or the personal safety of natural persons. So, you can see they're trying to thread the needle between these two debates and come up with something that was politically workable.
Katherine Forrest: Yeah, and we talk about the EU AI Act a lot on this podcast, but that Act is far from the only regulation of biometric data and the use of AI in making predictions based on biometric data. For instance, there's also the GDPR, which has been around for several years now. It predates the EU AI Act and considers biometric data to be in a sensitive category. And it actually prohibits the processing of such data without an adequate legal basis. And it also, just as I was saying earlier, requires certain kinds of explicit consent from the biometric subject, the person whose biometric data is being used.
Anna Gressel: Yeah, I mean, I think that's important to keep in mind. And we see some parallels to that in U.S. state privacy laws, which also consider biometric data generally to be in a category of sensitive data. But the U.S. also has a number of very specific biometric privacy laws. So, for example, the Illinois Biometric Information Privacy Act. There are also laws like that in Texas and in Washington that are biometric specific, but there are also, as I mentioned, broader state privacy laws that cover biometric information in interesting and sometimes unique ways. And the CPRA is worth noting, the California Privacy Rights Act, because it gives consumers the right to opt out of the sale or sharing of their personal data, like biometric data.
Anyhow, the whole tangled web of laws that apply really leads us to a bunch of different questions that I think are worth asking in any context where we're dealing with biometric information. So first, for a given analytics tool, let's just say any tool that's using biometrics, is there actually a voice print or a face print or some sort of biometric identifier created? If that's the case, then generally something like Illinois' BIPA or state privacy laws or the EU AI Act might apply, and you might also consider Section 5 of the FTC Act in terms of what disclosures and consents are being given.
Second, if the tool is characterizing people based on biometrics or other sensitive characteristics, then we've got kind of all the privacy laws coming into play around that.
Third, if a tool is used to reach automated decisions about individuals, then a whole host of laws and provisions about automated decision making are going to come into play.
And finally, if the tool is engaged in emotion recognition or workplace monitoring, the EU AI Act actually has some really interesting things to say about that. And that's a whole other conversation we could have about those points, Katherine.
Katherine Forrest: All right, folks, that's the high-level overview for biometric data and AI. It's all the time that we've got for today. I'm Katherine Forrest.
Anna Gressel: I'm Anna Gressel and we'll see you all again next week.