
The EU AI Act Becomes Law

In this episode of "Waking Up With AI," Katherine Forrest and Anna Gressel discuss the complexities and implications of the EU AI Act coming into effect, outlining its broad territorial scope, risk categories, governance expectations, and phased compliance timeline.


Katherine Forrest: Good morning, everyone and welcome to today's episode of “Waking Up With AI,” a Paul, Weiss podcast. I’m Katherine Forrest.

Anna Gressel: And I’m Anna Gressel.

Katherine Forrest: And hey, Anna, we've got a really full program today because tomorrow is August 2nd, and that's the day that's been a long time in coming. It's the first day that the EU AI Act becomes effective. And at times it's been a little bit of a rocky road getting here.

Anna Gressel: I know, it really has been. It's a big day and we're going to be celebrating it. I'm going to be celebrating it as “EU AI Act Day,” which is now what I'm calling it. So, we're doing this special episode dedicated to the act so we can celebrate in advance of tomorrow. You and I have been talking about this act for ages, since 2019, really before even the white paper about the act was put out.

Katherine Forrest: Right. I mean, I think that you and I both spoke in Brussels about the act at a conference that was held at the EU Parliament. I remember I was speaking about the predicted impact of the act on the United States. And what were you speaking about?

Anna Gressel: So, I was speaking about the legal implications of generative AI and it was surprisingly timely as ChatGPT had just been released the prior week. We all had to adjust our comments on the fly. But after all this time, the act isn't even going to be in full effect tomorrow. Let's actually talk about what is going to happen as it comes into effect in various stages.

Katherine Forrest: All right, and those stages are actually going to span a period of about three years.

Anna Gressel: So, let's also start with a few key takeaways from the act as sort of an overview for our listeners.

Katherine Forrest: All right, first, it has a really broad territorial scope that can be somewhat complicated. So, we're not going to sort of cover everything today, except just to say: be aware, be on alert for its territorial scope, covering models and systems that were developed and trained outside of the EU, if certain things occur within the EU. So, you've really got to watch and ask yourself some jurisdictional questions.

And second, every business is going to need to analyze the AI that they've been developing and deploying to determine whether it comes within the scope of certain risk categories, such as prohibited practices that we're going to talk about more in a moment, or the high risk category, or even some of the low risk or minimum risk categories. You really want to understand where do your tools fall.

Anna Gressel: And third, it not only covers AI systems, but also has special provisions for something that the AI Act calls “general-purpose AI models.” And you can think about this as akin to foundation models within that tech stack we've talked about before. And as part of those provisions, companies will need to evaluate whether the AI they may be developing or using might fall within a special definition of systemic risk that's provided under the act. But note that definition may continue to evolve as regulators and legislators look at systemic risk issues. A fourth issue to be on the lookout for is governance. So, regulators in the EU are now going to be expecting that companies are putting into place appropriate governance and risk management procedures.

Katherine Forrest: Next, as a fifth takeaway, and there are a lot of takeaways here, Anna, that we're sort of packing into this episode. There are transparency obligations for a really broad swath of AI systems, even if they're not prohibited or high risk. So, companies need to be aware of what transparency requirements might fall upon them and be prepared to explain, if they're asked by regulators, how they've met those obligations.

Anna Gressel: Yep, and in addition to all those things that companies may need to be looking for, they can also expect that the EU's new AI Office, which sits within the Commission, is now going to start issuing guidance in the form of interpretation or points on implementation of the act. And we're going to see very active work from standards bodies as well, like CEN-CENELEC, to provide additional guidance on issues like AI risk management and documentation, among many, many others.

Katherine Forrest: So, while tomorrow is the effective date of the EU AI Act, the timeline for when companies actually have to comply with various obligations is spread out over time. So, on November 2, 2024, which is 90 days after the act comes into effect on August 2, that is not an obligation for companies to do anything. It's a requirement for the member states to do something. They've got to get their ducks in a row for enforcement. And by that date, November 2, the member states are required to identify their internal national authorities that are going to be responsible for supervising and enforcing the EU AI Act. And for some member states, what they're going to do, no doubt, is designate existing entities. Others may create an entity within an entity or a new entity altogether.

On February 2, 2025, a ban on what the EU AI Act has identified as prohibited practices takes effect. And when we talk about prohibited practices, we're talking about different use cases that fall into certain categories where the EU has decided that these categories are prohibited. So, for instance, if there are practices using an AI system which score individuals using biometric data, that would then be prohibited. Also, the prohibited practices include scraping facial images from the internet, or inferring emotions in the workplace or educational institutions from facial recognition, videos and other things like that. AI systems also can't exploit vulnerabilities based on a person's age or some other protected characteristic.

And lastly, one that I'm particularly interested in is that it's prohibited to make a risk assessment about the likelihood of a person to commit a crime. And I pause on that one because I wrote this book, Anna, as you know, called When Machines Can Be Judge, Jury, and Executioner, about the use of some narrow AI tools that, in the United States, certain state judicial systems are using to do that kind of scoring. So, while the EU AI Act doesn't govern the United States, I think it could have a flow-on effect if you've got something that is allowed here but prohibited in the EU, and there will certainly be a lot of discussion about the use of those tools in the United States.

Anna Gressel: Definitely. And next up in the act is the list of high-risk use cases. And this category is really important for companies. It includes certain biometric uses, the use of AI in critical infrastructure and the use of AI for determining educational and employment opportunities. So, it's a very, very broad set of risk categories, and there are some that we haven't even mentioned here. When a high-risk use case is at issue, the developer or deployer has obligations that include things like conducting a fundamental rights impact assessment, putting in place certain governance and risk management practices, as well as engaging in post-market monitoring efforts. And I just want to pause on that last piece. It's a really interesting one. Basically, the EU wants AI to be treated almost like a product. So, if it's discovered that the AI system is causing harm after its release, it can be pulled from the market.

Katherine Forrest: And then the last category of these risk categories is one that's been designated as minimal risk, which is basically everything else. Now, I will mention, but we're running out of time, that there are some exemptions in the EU AI Act, and those are worth mentioning for things like some scientific research and development of AI systems and also for AI systems that are for personal and non-professional use and certain open-source models.

So I think that what we will do is leave a takeaway for our audience that we're going to be coming back to the EU AI Act as we see its implementation occur over time, as we get more clarity on what kinds of rules the member states are putting into place, along with following the guidance that the new EU AI Office has also suggested. And we'll talk about penalties, which can be really extensive for violations, and we'll see how that actually plays out.

But that's all we've got time for today. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel. Make sure to like and share the podcast if you've been enjoying it.


© 2024 Paul, Weiss, Rifkind, Wharton & Garrison LLP
