
U.S. Developments and the Colorado AI Law

This week on “Waking Up With AI,” Katherine Forrest and Anna Gressel look at recent U.S. regulatory developments in AI, namely a significant piece of legislation coming out of Colorado.

Katherine Forrest: All right. Hello everyone, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel.

Katherine Forrest: And Anna, you know, by the way, I hope that our internet holds out because while I'm still in Maine and Maine is not, you know, it's not like the middle of nowhere. I'm in an area that's got really good internet connection. I've been having a few problems this morning, but we'll fight our way through. But what we wanted to concentrate on today is really some U.S. developments. We've talked a lot about the EU AI Act. And U.S. regulatory developments, we don't want to sort of lose sort of our focus on some of them because there have been some really important ones, including, and today's episode's really going to dig into this, the Colorado SB 24-205, which is some very significant legislation.

Anna Gressel: Yeah, it's super interesting, and it's really an important piece of legislation here in the U.S., and we'll call it the Colorado AI law. This is a first-of-its-kind cross-sectoral AI legislation in the U.S., and it's not really that surprising it came out of Colorado. For folks who play in the AI space, Colorado has been a first mover in the past with AI and insurance legislation and then, building on that, regulation. And Colorado has also been super active in privacy regulation. So, like California and sometimes Illinois, Colorado tends to be at the top of our list of states to watch when it comes to actually passing the AI regulation they propose. And when it comes to the Colorado AI law, the scope is really important. It's focused on high-risk AI systems, and that's defined as AI that makes consequential decisions, like whether someone should be hired or receive a loan.

Katherine Forrest: All right, and the definition of consequential decisions is really critical here, and it's one that our listeners should really hold on to. It means any decision that has a material legal or similarly significant effect across a wide range of domains, including education and employment opportunities, financial and lending services, essential government services, health care services, housing, insurance and legal services.
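To make that definition concrete, here is a minimal sketch of a first-pass screen a compliance team might run, assuming an internal taxonomy of decision domains; the domain list tracks the categories named in the law, but the function and field names are hypothetical, and nothing here substitutes for a legal analysis:

```python
# Hypothetical first-pass screen for Colorado's "consequential decision" domains.
# The domain list mirrors the categories named in the law; everything else
# (names, structure) is illustrative only, not a legal test.

CONSEQUENTIAL_DOMAINS = {
    "education",             # education enrollment or opportunity
    "employment",            # employment or an employment opportunity
    "financial_or_lending",  # financial or lending services
    "government_services",   # essential government services
    "health_care",           # health care services
    "housing",
    "insurance",
    "legal_services",
}

def may_be_consequential(decision_domain: str, has_material_effect: bool) -> bool:
    """Flag decisions in a listed domain that have a material legal or
    similarly significant effect, so counsel can take a closer look."""
    return has_material_effect and decision_domain in CONSEQUENTIAL_DOMAINS

# Example: an automated resume screen that gates hiring would be flagged.
print(may_be_consequential("employment", has_material_effect=True))  # True
```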

Anna Gressel: That's quite a list, Katherine.

Katherine Forrest: I know. And what's interesting to note about the law's focus on high-risk AI systems is how much its scope actually differs from the EU AI Act and other similar legislation around the world that also regulates high-risk AI systems.

Anna Gressel: That is exactly right. And the EU AI Act was, in a sense, really focused on creating a product liability-like regime for AI. So, it's focused on how AI is used across a number of critical domains, as long as the way that AI is used might give rise to some significant risk. But Colorado's approach is actually different. It focuses on consequential decisions. Really, the decisions are the key nexus for the law. And that means that a much narrower category of use cases is covered in practice because the scope is really focused around the kinds of decisions that are reached that are important about consumers. And Colorado doesn't actually ban AI uses in the way the AI Act does. But as we'll discuss, there are some pretty burdensome requirements once you're in that high-risk AI bucket.

Katherine Forrest: And another notable point about the Colorado AI law is that, like the EU AI Act, it really does try to impose different requirements on different actors across the AI value chain. And we've actually spoken a little bit about that in the past. In Colorado, the two key groups of actors are developers of high-risk AI systems in one group and deployers of those systems in another.

Anna Gressel: Yep, that's right. Under the Colorado AI law, both developers and deployers have a duty to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination stemming from the intended uses of the AI system. So, while this duty is key to the law, it's also important to note that developers and deployers are entitled to a presumption that they used reasonable care if they satisfy some key obligations.

Katherine Forrest: Right, and the whole concept of duty of care is really a fascinating one. And we could do a whole episode on just that. It's particularly interesting to see that the concept is really starting to be integrated into AI legislation itself.

Anna Gressel: Katherine, I completely agree. So, let's start by talking about the developer obligations. And those are the obligations relevant to companies doing business in Colorado that either developed the AI system or substantially modified it. And those obligations are very transparency-focused. So, developers will need to give deployers a statement describing information about the reasonably foreseeable uses for the AI system, and how it might be misused, along with a whole range of documentation about the training data used in the development of the model, as well as some additional categories of information.

Katherine Forrest: And then let's turn to deployers, because those are all of the companies, entities and other users doing business in Colorado that actually use or deploy, or you might say implement, the AI system. So deployer obligations are critical. Under the law, deployers need to set up a risk management policy that governs their deployment of AI and conduct impact assessments. There are a few things that are critical, and I'm not going to go through all of them right now, but there are disclosure obligations and there are notification obligations. And if one of these systems makes a consequential decision, then certain other kinds of obligations come into play. So, it's really quite comprehensive.

Anna Gressel: Yeah, and I want to touch on one thing, which is for both developers and deployers, there are also important incident reporting obligations around algorithmic discrimination. And I think in some ways, this is one of the first times we've actually seen anything like this be enacted in the U.S. I want to pause on it briefly.

And, really specifically, developers are going to have to disclose to the Colorado Attorney General, and to other known deployers or developers of the system, within 90 days if the developer discovers that a high-risk AI system it has developed has caused algorithmic discrimination, or if the developer receives a credible report from a deployer that the system caused algorithmic discrimination. That's a consequential and meaningful reporting requirement.

And on the flip side, deployers have similar incident reporting obligations. And they have to actually notify the Colorado Attorney General if a high-risk system that they deployed caused algorithmic discrimination, again, within 90 days after making such a discovery. So that's a pretty quick turnaround and potentially a very broad-reaching and important obligation to keep in mind.
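To make that 90-day window concrete, here's a minimal sketch, assuming a compliance team logs discovery dates internally; the function and constant names are hypothetical, and the 90-day window is the only detail taken from the law as described above:

```python
# Minimal deadline helper for the incident reporting window described above.
# The 90-day figure comes from the Colorado AI law; the names are hypothetical.

from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 90

def ag_notification_deadline(discovery_date: date) -> date:
    """Latest date to notify the Colorado Attorney General after discovering
    that a high-risk AI system caused algorithmic discrimination."""
    return discovery_date + timedelta(days=REPORTING_WINDOW_DAYS)

# Example: a discovery on March 3, 2026 would need to be reported by June 1, 2026.
print(ag_notification_deadline(date(2026, 3, 3)))  # 2026-06-01
```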

Katherine Forrest: We should also mention that there's one other requirement in the act that applies regardless of whether a system is a high-risk one: if the AI system is intended to interact with a person, like, say, a chatbot, then the developer and the deployer are required to make sure they disclose to individuals that they're actually interacting with an AI system. And that is similar to certain requirements in a bill out of Utah earlier this year regulating transparency in certain chatbot interactions.
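As a rough illustration of what that disclosure duty might look like in practice, here's a hypothetical sketch of a chatbot session that leads with an AI-interaction notice; the wording and session logic are invented for illustration, not drawn from the statute:

```python
# Hypothetical chatbot session that surfaces an AI-interaction disclosure
# up front. The disclosure text and structure are illustrative only.

AI_DISCLOSURE = "You are chatting with an automated AI system, not a human."

def start_chat_session(user_id: str) -> list[str]:
    """Open a session with the AI disclosure as the very first message."""
    return [
        AI_DISCLOSURE,
        f"Hello! How can I help you today? (session: {user_id})",
    ]

for message in start_chat_session("user-123"):
    print(message)
```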

Anna Gressel: So, Colorado's requirements don't actually go into effect until February 1, 2026, just over a year from now.

But for U.S. companies thinking about EU AI Act compliance, it can be really important to build in some focus on Colorado as well. And that will probably include processes like inventorying which systems are in scope, determining when and how to test for potential algorithmic discrimination, and implementing mitigations and reporting management systems to help take that risk down or provide meaningful reports when necessary.
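On the testing point, one common screening heuristic, though by no means the only one and not one the Colorado law itself prescribes, is the "four-fifths rule" comparison of selection rates across groups. Here's a minimal sketch with hypothetical names and made-up numbers:

```python
# One common adverse-impact screen: the "four-fifths rule" ratio of selection
# rates across groups. The Colorado AI law does not prescribe this (or any)
# particular test; this sketch and its numbers are purely illustrative.

def selection_rate(selected: int, total: int) -> float:
    return selected / total

def passes_four_fifths(group_rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag potential adverse impact when any group's selection rate falls
    below `threshold` times the highest group's rate."""
    highest = max(group_rates.values())
    return all(rate >= threshold * highest for rate in group_rates.values())

rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}
print(passes_four_fifths(rates))  # False: 0.30 < 0.8 * 0.48 = 0.384
```

A failed screen like this wouldn't itself establish algorithmic discrimination; it's the kind of signal that would trigger the closer review, mitigation and, where appropriate, reporting steps described above.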

Katherine Forrest: But it's important to note that because this law isn't going to be fully implemented until 2026, there's really a potential for further changes. And in fact, when Colorado's governor signed it into law, he issued a signing statement explaining that he approved the legislation with reservations. And he went on to say that if the federal government does not preempt the law before it goes into effect, he encourages the state's legislators to work with stakeholders to significantly improve the legislation.

Anna Gressel: So, the practice pointer for today is this: for anyone focused on the U.S. regulatory terrain around AI, the state legislatures are the place to watch. Colorado is a great signpost here. It was able to pass a very all-encompassing, cross-sectoral AI law, but there was a lot of debate about it, and even reservations from the governor. So, we're seeing similar kinds of legislative battles get teed up in significant states like California, where there are a ton of AI laws currently pending for this legislative session.

And as we'll talk about in future episodes, there have been other states that have been very effective at getting AI-focused bills passed, such as digital replica bills in states like Tennessee and Illinois.

Katherine Forrest: Right, it's really fascinating stuff and we're going to discuss this more in further episodes. We'll be talking about the Colorado AI Act really, I think, quite a lot. But that's all we've got time for today. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel. Make sure to like and share the podcast if you've been enjoying it.
