California’s Drive Towards AI Regulation
Katherine and Anna walk you through key provisions of the AI-related bills that recently made their way out of the California legislature and onto Governor Newsom’s desk.
Katherine Forrest: Well, good morning everyone, and welcome to an episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.
Anna Gressel: And I’m Anna Gressel.
Katherine Forrest: And Anna, I have two things to say. Number one, we're both in New York City, and that seems like a rarity. We're about to go overseas. And then the second thing I want to say is that I want a mug. I want a mug that says, “Waking Up With AI, a Paul, Weiss podcast” on it. So, I can hold it up. Nobody will be able to see it but you, as you're on the other side of the screen, but I want to be able to hold that coffee mug with the logo on it. And maybe some of our listeners will be so lucky as to get “Waking Up With AI” mugs as well.
Anna Gressel: Well, spoiler alert, I've actually been scheming on that front. So, I'll keep you posted, and we'll keep our listeners posted when we actually have those mugs available and we can all drink them together.
Katherine Forrest: That's going to be an exciting moment. All right. So, as we're heading around the bend in the fall, we're seeing a real flurry of regulatory and legislative activity across the globe and the United States. And we thought we'd give folks a little bit of an update on that.
Anna Gressel: Yeah, let's start on the global front. Something a little bit different than what we've talked about. The Council of Europe's Framework Convention on AI just opened for signature. That's actually a big deal: it's the first binding treaty worldwide on AI, and it promotes a risk-based approach to AI regulation globally. So, we'll see what happens on that, and we'll keep you all posted.
Katherine Forrest: And maybe you could just give us like a sentence or two on how this Council of Europe's Framework Convention on AI is different from the EU AI Act because they both, I think, to an American audience, might sound sort of similar.
Anna Gressel: Yeah, I mean, I think the Council of Europe's approach is very focused on setting a tone for AI regulation globally and coming up with an approach that different countries were amenable to and willing to sign on to. And so, we've actually seen this get more traction than I think some folks originally expected. But in terms of the details, the Council of Europe's Framework takes a very human rights-oriented approach, although that's tempered somewhat by that risk-based orientation, and it is going to apply to some public sector uses of AI that I think are notable. But we can do a deeper dive on that, Katherine, if that's of interest to our listeners in another episode.
Katherine Forrest: I think that would be really terrific to sort of do a session on that. But in the United States, let's just talk a little bit about California because there is a lot happening in California right now. And in fact, there are so many bills that have different numbers, that I feel a little bit like we're in a “number soup land.” And so, if people don't really follow all the numbers we're going to throw at them today, I just want to say that really the important takeaway is that there's a lot happening in California and people need to stay attuned to the developments there.
Anna Gressel: Yeah, I mean, we're in this moment of uncertainty, right, with California, because the legislature closed its session at the end of August. Several bills actually passed out of the legislature, but they haven't been signed by the governor yet. So, it's really unclear what's going to happen with all of these bills, whether they're going to be signed into law, but we're going to give you a little update, and then we can do a further update once we actually see what gets signed.
So, these bills include things like 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Act, and we covered that, as some folks will remember, in a prior podcast on Frontier AI. And 1047 has become something of a lightning rod in the AI regulatory debate. It's attracted a lot of attention in the media and sparked a good deal of controversy.
Katherine Forrest: Right, and they call it SB 1047. This is part of the number soup. And it regulates the largest and most expensive and, by extension, the most capable AI models. Those are the frontier models that regulators and the industry are referring to, as we've mentioned. And again, we'll refer our listeners back to our episode on frontier models. But SB 1047 is really important for that.
Anna Gressel: Yep. And although 1047 has attracted some of the most attention, there are other important bills that we'll talk through a little bit today. That includes AB 2013, which seeks to mandate training data transparency for generative AI models; AB 2885, which amends California law to define artificial intelligence; SB 942, the AI Transparency Act, which creates transparency obligations for developers that include requirements to make freely available AI detection tools; and AB 1008, which amends the CCPA to state that personal information can exist in a bunch of formats, including, most notably, AI systems that are capable of outputting personal information.
Katherine Forrest: Okay, so now it really is number soup, right? Because if anybody is able to keep track of all of these numbers and what the particular bills are doing, then they get sort of a gold star. But each of these bills has now made its way to Governor Newsom's desk, and he has until the end of September to sign or veto any of the legislation. And as of today, when we're recording this episode, which is in early September, nothing has yet been decided.
Anna Gressel: So, let's cover a few key provisions of the bills that have been passed but are on their way to Governor Newsom's desk. Katherine, let's start with 1047. I mean, it's quite important. So, I think it's worth going through some of these details and also some of the changes because when we talked about this bill a few months ago, it was in a substantially different format. There are some really important provisions that have been kind of updated as the legislative process has progressed.
Katherine Forrest: Right, and at its core, SB 1047 seeks to really mitigate the possibility and severity of what they call “critical harms” capable of being caused by certain covered models. And that translates to things like the creation of CBRN weapons, that's chemical, biological, radiological and nuclear weapons, causing mass casualties, or at least $500 million in damages to critical infrastructure. So, it proposes to do this by making AI developers liable for the harms of their models, the harms that may emanate from or be causally linked to their models, and requiring that developers submit to the attorney general a certification of compliance with key provisions of the bill, like the ability to enact a full shutdown of the model (that's the so-called “kill switch”), implementation of a safety and security protocol and collaboration with a third-party auditor.
Anna Gressel: And in terms of the changes we've seen to the bill, one of the most important has been to include a new concept of expensiveness. That means that in addition to being trained on a significant amount of computing power, which is greater than 10^26 FLOPs, covered models must also be among the most expensive. And that means surpassing $100 million in cost to train. That's a lot of money. So, the act also covers fine-tuned models, when they are fine-tuned from a covered model, and there are certain compute thresholds for that as well.
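To make those two thresholds concrete, here is a minimal Python sketch of the coverage test as described in this episode. The constant and function names are our own illustrative assumptions, not anything from the bill text, and the bill's separate thresholds for fine-tuned models are not reproduced here:

```python
# Minimal sketch of the SB 1047 "covered model" test as described above.
# Names and structure are hypothetical; the bill's separate thresholds
# for fine-tuned models are not reproduced here.

COMPUTE_THRESHOLD_FLOPS = 10**26   # training compute floor
COST_THRESHOLD_USD = 100_000_000   # training cost floor ($100 million)

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """A base model is covered only if it exceeds BOTH thresholds."""
    return (
        training_flops > COMPUTE_THRESHOLD_FLOPS
        and training_cost_usd > COST_THRESHOLD_USD
    )

# Example: a model trained on 2e26 FLOPs at a $150M cost would be covered.
print(is_covered_model(2e26, 150_000_000))  # True
print(is_covered_model(2e26, 50_000_000))   # False: compute alone isn't enough
```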
Katherine Forrest: Right, and SB 1047 was substantively amended several times, as you previewed, Anna, partly in response to an outcry that the bill would stifle innovation in the United States and in the AI industry generally. In addition to the financial threshold for covered models that we've just discussed, the revisions include a refinement of the definition of the phrase “critical harm,” the addition of a formal third-party auditing process and the elimination of a proposed “Frontier Model Division,” which would have received the proposed positive safety determinations, that is, determinations reasonably excluding the possibility of a covered model having a hazardous capability.
Anna Gressel: Yeah, and as we mentioned before, 1047 has sparked a huge amount of debate. Advocates for the bill argue that it's warranted and a welcome step in mitigating catastrophic risk from AI models, something both technical and non-technical audiences are beginning to express concern about publicly. Critics, on the other hand, argue that the bill's provisions would stymie innovation generally, as well as the open source AI industry specifically. And Nancy Pelosi has called the bill “well-intentioned, but ill-informed.”
Katherine Forrest: And let's go through a few of these other bills that add to our number soup from earlier. There's AB 2013, which would require that developers of generative AI models make available certain documentation regarding the data used to train the model, including a high-level summary of the datasets used. And that summary should include key details, like the sources of the datasets, how the datasets further the intended purpose of the model, whether the datasets include any copyrighted material and whether the datasets include personally identifiable information, or PII. And in some respects, this is really similar to certain transparency obligations that we've seen in the EU AI Act, but it's worth noting that the details of the EU requirements are still being firmed up, and I suspect the same will be true of AB 2013.
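For a sense of what that kind of disclosure could look like in practice, here is a minimal Python sketch of a dataset summary record keyed to the items just listed. The `DatasetSummary` type and its field names are illustrative assumptions, not anything drawn from AB 2013's text:

```python
from dataclasses import dataclass

@dataclass
class DatasetSummary:
    """Hypothetical record covering the AB 2013 disclosure items noted above."""
    sources: list[str]                  # where the dataset came from
    purpose: str                        # how it furthers the model's intended purpose
    contains_copyrighted_material: bool
    contains_pii: bool                  # personally identifiable information

# Example entry for a single (fictional) training dataset.
summary = DatasetSummary(
    sources=["Common Crawl snapshot", "licensed news archive"],
    purpose="General web text for language modeling",
    contains_copyrighted_material=True,
    contains_pii=True,
)
print(summary)
```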
Anna Gressel: Yeah, I'd also like to mention SB 942, the California AI Transparency Act, which would require developers to make an AI detection tool freely available to users. And that tool needs to be able to assess whether an image, video or audio file was created or altered by the developer's AI system, while outputting any system provenance data detected in the content. So that's things like personal provenance data, for example, or content provenance information.
And I think it's worth mentioning that I was speaking to Law360 recently about issues in deepfake evidence. This kind of AI detection tool is one way that companies and policymakers are attempting to deal with the fact that deepfakes may really start flowing into the court system and becoming evidence in their own right. But it's not a silver bullet. Detection is just one mechanism, and it's often a very challenging one to implement successfully.
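As a rough illustration of the kind of interface such a detection tool implies, here is a toy Python sketch. The watermark tag, the `DetectionResult` shape and the `detect` function are all hypothetical assumptions for illustration, not anything specified by SB 942:

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    """Hypothetical output shape for an SB 942-style detection tool."""
    ai_generated: bool            # created or altered by our AI system?
    provenance: dict[str, str]    # any provenance data found in the content

# Hypothetical latent marker that our (fictional) generator embeds in outputs.
WATERMARK_TAG = b"acme-gen-v1"

def detect(file_bytes: bytes) -> DetectionResult:
    """Toy check: look for our embedded marker and report provenance."""
    found = WATERMARK_TAG in file_bytes
    provenance = {"system": "AcmeGen v1"} if found else {}
    return DetectionResult(ai_generated=found, provenance=provenance)

# Example usage with a fake "file".
print(detect(b"...header bytes...acme-gen-v1...payload..."))
```

A real tool would more likely rely on robust watermarking or content-provenance standards than a literal byte match, which is trivially stripped; the sketch only shows the shape of the obligation.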
Katherine Forrest: And finally, to sort of round out our number soup, we've got AB 1008, which amends the CCPA. And in addition to restructuring the definition of publicly available information, it purports to clarify that personal information can exist in multiple formats, including AI systems that can output personal information. So that's a fairly controversial position that's being heavily debated among EU regulators.
Anna Gressel: There are some other bills that are quite interesting, but those are the ones we wanted to flag today. And we'll make sure to follow up if any of these critical pieces of legislation are actually signed into law by the Governor.
Katherine Forrest: Right, and that's all we've got time for today. I think we've inundated people enough with number soup. I'm Katherine Forrest.
Anna Gressel: And I'm Anna Gressel. Make sure to like and subscribe to the podcast if you've been enjoying it.