Revisiting California’s AI Regulatory Efforts
In this episode of Waking Up With AI, Katherine and Anna update you on the latest status of two AI bills from the California legislature—SB 1047, which was sent back without signature, and AB 2839, which is the subject of a preliminary injunction in federal court.
Katherine Forrest: Good morning, everyone, and welcome to today's episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.
Anna Gressel: And I’m Anna Gressel.
Katherine Forrest: And Anna, you know, our audience probably doesn't realize that we actually record this over Zoom, so we look at each other through Zoom screens. I see you in different parts of the world, but it's always mediated through a Zoom screen. And right now I can tell that you're in your New York City Zoom screen, so you're back from Abu Dhabi.
Anna Gressel: I am, I am, it's beautiful here right now. We're in full, gorgeous New York City fall weather. So it's lovely to be around, and we're actually recording this heading into a gorgeous weekend.
Katherine Forrest: Alright, and how's that jet lag?
Anna Gressel: The trip was great. The jet lag is not so great. It's not terrible, but I'm kind of living in a blend of day and night. I'm not sure I know what time it is really right now.
Katherine Forrest: Yeah, I get that. I get that. So we'll actually do some domestic topics today to sort of try to decrease that jet lag a little more. We're going to talk about California and revisit some of what's been happening in California, some really interesting new developments.
Anna Gressel: There's actually so much happening in California right now. It's incredible how much has actually been passed in the recent legislative session relating to AI. And I don't think, Katherine, we’ll cover the whole set of bills that did actually pass. Maybe we'll do that in another episode. But for right now, I think we're going to focus on some of the more controversial aspects coming out of this past session. And we'll talk again about, you know, one of the really significant AI bills that was on the table.
Katherine Forrest: Right, and that's a bill that was called SB 1047. And you should take from just the fact that I remember the number of the bill that it means that it's been talked about a lot.
Anna Gressel: I mean, yeah, it's hard to keep track of all these numbers, but 1047. But we'll talk about that and Governor Newsom's veto of that bill and his creation of a new commission to study AI issues. And you know, on top of that, we actually wanted to talk about a recent federal decision from California that blocked the deepfake election law that California just passed from going into effect, which is a very important decision as well.
Katherine Forrest: All right, so these are two big developments, and so we've got a big agenda for our audience.
Anna Gressel: You bet. Let's talk about 1047 to start. And do you want to give, Katherine, a reminder to our audience about what that was all about?
Katherine Forrest: Yeah, I can do that pretty quickly. This was a really comprehensive bill from California that centered on creating guardrails and various obligations for developers of highly capable AI models. For instance, it required that before a developer could even begin to train a very large language model, the model had to be architected with a capability for a full shutdown, which is akin to an off switch, if you will.
The bill would also have required the implementation of a written safety and security protocol. And it would have prohibited a developer from using a covered model for any use other than training or evaluation if there was an unreasonable risk (we'll put that phrase to the side for a moment) that the model could cause or materially enable what the bill defined as a critical harm.
And then, starting in 2026, the bill would have required a model developer to retain an outside auditor to test for compliance with the bill's requirements. There were a number of other provisions as well, but those are some of the highlights. The real big takeaway is that SB 1047 was a comprehensive AI bill really focused on safety.
Anna Gressel: And so, Katherine, do you want to talk a little bit about what was wrong with it or what made it controversial?
Katherine Forrest: As I've said, it was a really long bill that had a lot of parts. One of those parts gave the Attorney General access to information about the model, it had whistleblower protections, and it created a government body called the Board of Frontier Models. A lot of companies and individuals took issue with a number of the provisions. Some were adamant that certain protections were needed right now. But there were also those who thought that the bill didn't focus on actual risk scenarios and use cases, so that if you were using a powerful model in a secure environment for a bona fide purpose, you would have had all kinds of obligations imposed upon you. And that would be without any risk analysis as to your particular use case. So let's just take the example of a bona fide financial services application, or a pharma model used for drug discovery: all of these obligations in SB 1047 would have been triggered in those use cases. So it was a bill that was agnostic as to use case and instead focused on the capabilities of the models themselves.
Anna Gressel: Yeah, and I think one other part that was particularly controversial was the potential impact on the open-source ecosystem. The bill would have required a lot of risk management to be put in place, and there were major questions about how that would work for openly released models, which are put into the hands of the people who download them and can then be used for many purposes. So that was a big feature as well. So let's talk a little bit about what happened when this bill arrived on Governor Newsom's desk.
Katherine Forrest: All right, well, as we know, Governor Newsom is the governor of the state of California, and California has more tech companies and more of the big tech platform companies than any other state.
Anna Gressel: Yeah, and he sent the bill back without his signature.
Katherine Forrest: And sending a bill back in California without signature is essentially the same as a veto.
Anna Gressel: Mmhmm. It's like a big red stamp that says “veto,” but, you know, just without a signature on it.
Katherine Forrest: Right, it's just, it’s not like it is on TV where you have like a rubber stamp with the word veto on it. But when Governor Newsom sent this bill back without a signature, at the same time that he essentially vetoed it, his office published a statement from him in which he stated that the bill could have chilled innovation.
Anna Gressel: Yeah, and I think he mentioned that California is home, actually, to 32 of the world's leading 50 AI companies. That's quite interesting as a number.
Katherine Forrest: Right, and he did say in that statement that we can't wait for a major AI catastrophe to occur before taking action to protect the public, that there have to be proactive guardrails. But in his view, this bill was not the right one.
Anna Gressel: And I think it's worth mentioning that the same day of the veto or the failure to sign the bill, the governor announced the appointment of several really big names in AI to work on a project to create AI guardrails. And one of them is Fei-Fei Li, who is considered one of the leaders in the AI field, one of the creators of image recognition models, actually.
Katherine Forrest: Right, so it sounds like this is creating a committee that then has a lot of work to do, and it's going to take a while.
Anna Gressel: Yeah, and it's unclear exactly what this group is going to do. I mean, we know that SB 1047 is gone, so long as Governor Newsom is in office. And now there's a brand new process that will be starting to try to create some guardrails for higher risk AI situations.
Katherine Forrest: And my guess is that we'll see a lot more legislative activity from California, but nothing as comprehensive as SB 1047 in the near future.
Anna Gressel: Yeah, I think we'll probably see more targeted bills make their way to the governor that might have a slightly higher chance of being signed.
Katherine Forrest: And so let’s, Anna, let's move on to that deepfake issue that you'd mentioned and what happened in California in that area.
Anna Gressel: Yeah, this is an entirely different procedural situation. So here, a law actually was passed in California, and signed, that allowed any person to bring a claim for money damages against someone who released an election deepfake.
Katherine Forrest: And a challenge to that law was brought by an individual whose name on his, I guess, legal documentation is Christopher Kohls, K-O-H-L-S.
Anna Gressel: And he goes by the name Mr. Regan or Mr. Reagan. Actually, I'll admit, I don't know which one.
Katherine Forrest: Right, and he said that he was the creator of digital content about political figures and that his videos are meant to contain demonstrably false information about these political figures or about positions that they're taking. He thinks of it as satire or parody, and that he does this using AI to create videos and audio. And apparently, at least in part in response to some of the deepfakes that Kohls had created about Vice President Kamala Harris, momentum quickly got behind a bill to allow a political candidate, a governmental official or anyone who even saw the deepfake, which is incredibly broad, to bring suit.
Anna Gressel: And the day the bill was signed into law (just to benchmark us, that was September 17th), Kohls brought suit and moved for a preliminary injunction, an order that would essentially prevent the law from going into effect, at least temporarily.
Katherine Forrest: And his claims were based on what he said were violations of the First and the Fourteenth Amendments, that as to the First Amendment, it violated his right to free speech. As to the Fourteenth, that it was vague.
Anna Gressel: Yeah, and the defendants said the state was entitled to restrict election deepfakes because they could cause tangible harm. And we should mention that for First Amendment challenges like this, a law has to meet a standard called strict scrutiny.
Katherine Forrest: Right, and the judge agreed that the law didn't meet the standard. The standard requires, among other things, that if there's going to be a restraint on speech, it has to use the least restrictive means for advancing the state's interest.
Anna Gressel: So, Katherine, what was the less restrictive alternative?
Katherine Forrest: Well, the plaintiff argued that someone who recognized a deepfake as a deepfake, in other words, someone who actually cared about trying to figure out whether a particular video or audio was or was not fake, could engage in counter-speech. Essentially, they could put out their own form of speech and contradict what the deepfake said.
Anna Gressel: Yeah, and the law's proponents said it was intended to cover lies and falsehoods and defamatory speech, but the court noted that the word defamatory or any variation on that really had not been used in the law.
Katherine Forrest: The judge also took issue with the breadth of the potential liability, because content that could violate the statute might depict an election official, a voting machine, a ballot or a voting site in some sort of incorrect manner, something that was false. And so there's a lot of subjectivity in all of that.
Anna Gressel: Yeah, and I think if we think about the intention of the law, really, it was to prevent deepfakes that could be used to undermine people's confidence in the electoral process, because people might not know that what they were seeing was actually false.
Katherine Forrest: Right, the judge was concerned that almost any digitally altered content related to an election could fall under the statute. And that, for this judge, was just too broad.
Anna Gressel: Yeah, and I think we can't really separate where we are here from the context, right? We're in election season, and the video that Kohls conceded he made about Harris had been reposted by someone and had gotten like a hundred million views.
Katherine Forrest: That's a lot of views. A hundred million views is a lot of views. And Anna, have we gotten that many views?
Anna Gressel: I think, Katherine, we're a little short of that.
Katherine Forrest: We're a little short of a hundred million? Okay, well, hope springs eternal.
Anna Gressel: But seriously, deepfakes around the election that could be used to influence the election itself are no joke. They're, you know, a really serious concern for all kinds of agencies and legislators and policymakers actually globally.
Katherine Forrest: Right, I totally agree. You know, deepfakes that depict people saying things they never said could create real problems. A deepfake that shows an election site with a sign in front of it saying it's closed could create a real problem. Or a fake video showing a ballot box being overturned and ballots dumped on the floor. You can imagine what kind of trouble that could create.
Anna Gressel: So Katherine, what's the next step in the case? This is not the end of the road for it.
Katherine Forrest: This is the kind of order that's appealable. So we're going to have to wait and see what next steps the parties take. Any appeal would go to the Ninth Circuit, and we'll see whether that occurs and what the timing would be.
Anna Gressel: Yeah, I mean, they could then affirm or reverse the decision.
Katherine Forrest: Right, that's right. It could go either way. And if it's not this law whose challenge is appealed up, there will be another one. So we're going to be getting some guidance on the First Amendment implications of deepfakes generally, but also on these kinds of electoral deepfake laws that really go to the heart of the democratic process.
So, Anna, we've covered a lot today. And so, I think that's a wrap. I'm Katherine Forrest.
Anna Gressel: And I'm Anna Gressel. Thanks for joining us.