Taking Stock of the State of AI Regulation in the U.S.
This week, Katherine Forrest and Anna Gressel examine recent shifts in AI regulation, including the withdrawal of former President Biden's 2023 executive order on AI and the emergence of state-level regulations. They also discuss what these changes mean for companies in terms of navigating governance and compliance.
Katherine Forrest: Good morning, everyone, and welcome to today's episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.
Anna Gressel: And I am Anna Gressel.
Katherine Forrest: And I am so excited. I was so excited before we started, and I'm still so excited about being together, right? Well, we're not actually physically together because that will like never happen because we're always in different places. But we're actually both stateside and in the same time zone. So this is a unicorn moment for us, Anna.
Anna Gressel: It really is. And I'm actually recording in the office today, which I never do. So, folks don't know this, but we had to spend like 10 minutes setting up this microphone here. But I also came to the office, and I just want to give LawPods a shout out. We love LawPods, which produces our podcasts. And they sent me a cool, pink Yeti tumbler that says, “this is my podcasting cup.” And so, I'm super excited to use this going forward, and they're literally the best. They make us sound great all the time. But also, we have our own mugs, which I think some people have seen from us, some of our clients and friends have seen. But it's a “Waking Up With AI” mug. I'll post a picture to social or something like that. I love it. If you want a mug, just message me, send me an email or send me a LinkedIn message. And we have a limited number, but we'll try to get some to folks.
Katherine Forrest: Extra credit to people who are in foreign countries. We know we've got an audience of some folks who are in Japan. We'd love to send a few mugs your way, so let us know. But anyway, it's an exciting time to be together and be able to show off our swag on video to each other, as opposed to you being in Abu Dhabi and it's night for you and morning for me, or sort of all mixed up. But today, I feel like we have a more serious episode that we wanted to do. And the audience doesn't know this, but we shifted it last night away from a technical topic — there's so much happening technically right now that you could just spend your life on it.
Anna Gressel: Yeah, there is. But it's also a moment, and I think we'll talk a lot about this, where companies are really grappling with what to do as they look towards 2025 and try to figure out what is a reasonable approach towards compliance and governance among kind of a shifting landscape, a shifting legal landscape. So I think we're going to pick that up today.
Katherine Forrest: Yeah, like, what's happening? And how, right? How does any of this impact what folks in legal departments, compliance parts of organizations, chief technology officers have been doing and planning on AI for, you know, a year or a couple of years now — whatever the timeline is? So today is all about, like, what's happening?
Anna Gressel: Yeah. I mean, it's a big moment, the beginning of 2025. It also feels a little bit, Katherine, like the moment we had in 2023 from a tech perspective. We had this moment where there was a sea change on the technology front. Every company had to pick up the question of, what do we do with this? Are our compliance frameworks sufficient? Are our governance frameworks sufficient? We've had machine learning, but it's a different moment now. And I think we're kind of hitting a different version of that, both because the technology continues to evolve — and we talk about that all the time on the podcast — but also because the regulatory landscape truly is in a different place than it was a few months ago. And there's a lot of uncertainty, a lot of confusion about how to navigate that. And that's not just confusion by companies. I think it's confusion by policymakers globally about what this world really looks like and what a reasoned approach to AI regulation should be.
Katherine Forrest: So let's start with just baselining people with, as of December 31st, where were we? We knew that there was change coming, but we didn't know what it was. And we were predicting things like the executive order from the White House of October 2024 was going to be… 2024? 2023. 2023, oh my goodness, was going to be revoked. And it was, but let's go back to where we were, sort of like a snapshot in time. And why don't you start off and talk about the European-U.S. intersections and how all that fit together? And I'll jump in here and there.
Anna Gressel: It's a great question, Katherine. I mean, you and I have been doing this for a long time. And if we think back to an era around 2018, 2019, 2020, in the U.S., we originally had a framework towards AI that was very focused on discrimination issues, on bias issues that could result from AI. And that was, I would say, largely sectoral, but not entirely sectoral. We had banking regulators, insurance regulators, housing regulators, all concerned about the fact that AI could potentially lead to biased outcomes, potentially in areas that are legally protected along the lines of protected class, like race and family status, for example. So, you know, we had certainly an enforcement landscape that had started to arise in the U.S., and then that actually was followed by certain guidance and regulations from the regulators trying to articulate what they were worried about along those lines.
Katherine Forrest: Let me sort of pause there and break it into two, because I think we'll see this federal-state distinction — we know we see it becoming even more prominent right now. But one of the things we saw at the end of 2024 was a robust set of federal agencies actively working on AI proposals and AI rulemaking. We had the NIST proposals. NIST is not a regulatory agency — it's an advisory, standard-setting agency that works underneath the Commerce Department — but we had some stuff out of NIST that was really important. We had the SEC working on what it was going to be doing. We had essentially every agency of the federal government, under the 2023 executive order from the White House, doing a lot of thinking about AI. And then we separately had the states creating a variety of rules and regulations at the state level.
Anna Gressel: Yeah, I think that's exactly right. And, you know, and some of those themes had evolved from what we were originally seeing many years ago. I mean, there was work on kind of AI safety regulation at the state level, certain particular issues around election deepfakes. But a lot of the themes were consistent with what we had actually seen before around consequential decisions in high-risk domains like housing, insurance, financial services and states attempting to regulate those either through disclosure obligations or risk assessment obligations or certain kinds of information transfer, for example, as between developers and deployers. So certainly a lot of state activity. We've talked a little bit about that before. But I think, Katherine, we can also talk a little bit about what we've seen — and many of our listeners will know this — at the European level, which was, you know, at the same time that this kind of discrimination discussion was happening in the U.S., there was also a discussion in Europe about the safety of AI systems, and in particular, whether there were protections that were needed either based on kind of a product safety lens. You know, the idea that products would be put on the market that might be unsafe due to flaws in how AI worked and therefore we needed kind of a full life cycle regulatory approach. Or through a fundamental rights lens, which is also very European, the idea that AI could affect the fundamental rights of individuals in a way that could have a legal detriment to them, and therefore they needed to be protected and have an opportunity for redress, legal redress based on AI decisions.
And so that really formed the core of the EU AI Act, which we've talked about at length, and which takes a risk-based approach to AI regulation with some product safety themes and some fundamental rights and rights-based approaches. But the EU AI Act was in formation for many years before ChatGPT hit the scene, and then it had to fundamentally react to and adapt to the fact that general purpose AI models were being put out there and were very difficult to regulate from a use-based perspective. And so there were provisions added to the EU AI Act that were really about the idea of a foundation model — a model that exists underneath the application layer, powers it, and might itself have its own risks. And so we now see these general purpose AI model provisions of the EU AI Act, and those have become very important for many reasons. We're seeing a general purpose AI model code of practice that's kind of evolving. But that's the EU side of the equation.
Katherine Forrest: You know, at the end of 2024, what we saw was that there had been so much work on trying to conform the EU AI Act, which had been passed, to actually be relevant to changing technologies. And there was a shift in focus to implementation. And I know that we were advising, and are still advising, a large number of companies on how they best prepare themselves for implementation of the EU AI Act provisions. But there was also what I would consider to be acceptance of and cooperation with the United States. And I would say that what we're seeing now is not a convergence between the EU and the United States in terms of rulemaking, but a divergence. And that, I think, comes from a change of administration, focus and emphasis. And I have some specifics, Anna, that I can go into in just a minute.
Anna Gressel: Let me just say one more thing to kind of catch us up to the today moment, which is, I think you're exactly right. The fact that the EU AI Act was out there and really reaching conclusion in its legislative process put a lot of pressure on the U.S. to follow. And then we saw the executive order from the Biden administration. And the executive order covered all the kinds of themes we've talked about. But it also picked up on a theme that we haven't talked about yet today, but that we talk about all the time on the podcast, which is AI safety — the idea that AI systems themselves could pose national security risks or major risks to critical infrastructure or significant systems like our financial system, which actually is critical infrastructure. The idea was that that itself had to be tested and regulated and protected, and the order took a much larger critical systems approach to AI safety. And that involved the creation, for example, of the AI safety institutes. And so there was a whole safety architecture and infrastructure that started to be built in the U.S., and there were also AI safety institutes — colloquially called AISIs — around the globe. I mean, there's a significant AI safety institute in the U.K., which has now just rebranded as an AI security institute. But there started to be a very significant regulatory discussion around AI safety, both at the federal level in the U.S. and then in certain states like California, which tried to pass bespoke AI safety regulation that ultimately ended up being vetoed. So the AI safety piece was, I think, an important piece of this discussion and continues to be an important piece of the discussion going forward. But I think that kind of thematically catches us up to, what, fall of 2024? Katherine, am I right about that?
Katherine Forrest: Yeah. And so we're in the fall of 2024. One year has now passed since the White House executive order, even though I had forgotten whether it was 2023 or 2024. It's like one of those COVID things, you know, who remembers when COVID actually started? You always have to go back. When was it? March of ‘19 or March of ‘20? It was March of ‘20. And anyway, so the White House executive order has been around for a while. We're in the fall of 2024. We've now got the Commerce Department, which has NIST as an agency under it, and NIST had started to really produce some very thoughtful pieces on different types of safety guidance, which was voluntary and which they hoped would be picked up. And then, freeze frame, the seasons turn, a new administration comes in and we're in a different world. And so maybe we talk a little bit about the different world we entered. The world we were in was one of EU-U.S. convergence. The rulemaking was, in some ways, converging. The EU was ahead, but the White House was trying to get the various executive branch agencies and independent regulatory agencies, which we'll talk about in a moment, to come along and to develop reasonable AI regulations for them. We were giving a lot of advice on what companies should do with this emerging landscape. Then the administration changed. And there really is some amount of what I'm going to call fundamental potential change. And I want to say, Anna, that I don't know where this is exactly going. I want to preface my remarks by saying there isn't any huge answer to what's going on. The first thing that happened was, of course, as we knew from the campaign trail, that Trump immediately withdrew the executive order of 2023. So that was, that's gone. Gone, gone, gone. And so there's no mandate anymore. But it did not eliminate, at that time, the existing rules that had already started to be developed within the individual agencies.
Anna Gressel: Yeah, it did put a pause on some pending rulemaking, or rulemaking that hadn't started — a formal pause; we kind of always thought that would be the effect. And it did say, hey, you know, we're going to vest certain folks with authority to take a look at our AI policies and come up with policies that really prioritize national competitiveness, for example, within 180 days, and look at what policies were there from the Biden administration and decide what we want to keep and what to get rid of. So we started — and we talked about this previously — basically a 180-day clock in January to do that. But it didn't necessarily tell us how that look was going to come out. We know the winds are blowing, but we don't necessarily know where they're blowing.
Katherine Forrest: We do know that another statement from the White House — I think unsurprisingly, given a variety of tech-forward initiatives in the White House — is that the Trump administration has said as a priority that they would like to see the United States be first to AGI and to have a leadership role, or the leadership role, in AI development. So we've got that out there. And I think the goal that they've set up is important as a backdrop to what else happens. And so, let me mention the big event of the last couple of days that I think is going to have a big impact. And that is the February 18, 2025 White House executive order that's called “Restoring Democracy and Accountability to Government.” And that executive order, if you just bear with me for one second, actually has a potential real impact on rulemaking coming out of certain agencies. So let me just describe, Anna, if you'll bear with me, which agencies this affects, okay? Because I had to look this up. It says that all draft regulations have to be presented for White House review, and there has to be consultation on priorities and strategic plans. But I kept thinking, well, what are defined as the independent regulatory agencies? So I just want to say for our audience, there are two things — and who knew, until recently, that this is the way the world was actually structured? There are the independent executive agencies, which exist underneath the various, say, cabinet positions. So you've got the Transportation Department, you've got the Commerce Department, you've got the Department of Health and Human Services — all of the big cabinet-level agencies — and then they've got some sub-agencies underneath them. But then we have something called independent regulatory agencies, and that is at the heart of this new White House order, the “Restoring Democracy and Accountability to Government” executive order. And independent regulatory agencies are the SEC, the Federal Reserve — which, by the way, is explicitly exempted from this, so let's just put that to the side — the Commodity Futures Trading Commission and the CFPB, among a few others. What's not in there is the Copyright Office, because the Copyright Office, interestingly — for those of you who've never needed or wanted to figure out which organizations fall under which branches of government, it's useful to know — is actually part of the Library of Congress. It functions as a service unit under the Library of Congress, and it's part of the legislative branch, not the executive branch. So we'll see whether or not the Trump administration tries to influence rulemaking from the Copyright Office, but let's for the moment put that aside and just think about these independent regulatory agencies as the main focus. And by the way, NIST is part of the Commerce Department. It's a standard-setting organization, so it's not technically an independent regulatory agency. And FINRA is under the SEC. So altogether, some of the major agencies are either part of the cabinet-level set of agencies or fall under the independent regulatory agencies. And I think there are big implications for that with this executive order.
Anna Gressel: Definitely. I mean, it is really interesting to think about this because so much of the work on AI in the U.S. has actually been sectoral and has fallen within the mandate and the scope of those agencies. And so thinking ahead for all of the companies that are really highly regulated at that sectoral level, this is enormously important for them.
Katherine Forrest: Right, for instance, the SEC, as folks may recall, had come up with some proposed rulemaking. Gensler and his team had put a lot of time into that, and they'd taken just vast numbers of public comments. And you and I, Anna, actually had a conversation with Chair Gensler on a weekend about different aspects of this. And it was very, very deep and intensive and robust at the SEC. It is absolutely unclear what is now going to be happening at the SEC, and therefore at FINRA, because those are among the agencies whose rulemaking is now going to have to, unless the executive order gets enjoined, go up to the office at the White House that will be reviewing these regulatory agency rulemaking provisions. So SEC, FINRA — question mark, a real question mark on what they're going to be doing with AI. And the CFPB, we already know, has got its own separate issues. But also the Commodity Futures Trading Commission is going to have, I think, a big question mark over it in terms of where it's going to go with AI.
Anna Gressel: Yeah, no, I completely agree with that. And, you know, for folks who have been in this space for a long time, there's an echo here of the Trump executive order on AI from the first time around. If you think back to that executive order, the ethos was that there should be no regulation unless it can be specifically justified for AI. And so, if you're taking that kind of frame or lens on regulation now, it will be interesting to see what is going to be viewed as justifiable by the Trump administration this time around.
Katherine Forrest: And when was that?
Anna Gressel: I think that was in, I think that was 2019, but you're testing my memory.
Katherine Forrest: Right, it was the first administration, and what was interesting was that it was pre-generative AI. But even then, you could see the tech influence and the desire to have sort of a light-touch approach. That became much more of a medium, then a heavier touch, but not quite as far as the EU. But Anna, as a counterweight, or at least an alternative, to what's happening at the federal level, we're seeing a lot of activity right now at the state level. So companies should, on the one hand, think of the federal level as having a lot of question marks over what the rulemaking is actually going to be, okay? So we don't know. We do know the EU AI Act is still in place, so companies need to be considering what they're going to be doing with regard to the EU AI Act — unless the United States does something to make that somehow less of a touch point for U.S. companies, although how they would do that is entirely unclear, because the EU is entitled to regulate goods coming into and going out of its jurisdiction. So we've got the EU, we've got the question mark over the federal government in the United States. How about the states? How about the actual states? You know, what's happening in Colorado, California, Virginia?
Anna Gressel: I mean, I think the short answer is things are moving very, very quickly. It's unclear what is going to land where. But right now I'd be prepared, and I think companies are preparing themselves, for a bit of whiplash at the state level as states try to step in and regulate AI systems, but also AI applications or AI issues that are of particular concern to them. And as many folks know, we had a very active legislative season last year as well. California was highly, highly active on AI regulation, but we also saw really the first landmark AI systems bill pass in Colorado. That was very significant, and we saw kind of similar versions in Connecticut and in Texas. So there are big questions about what themes are going to be picked up on and what is going to be the focus. So I'm going to talk about Virginia in a moment, which I think is significant, but just to give folks a sense of a few of the bills that are percolating right now — we don't know if they're going to pass. California Senate Bill 243 is a bill on chatbots that would require companies that provide chatbots to prevent them from encouraging minors' engagement with chatbots and to provide notifications that chatbots aren't human. But it also has very interesting provisions in there about third-party auditing and reports to the State Department of Health Care Services about, really, suicidal ideation in response to chatbots. And, you know, there are a number of lawsuits that you can tell have kind of prompted this response — there's at least one case in which a child allegedly committed suicide based on their interactions with a chatbot. And so this bill would require operators to detect suicidal ideation by minors and report on that, and then even potentially report on what happened to those users. Did they actually commit suicide, for example, after interacting with the chatbot? So it's a very significant bill, I would say, and it'll be interesting to see where it lands. We have another California bill that would require GenAI providers to provide information to copyright owners on whether the model was trained on that specific copyright owner's materials. That obviously is a very, very significant proposal. And there's another California bill on data centers, which would require data centers to report on the energy resources required, or that would be used, to train AI models that exceed 10^25 FLOPs. So these are big, big models. And so again, this is kind of trying to get to a disclosure regime around the energy and computing resources required to train those models.
Katherine Forrest: And let me just pause you right there before you go on to the rest of the states. That actually is an interesting one because, putting aside what a 10^25 FLOP threshold means in terms of highly capable models (we've really talked about that in prior episodes), the Trump administration has recently made energy deregulation another core principle of what it's pursuing — allowing there to be, in fact for the purposes of AI development, increased access to starting up nuclear power plants, getting access to fossil fuels, etc. So it'll be interesting to see how regulation that is looking at energy usage will be taken, and how it will be impacted by this alternative, which is opening the spigot to energy access by AI companies.
Anna Gressel: And that's not just a U.S. thing. I mean, there is so much investment in data centers, right? There was a huge announcement about French data centers and investment in building those. India has made a big announcement about data center investment. So this is a global issue. There's a kind of global competitiveness around who is going to have the data centers used for AI, how they're going to be powered, who is going to control those. And so that will also replicate, I think, a little bit at the state level, in terms of states that are attracting that investment. And then I would assume that there is going to be some amount of criticism of this bill based on that desire to have data center growth for AI training or for AI inference.
Katherine Forrest: Let's do one more state, like Virginia, because you read about Virginia these days as being on the verge, potentially, of some AI regulation.
Anna Gressel: So Virginia is the closest. I think you're right to focus on Virginia. We do actually now have Virginia House Bill 2094, the “High-Risk AI Developer and Deployer Act,” which has now passed. It's actually going to the governor for signature. If you look at the way that bill is constructed, it's quite similar to Colorado. There are meaningful differences, though, in terms of the actual language of the bill that, I think, are probably net helpful. It's a clearer bill in some respects and really does try to create concrete definitions and solve some of the interesting issues that we've seen with Colorado. But in short, under this bill, developers and deployers of high-risk AI systems would have to use reasonable care to protect consumers from algorithmic discrimination. And they'd actually have a rebuttable presumption of reasonable care, including if they comply with NIST's AI Risk Management Framework or the ISO 42001 standard or other international standards of that kind. So it's meant to have a unifying effect on the frameworks that companies can use to make sure that they are compliant and that they are putting risk management systems into place. It also has certain provisions about consequential decisions — decisions that would materially affect the grant or denial of education, employment, financial services, health care, housing, insurance, marital status, legal services or post-incarceration release or court supervision services. So it's focused on high-risk AI systems defined around consequential decisions in very particular domains. And then it would require certain things like impact assessments and notices and potential reviews for adverse action. So it actually, in some ways, follows the pattern we've seen before. It's a little bit tighter in its writing, a little bit more constrained, but worth looking at, particularly because, if the governor signs it, we're going to see our second real landmark state law on high-risk systems. And that follows the pattern from privacy law, right? We started seeing a few states pass privacy laws, and then many, many more states patterned their laws on the original big states that had anchored into policy positions.
Katherine Forrest: So let's just tie this all up. We've gone a little bit over today, but we thought we had so much happening that we really wanted to give people a baseline on this. And I'm sure we're going to be returning to this over the episodes in the coming year. And by the way, we've passed 50 episodes. So I think this is like the 51st or the 52nd. Anyway, congratulations to us.
Anna Gressel: That means our year anniversary is right around the bend, Katherine. We'll have to celebrate that.
Katherine Forrest: Right, right, right. So, I think it's going to be interesting to see if the federal government tries to engage on the preemption issue, which would require a law to be passed. It couldn't be done by executive order; it would need to be done by the legislature. So it would be a different sort of bird from what's been happening so far. And that law could then potentially preempt the state laws in favor of whatever the federal position is. That is not yet on the table anywhere, so it's a total TBD, and we'll let people know if we ever hear about it. But right now, the state laws you've talked about, Anna — that really is where the bulk of the regulation is sitting in the United States, and they really need to be paid attention to.
Anna Gressel: I think that's right. And, you know, we can talk about this a little bit maybe in other episodes, but I think there are big and important questions for companies about how to actually set up governance and compliance programs to deal with the fact that this landscape is changing so quickly. There's almost no way to get ahead of all of it. But the question is, what does a smart governance model look like? What does a smart compliance model look like in this day and age? On the compliance side, I think there's a lot of work that can be done to focus on what the core compliance areas are for a company and index on those. But perhaps the more important discussion right now is actually on governance. And the question is, how do you create a governance model that is sufficiently agile and quick-moving, and able to adjust not only for changes in the regulatory landscape that are going to shift the levers on what's high risk and what isn't high risk for a company, but also for changes on the tech side? And the tech side is critical because, remember, governance is not a compliance exercise. Governance is not just about checking the boxes and making sure that compliance has been thought about. Governance is also about managing and thinking about the upside of technologies and the risk-reward balance. And so it's not just a question of have we thought about the right risks, but how do we even measure and think about risk and ROI in an AI-based world where AI is moving so quickly? And you've got to have the right people in the room for that. You've got to foster the right discussions for that. It's not just a compliance discussion. It's also a change management discussion. So I would say, for folks who are kind of in the weeds on governance, it's worth taking a moment to pause and think, can our governance move quickly enough to keep up? And if not, what are some of the things that are causing a little bit of drag in our process, and can we alleviate those, just to make sure that we're always able to adjust for the pace of change? So we can talk about that, but I think it's a critical moment to think about it.
Katherine Forrest: Absolutely. You know, it's not the case that, because the executive order from 2023 has been withdrawn, we are now in a wild west world of AI regulation. Actually, there's still quite a lot around; it's just much more dispersed. So, okay, folks, that was a longer episode just to sort of baseline us all on all of this activity that's been happening in the last few weeks and how it affects the AI regulatory landscape. We'll be coming back to it over time as the dust settles a little bit or more dust gets kicked up, and we'll be taking a look at all of that. I'm Katherine Forrest.
Anna Gressel: And I'm Anna Gressel. Just make sure to like and subscribe to the podcast if you're enjoying it.