
Moody v. NetChoice

In our latest episode, host Kannon Shanmugam and his colleague, William Marks, unravel the complexities of Moody v. NetChoice, a pivotal decision that examines the application of the First Amendment to social media platforms.


Kannon Shanmugam: Welcome to “Court Briefs,” a podcast from Paul, Weiss. I'm your host, Kannon Shanmugam, the chair of the firm's Supreme Court and Appellate Litigation Practice and co-chair of our Litigation Department. In this podcast, we analyze Supreme Court decisions of interest to the business community.

The Supreme Court recently completed its 2023 term, and on the last day, it issued its decision in Moody v. NetChoice, which involves the interplay between the First Amendment and social media platforms. Joining me to talk about the Court's decision is my colleague, Will Marks. So, Will, tell us a little bit about the facts of this case.

William Marks: Sure. So, in 2021, the states of Texas and Florida passed laws responding to what they viewed as bias by social media platforms against conservative content. The Texas law provided that platforms with 50 million or more users could not engage in viewpoint censorship, and it also required disclosure of certain content moderation policies and the provision of procedures to appeal content moderation decisions. The Florida law, on the other hand, prohibited platforms from banning political candidates or journalistic enterprises and required individualized explanations for certain content moderation decisions. This case involved a constitutional challenge to both of those laws.

Kannon Shanmugam: So, the same plaintiff brought suit to challenge both of these laws. How did this get to the Supreme Court?

William Marks: So, the plaintiff NetChoice is a group of social media platforms, including the large ones like Meta, X and Google, that sued Texas and Florida, claiming that the laws violated the First Amendment. Now, importantly, for purposes of the Supreme Court's ultimate decision, NetChoice brought a facial challenge. And that means they argued that the laws were not just unconstitutional as applied to particular social media platforms, but were in fact unconstitutional in all or a substantial number of their applications.

The Eleventh Circuit ultimately held in the Florida case that content moderation is speech and that the Florida law violated the platforms’ First Amendment rights. The Fifth Circuit came out the opposite way. The Supreme Court granted review to decide two questions: first, whether content moderation on social media qualifies as First Amendment speech, and second, assuming content moderation is speech, whether the states’ laws unduly burdened that speech.

Kannon Shanmugam: So, the Supreme Court's decision in this case was a little bit complicated. Will, walk us through what the Court actually held, and then we'll talk a little bit about the Court's reasoning.

William Marks: Okay, so the Court primarily held that the courts of appeals had not properly applied the standard for assessing facial constitutional challenges. Having so held, it then proceeded to provide guidance to the courts of appeals on how the facial challenges should proceed on remand.

Kannon Shanmugam: So, let's talk a little bit about what that means in practice. So, what did the Supreme Court tell the lower courts to do here?

William Marks: Well, facial challenges are supposed to proceed by having a court analyze the full sweep of the challenged law, assess which applications of the law violate the First Amendment, and then measure those applications against the law's permissible applications. Here, however, the courts of appeals had focused primarily on Facebook's news feed and YouTube's homepage in assessing the validity of the Texas and Florida laws. There were open questions, however, as to whether those laws had other applications, such as to an email provider, a ride-hailing service or even an online marketplace. The Supreme Court thus vacated the courts of appeals’ decisions and remanded for application of the correct facial challenge standard.

It didn't stop there, however. The Court then proceeded to explain that the Fifth Circuit had erred in its First Amendment analysis, even as applied to certain particular platforms. The Fifth Circuit had held that content choices made by Facebook, YouTube and other large platforms do not constitute speech. It also held that even if they did constitute speech, a state would have a valid interest in regulating the platforms to better balance the marketplace of ideas.

The Supreme Court indicated that the Fifth Circuit was wrong on both fronts. The Court first held that the First Amendment is implicated where a party that compiles and curates others’ speech is directed to accommodate speech it would otherwise prefer to exclude. In other words, the party's editorial choices constitute protected speech. The Court then held that a state's interest in broadening the marketplace of ideas can't justify regulation of those editorial choices. However imperfect the marketplace of ideas might be, the First Amendment treats government tinkering with the balance of ideas as worse.

The Court then applied those principles, concluding that the First Amendment protected Facebook’s and YouTube’s content moderation. That moderation was speech, and even though the platforms did not moderate most content posted by third parties, the moderation was still protected by the First Amendment. The Court then held that Texas’s asserted interest in ensuring a platform for conservative speech couldn't justify the state law. A state can't regulate private speech to advance its own vision of ideological balance.

Kannon Shanmugam: So, given the complexities here, Will, what are the implications of the Court's decision?

William Marks: Well, I think only appellate lawyers and academics are going to be focusing on the facial challenge issue. But the guidance the Court provided on the Texas law is really of profound importance for social media companies and users. The Court has now made clear that for First Amendment purposes, content moderation by major social media platforms is protected speech, and a state can't regulate that content moderation based on its perception that the platforms are disfavoring certain viewpoints. The question going forward will be how this type of analysis applies to other platforms on the internet, such as email, online marketplaces or mobile applications.

Kannon Shanmugam: Great. Well, there's certainly going to be more to come in this important area of the law, but thanks for now, Will, for summarizing the Court's decision. If you have any questions about the decision, please feel free to reach out either to Will or to me. And that brings us to the end of our first season of “Court Briefs.” We hope you enjoyed the podcast. For more information about Paul, Weiss's Supreme Court and Appellate Litigation Practice, please visit us at our website, paulweiss.com. Please subscribe to “Court Briefs” wherever you listen to your podcasts. And if you did enjoy this season, please rate and review us on your favorite platform. We'll be back again next fall with another season of “Court Briefs.” But until then, thank you for joining us and take care.


© 2024 Paul, Weiss, Rifkind, Wharton & Garrison LLP
