When the US Supreme Court agreed to hear Gonzalez v. Google, its first case involving Section 230 of the Communications Decency Act, the tech-policy world was laser-focused on its implications. The week before oral arguments, in February last year, the Brookings Institution held a panel touting the case’s “power to reshape the internet.” The New York Times wrote that the case “could have potentially seismic ramifications for the social media platforms that have become conduits of communication, commerce and culture for billions of people.” Google’s general counsel wrote that the “decision could radically alter the way that Americans use the internet.”
Those predictions proved hollow a few months later, when the court released its opinion and punted entirely on interpreting Section 230, the 1996 law that shields platforms from liability for user content. In a 2019 book, I called this statute “the twenty-six words that created the internet,” because it gave internet companies the flexibility to build their business models around user-generated content. As tech companies gained power, critics on the left and right increasingly attacked the law, which they cast as a get-out-of-jail-free card. But the Supreme Court was reluctant to resolve the heated debate. “We really don’t know about these things,” Justice Elena Kagan said during oral arguments. “You know, these are not like the nine greatest experts on the internet.”
Despite the justices’ reluctance to decide lofty cyber issues, there is a good chance that another internet law dispute will come before them in the next year. And this time, it will be difficult for them to avoid deciding the issue directly, with consequences for how the internet looks for decades to come.
The disputes involve two similar laws, from Texas and Florida, that restrict platforms from moderating certain speech and require transparency about user content policies. The Texas law, for example, states that large social media platforms “may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person” based on viewpoint or the user’s location. NetChoice, a group representing tech companies, has challenged both laws.
Last year, the US Court of Appeals for the Eleventh Circuit struck down Florida’s moderation restrictions. Judge Kevin Newsom wrote that platforms’ content moderation choices “constitute protected exercises of editorial judgment,” so the law likely violates the First Amendment. But later that year, the US Court of Appeals for the Fifth Circuit upheld the Texas law. “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say,” Judge Andrew Oldham wrote.
The Florida and Texas laws are not identical, but it is impossible to reconcile the courts’ opinions. In the Eleventh Circuit, tech companies have a First Amendment right to moderate user content as they see fit. In the Fifth Circuit, they do not. Lawyers refer to this problem—having different legal rules depending on what part of the country you’re in—as a “circuit split.” And a circuit split is particularly problematic for issues involving the internet, which reaches across state borders.
The Supreme Court receives more than 7,000 requests to review lower court decisions each year, and typically grants fewer than 1 percent of them. But the chances of the Supreme Court reviewing the NetChoice cases are greater than those of an average dispute. A circuit split—particularly a high-profile one such as this—makes the Supreme Court more likely to take interest. Assuming that the court agrees to hear the cases, we could expect an opinion next June.
A Supreme Court opinion in the NetChoice cases, far more than Gonzalez v. Google, has the potential to upend the laissez-faire approach that courts have applied since the internet’s infancy. The NetChoice cases are about more than just liability in lawsuits; they will require the Supreme Court to decide whether online platforms have a First Amendment right to moderate user content.
Before the Fifth Circuit’s ruling in the Texas case, no court had ever allowed the government to force websites to publish speech. “If allowed to stand, the Fifth Circuit’s opinion will upend settled First Amendment jurisprudence and threaten to transform speech on the internet as we know it today,” NetChoice wrote.
Platforms should be free of any direct or indirect government restrictions on their ability to distribute constitutionally protected user-generated content, even if that content is distasteful or objectionable. But the platforms also should have the flexibility to set their own policies, free of government coercion, and create the environments they believe are best suited to their users. The free market—and not the government—should reward or punish these business decisions.
The outcome of the cases could reach far beyond content moderation disputes. NetChoice repeatedly relies on a 1997 Supreme Court decision, Reno v. ACLU, to argue that the Florida and Texas laws are unconstitutional. In Reno, the Supreme Court struck down a federal law that restricted the online transmission of indecent images. The federal government had argued that just as the government can restrict television stations from broadcasting indecent content, it also could limit such material on the nascent internet. But the Supreme Court disagreed. The internet, the Court wrote, is “a unique and wholly new medium of worldwide human communication.”
This conclusion led the justices to rule that the internet is not like broadcasting, and deserves the full scope of First Amendment protections. “As a matter of constitutional tradition, in the absence of evidence to the contrary, we presume that governmental regulation of the content of speech is more likely to interfere with the free exchange of ideas than to encourage it,” the Court wrote. “The interest in encouraging freedom of expression in a democratic society outweighs any theoretical but unproven benefit of censorship.”
But that was more than a quarter-century ago, when online platforms were not as central to everyday life and business. Big Tech back then was Prodigy, CompuServe, and AOL. The Supreme Court could use the NetChoice cases to rethink—and possibly limit—the hands-off approach to the internet that it articulated in Reno. Texas, for instance, argues that platforms should receive the less rigorous First Amendment protections that are afforded to cable companies.
Weakening Reno could open the door to more state attempts to force platforms to carry user content that the platforms believe violates their policies. But it would do more than that. If the Supreme Court were to conclude that First Amendment protections are weaker on the internet, it might allow other states to pass laws that make it more difficult for certain users to post, or that prohibit user content that would otherwise be constitutionally protected.
This is not hypothetical. In the past year, some states have passed laws that require all social media users to verify their ages, threatening the ability of people to post anonymously. Blue states have proposed bills to crack down on misinformation, hate speech, and algorithms. While these proposals could face substantial First Amendment obstacles under current precedent, they might be more likely to survive in a world without Reno.
Had the Supreme Court’s ruling in Gonzalez caused substantial harms, Congress could have stepped in and fixed the problems by amending Section 230. But because the First Amendment—and not a statute—is at the heart of the NetChoice cases, the Supreme Court’s ruling will be the final word. So it’s vital that the justices, regardless of whether they actually are the nine greatest experts on the internet, get this one right.