On October 3, 2022, the United States Supreme Court granted certiorari in Gonzalez v. Google LLC, No. 21-1333, to address the scope of Section 230 of the Communications Decency Act. The Court will consider whether Section 230(c)(1) immunizes website operators and other online service providers when they use algorithms to recommend content to users, specifically terrorism-related videos posted on YouTube by other users of the video-sharing service.
Section 230(c)(1) is famously concise, having been referred to as “the twenty-six words that created the internet.” It states simply: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Since its passage over 25 years ago, courts across the country have often construed the law to provide broad immunity to online service providers that host user-generated content. One example is Malwarebytes, Inc. v. Enigma Software Group USA, LLC, which stated: “[M]any courts have construed the law broadly to confer sweeping immunity on some of the largest companies in the world.” Section 230 has facilitated the growth of the modern interactive Web by protecting companies that host user-generated content from potential liability. But the law has come under intense scrutiny in recent years, with both courts and legislatures chipping away at this important safe harbor.
I. Section 230 Immunity
In 1997, when Section 230 was less than a year old, the Fourth Circuit held in Zeran v. America Online that Section 230 provided online service providers with broad immunity from liability arising from content posted by users. Writing for the court, Judge Wilkinson looked to the congressional purpose behind the statute, characterizing Section 230 as a shield for free speech online. As dozens of state and federal courts around the country adopted Zeran’s holding, this expansive interpretation of Section 230 took hold.
In the following decades, courts expanded these protections even further, holding that the Section 230 safe harbor applies regardless of how the plaintiff characterizes the claim, so long as the claim treats the online service provider as the publisher or speaker of content provided by someone else (typically another user). Courts even held that Section 230 protects users who repost other users’ content, and social media platforms that facilitate communications between users. In other cases, however, courts have found exceptions to the Section 230 safe harbor*, particularly where an online service provider is more actively involved in developing the content at issue or where the plaintiff’s claim alleges activity that goes beyond the traditional activities of a publisher. (*We’ve provided a summary of these cases at the end of this post.)
This trend towards courts narrowing and finding exceptions to the historically broad Section 230 immunity has accelerated in recent years, and Section 230 has also been the target of various legislative attempts to limit its scope at both the federal and state levels. For years, however, the Supreme Court has remained above the fray, routinely denying certiorari without comment when parties sought Supreme Court review in Section 230 cases.
Two years ago, in a statement accompanying the Court’s decision not to review a different case involving the scope of liability under Section 230, Justice Thomas wrote that “in an appropriate case, we should consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by Internet platforms.” Justice Thomas repeated his call to review Section 230 earlier this year.
Now some 25 years since its passage, the Court has for the first time decided to take up the important question of how broadly the protections afforded by Section 230 apply.
II. Gonzalez v. Google
The family of Nohemi Gonzalez sued Google under the Anti-Terrorism Act, alleging that, by recommending ISIS recruitment videos through its YouTube algorithm, Google could be held accountable for Nohemi’s death during an ISIS attack in Paris in November 2015.
The family first brought suit in the Northern District of California, alleging that Google was liable for Nohemi’s death under the Anti-Terrorism Act, and that through its ownership and operation of YouTube, Google knowingly provided material support to ISIS in the form of its YouTube platform. In their amended complaint, the plaintiffs alleged that YouTube assisted ISIS in spreading its message by recommending ISIS videos to users using an algorithm designed to introduce users to content they might be interested in based on their “account information and characteristics.” On August 15, 2018, the lower court in the Northern District of California granted Google’s motion to dismiss, holding that the claims were barred by Section 230, and that algorithmic recommendations of ISIS content did not render Google an “information content provider.”
Gonzalez’s family appealed to the Ninth Circuit, which affirmed the lower court’s holding that plaintiffs’ claims were mostly barred by Section 230. The court rejected plaintiffs’ argument that the Anti-Terrorism Act imposes on Google a duty not to support terrorists, and agreed with Google’s argument that the thrust of plaintiffs’ claims was that Google did not do enough to block or remove content, which necessarily required the court to treat Google as a publisher. Throughout the opinion, the court expounded upon the focus and goals of Congress in enacting Section 230. Gonzalez v. Google LLC, 2 F.4th 871, 896–97 (9th Cir. 2021).
In their petition to the Supreme Court, the plaintiffs accepted that Section 230 protects Google when ISIS posts videos on YouTube, and focused instead on YouTube’s recommendations of the videos at issue. They argued that major tech companies vie for users’ attention and, in order to increase the time that people spend on their websites, provide algorithmic recommendations of material that may interest users. These recommendations, the plaintiffs contended, move beyond “traditional editorial functions.” The plaintiffs argued that application of Section 230 to these recommendations would “remove all civil liability incentives for interactive computer services to eschew recommending ... harmful materials, and den[y] redress to victims who could have shown that those recommendations had caused their injuries.” The crux of the plaintiffs’ legal argument is that recent decisions granting Section 230 protections for recommendations were wrongly decided, and that the Court should look to the text of the statute to return to a narrower interpretation of its protections.
In its opposition, Google emphasized that both circuit courts that had considered content recommendations had determined that they were protected by Section 230, and that the Supreme Court had denied certiorari in both cases. It went on to argue that, if YouTube recommendations are not shielded from legal liability, “Section 230 would be a dead letter.” It noted that every publication provides recommendations to users simply through prompts like “read on” or “click here,” and that these are protected as typical publisher functions.
The Supreme Court also agreed to hear a companion case, decided in the same Ninth Circuit opinion, that seeks to hold a number of social media companies liable for extremist content published on their platforms following terrorist attacks, although the questions presented in that case do not address Section 230.
III. Pending Supreme Court Review
Gonzalez presents a narrow but important question, asking whether recommendations provided by online platforms qualify for Section 230 immunity. By deciding to hear this case, the Court has shown for the first time that it is ready to make a decision regarding the scope of Section 230, at a time when the statute has never been more relevant, or more controversial. A decision in this case is expected before the Court breaks for its summer recess in July 2023.
* Additional examples include:
- Fair Housing Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157 (9th Cir. 2008), holding that Section 230 did not apply where a questionnaire on the defendant’s website asked users to submit discriminatory responses;
- F.T.C. v. Accusearch Inc., 570 F.3d 1187 (10th Cir. 2009), holding that Section 230 did not apply when the service provider was itself an “information content provider” with respect to disclosure of protected materials;
- Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009), as amended (Sept. 28, 2009), holding that Section 230 did not apply when the service provider agreed to remove content but failed to do so; and
- Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016), holding that Section 230 did not apply when a website provider had knowledge of criminal activity and negligently failed to warn users of a known risk.