Section 230 of the Communications Decency Act continues to act as one of the strongest legal protections that social media companies have to avoid being saddled with crippling damage awards based on the misdeeds of their users.

The strong protections afforded by Section 230(c) were recently reaffirmed by Judge Caproni of the Southern District of New York in Herrick v. Grindr. The case involved a dispute between the social networking platform Grindr and an individual who was maliciously targeted through the platform by his former lover. For the unfamiliar, Grindr is a mobile app directed to gay and bisexual men that, using geolocation technology, helps them connect with other users who are located nearby.

Plaintiff Herrick alleged that his ex-boyfriend set up several fake profiles on Grindr that claimed to be him. Over a thousand users responded to the impersonating profiles. Herrick's ex-boyfriend, pretending to be Herrick, would then direct the men to Herrick's workplace and home. The ex-boyfriend, still posing as Herrick, would also tell these would-be suitors that Herrick had certain rape fantasies, that he would initially resist their overtures, and that they should attempt to overcome Herrick's initial refusals. The impersonating profiles were reported to Grindr (the app's operator), but Herrick claimed that Grindr did not respond, other than to send an automated message.

Herrick then sued Grindr, claiming that the company was liable to him because of the defective design of the app and the failure to police such conduct on the app. Specifically, Herrick alleged that the Grindr app lacked safety features that would prevent bad actors such as his former boyfriend from using the app to impersonate others. Herrick also claimed that Grindr had a duty to warn him and other users that it could not protect them from harassment stemming from impersonators.

Grindr moved to dismiss Herrick's suit under Section 230 of the Communications Decency Act (CDA). Section 230 provides that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." For the Section 230 safe harbor to apply, the defendant invoking it must establish each of the following: (1) it "is a provider . . . of an interactive computer service"; (2) the claim is based upon information provided by another information content provider; and (3) the claim would treat the defendant as the publisher or speaker of that information.

With respect to each of the numerous different theories of liability asserted by Herrick—other than the claim of copyright infringement for hosting his picture without his authorization—the court found that either Herrick failed to state a claim for relief or the claim was subject to Section 230 immunity.

Regarding the first prong of the Section 230 test, the court swiftly rejected Herrick's claim that Grindr is not an interactive computer service as defined in the CDA. The court held that it is a distinction without a difference that the Grindr service is accessed through a smartphone app rather than a website.

With respect to Herrick's products liability, negligent design and failure-to-warn claims, the court found that they were all predicated upon content provided by another user of the app, in this case Herrick's ex-boyfriend, thus satisfying the second prong of the Section 230 test. Any assistance that Grindr provided to the ex-boyfriend, including algorithmic filtering, aggregation and display functions, was "neutral assistance" that is available to good and bad actors on the app alike.

The court also found that the third prong of the Section 230 test was satisfied. For Herrick’s claims to be successful, they would each result in Grindr being held liable as the “publisher or speaker” of the impersonating profiles. The court noted that liability based upon the failure to incorporate adequate protections against impersonating or fake accounts is “just another way of asserting that Grindr is liable because it fails to police and remove impersonating content.”

Moreover, the court observed that decisions to include (or not) methods of removal of content are “editorial choices” that are one of many functions of being a publisher, as are the decisions to remove or not to remove any content at all. So, because choosing to remove content or to let it stay on an app is an editorial choice, finding Grindr liable based on its choice to let the impersonating profiles remain would be finding Grindr liable as if it were the publisher of that content.

The court further held that liability for failure to warn would require treating Grindr as the "publisher" of the impersonating profiles. The court noted that such a warning would be necessary only because Grindr does not remove content, and found that requiring Grindr to post a warning about the potential for impersonating profiles or harassment would be indistinguishable from requiring Grindr to review and supervise the content itself. Reviewing and supervising content is, the court noted, a traditional role of publishers. Because the theory underlying the failure-to-warn claims depended upon Grindr's decision not to review impersonating profiles before publishing them, which the court described as an editorial choice, liability would depend upon treating Grindr as the publisher of the third-party content.

In holding that Herrick failed to state a claim for failure to warn, the court distinguished the Ninth Circuit’s 2016 decision, Doe v. Internet Brands, Inc. In that case, an aspiring model posted information about herself on a networking website, ModelMayhem.com, that is directed to people in the modeling industry and hosted by the defendant. Two individuals found the model’s profile on the website, contacted the model through means other than the website, and arranged to meet with her in person, ostensibly for a modeling shoot. Upon meeting the model, the two men sexually assaulted her.

The court viewed Internet Brands’ holding as limited to instances in which the “duty to warn arises from something other than user-generated content.” In Internet Brands, the proposed warning was about bad actors who were using the website to select targets to sexually assault, but the men never posted their own profiles on the site. Also, the website operator had prior warning about the bad actors from a source external to the website, rather than from user-generated content uploaded to the site or its review of site-hosted content.

In contrast, here, the court noted, Herrick's proposed warnings would be about user-generated content and about Grindr's publishing functions and choices, including the choice not to take certain actions against impersonating content generated by users and the choice not to employ the most sophisticated impersonation detection capabilities. The court specifically declined to read Internet Brands to hold that an interactive computer service "could be required to publish a warning about the potential misuse of content posted to its site."

In addition to the claims for products liability, negligent design and failure to warn, the court also dismissed Herrick's claims for negligence, intentional infliction of emotional distress, negligent infliction of emotional distress, fraud, negligent misrepresentation, promissory estoppel and deceptive practices. While Herrick was granted leave to replead a copyright infringement claim based on allegations that Grindr hosted his photograph without his authorization, the court denied Herrick's request to replead any of the other claims.

When Congress enacted Section 230 of the CDA in 1996, it sought to provide protections that would permit online services to thrive without the threat of crippling civil liability for the bad acts of their users. More than 20 years after its passage, the statute has indisputably served that purpose. The array of social media and other online services and mobile apps available today could scarcely have been imagined in 1996, and they have transformed our society. It is also indisputable, however, that for all of the invaluable services now available to us online and through mobile apps, these same services can be seriously misused by wrongdoers. Providers of these services will want to study closely the Herrick and Internet Brands decisions and to keep an eye out for further guidance from the courts regarding the extent to which Section 230 does (Herrick) or does not (Internet Brands) shield providers from "failure to warn" claims.

*          *          *

For other Socially Aware blog posts regarding the CDA Section 230 safe harbor, please see the following: 2018: Predictions From Socially Aware’s Editors and Contributors; Snapchat Clocks Section 230 Win in Speed Filter Case; The Decline and Fall of Section 230?; In a Rough Year for CDA Section 230, Manchanda v. Google Provides Comfort to Website Operators; and Yelp Case Shows CDA §230 Still Has Teeth.