Section 230 Safe Harbor


2016 has been a tough year for a lot of reasons, most of which are outside the scope of this blog (though if you’d like to hear our thoughts about Bowie, Prince or Leonard Cohen, feel free to drop us a line). But one possible victim of this annus horribilis is well within the ambit of Socially Aware: Section 230 of the Communications Decency Act (CDA).

Often hailed as the law that gave us the modern Internet, CDA Section 230 provides website operators with immunity from liability for certain claims arising from third-party or user-generated content. The Electronic Frontier Foundation has called Section 230 “the most important law protecting Internet speech,” and companies including Google, Yelp and Facebook have benefited from the protections offered by the law, which was enacted 20 years ago.

But it’s not all sunshine and roses for Internet publishers and Section 230, particularly over the past 18 months. Plaintiffs are constantly looking for chinks in Section 230’s armor and, in an unusually large number of recent cases, courts have held that Section 230 did not apply, raising the question of whether the historical trend towards broadening the scope of Section 230 immunity may now be reversing. This article provides an overview of recent cases that seem to narrow the scope of Section 230.

As we noted in our recent post on the Ninth Circuit case Kimzey v. Yelp! Inc., in the right circumstances, Section 230 of the Communications Decency Act (CDA) still provides robust protection against liability for website operators despite the unusually large number of decisions this year seemingly narrowing the scope of the statute. Defendants notched another Section 230 win recently in Manchanda v. Google, a case in the Southern District of New York. The case began in May 2016 when Rahul Manchanda, an attorney, filed a complaint alleging that Google, Yahoo and Microsoft harmed his reputation by indexing certain websites that described him in negative terms.

Manchanda asserted various claims against the three defendants, including defamation, libel, slander, tortious interference with contract, breach of fiduciary duty, breach of the duty of loyalty, unfair trade practices, false advertising, unlawful trespass, civil RICO, unjust enrichment, intentional infliction of emotional distress, negligent infliction of emotional distress and trademark infringement. Manchanda sought injunctive relief requiring the defendants to “de-index or remove the offending websites from their search engines” in addition to damages.

The court made quick work of dismissing most of Manchanda’s claims on Section 230 grounds, emphasizing that the CDA “immunizes search engines from civil liability for reputational damage resulting from third-party content that they aggregate and republish.” The court went on to note that “[t]his immunity attaches regardless of the specific claim asserted against the search engine, so long as the claim arises from the publication or distribution of content produced by a third party and the alleged injury involves damage to a plaintiff’s reputation based on that content.”

"Unlike" on a screen. More>>

2016 has been a challenging year for Section 230 of the Communications Decency Act (CDA) and the website operators who depend on it for protection against liability stemming from user-generated content. An unusually large number of cases this year have resulted in decisions holding that the defendant website operators were not entitled to immunity under Section 230. For example, as we’ve discussed recently, in Hassell v. Bird, the California Court of Appeal held that Section 230 did not prevent the court from ordering Yelp to remove from its website allegedly defamatory reviews posted by users, even though Yelp was not a party to the underlying defamation suit.

We are working on an article surveying some of the recent cases holding that Section 230 did not apply. But in the meantime, it is important to remember that Section 230 remains a powerful shield against liability and that defendants continue to wield it successfully in many cases. The Ninth Circuit’s recent decision in Kimzey v. Yelp is one such case.

Kimzey arose from two reviews that user “Sarah K” posted in September 2011 about Douglas Kimzey’s locksmith business in the Seattle area. Sarah K’s reviews were scathing, rating Kimzey one out of five stars in Yelp’s multiple-choice star rating system. In all caps, she warned Yelpers that “THIS WAS BY FAR THE WORST EXPERIENCE I HAVE EVER ENCOUNTERED WITH A LOCKSMITH. DO NOT GO THROUGH THIS COMPANY . . . CALL THIS BUSINESS AT YOUR OWN RISK.”

"Unlike" on a screen. More>>

A recent California court decision involving Section 230 of the Communications Decency Act (CDA) is creating considerable concern among social media companies and other website operators.

As we’ve discussed in past blog posts, CDA Section 230 has played an essential role in the growth of the Internet by shielding website operators from defamation and other claims arising from content posted to their websites by others.

Under Section 230, a website operator is not “treated as the publisher or speaker of any information provided” by a user of that website; as a result, online businesses such as Facebook, Twitter and YouTube have been able to thrive despite hosting user-generated content on their platforms that may be false, deceptive or malicious and that, absent Section 230, might subject these and other Internet companies to crippling lawsuits.

Recently, however, the California Court of Appeal affirmed a lower court opinion that could significantly narrow the contours of Section 230 protection. After a law firm sued a former client for posting defamatory reviews on Yelp.com, the court not only ordered the former client to remove the reviews, but also ordered Yelp (which was not a party to the dispute) to remove them.

The case, Hassell v. Bird, began in 2013 when attorney Dawn Hassell sued former client Ava Bird regarding three negative reviews that Hassell claimed Bird had published on Yelp.com under different usernames. Hassell alleged that Bird had defamed her, and, after Bird failed to appear, the California trial court issued an order granting Hassell’s requested damages and injunctive relief.

In particular, the court ordered Bird to remove the offending posts, but Hassell further requested that the court require Yelp to remove the posts because Bird had not appeared in the case herself. The court agreed, entering a default judgment and ordering Yelp to remove the offending posts. (The trial court also ordered that any subsequent comments associated with Bird’s alleged usernames be removed, which the Court of Appeal struck down as an impermissible prior restraint.) Yelp challenged the order on a variety of grounds, including under Section 230.

The Court of Appeal held that the Section 230 safe harbor did not apply, and that Yelp could be forced to comply with the order. The court reasoned that the order requiring Yelp to remove the reviews did not impose any liability on Yelp; Yelp was not itself sued for defamation and had no damages exposure, so Yelp did not face liability as a speaker or publisher of third-party speech. Rather, citing California law that authorized a court to prevent the repetition of “statements that have been adjudged to be defamatory,” the court characterized the injunction as “simply” controlling “the perpetuation of judicially declared defamatory statements.” The court acknowledged that Yelp could face liability for failing to comply with the injunction, but that would be liability under the court’s contempt power, not liability as a speaker or publisher.

The Hassell case represents a significant setback for social media companies, bloggers and other website operators who rely on the Section 230 safe harbor to shield themselves from the misconduct of their users. While courts have previously held that a website operator may be liable for “contribut[ing] materially to the alleged illegality of the conduct”—such as StubHub.com allegedly suggesting and encouraging illegally high ticket resale prices—here, in contrast, there is no claim that Yelp contributed to or aided in the creation or publication of the defamatory reviews, beyond merely providing the platform on which such reviews were hosted.

Of particular concern for online businesses is that Hassell appears to create an end-run around Section 230 for plaintiffs who seek to have allegedly defamatory or false user-generated content removed from a website: sue the suspected posting party; if that party fails to appear, obtain a default judgment; and then, with the default judgment in hand, seek a court order requiring the hosting website to remove the objectionable post, as the plaintiff was able to do in the Hassell case.

Commentators have observed that Hassell is one of a growing number of recent decisions seeking to curtail the scope of Section 230. After two decades of expansive applications of Section 230, are we now on the verge of a judicial backlash against the law that has helped to fuel the remarkable success of the U.S. Internet industry?

 

*          *          *

For other Socially Aware blog posts regarding the CDA Section 230 safe harbor, please see the following: How to Protect Your Company’s Social Media Currency; Google AdWords Decision Highlights Contours of the CDA Section 230 Safe Harbor; and A Dirty Job: TheDirty.com Cases Show the Limits of CDA Section 230.

 

Here’s how Twitter is loosening up its 140-character limit.

The federal government will now check the social media history of prospective employees before granting them security clearance.

One expert says C-level executives shouldn’t entrust millennials with their companies’ social media feeds.

Federal court refuses to dismiss a lawsuit against Google for allegedly removing sites from its search engine results.

Before a larceny arrest, a “cry for help” on Facebook.

Do strong social media campaigns really beget successful brands, or is it the other way around?

A lawyer representing students suing Google is questioning the impartiality of the federal judge hearing the case because the judge was just hired by an unrelated tech giant.

The New York Times live streamed one of its pitch meetings on Facebook Live, and not everyone thinks it was a great idea.

Some social media marketing satire, courtesy of The Onion.

Today’s companies compete not only for dollars but also for likes, followers, views, tweets, comments and shares. “Social currency,” as some researchers call it, is becoming increasingly important, and companies are investing heavily in building their social media fan bases. In some cases, this commitment of time, money and resources has resulted in staggering success. Coca-Cola, for example, has amassed over 96 million likes on its Facebook page, and LEGO’s YouTube videos have been played over 2 billion times.

With such impressive statistics, there is no question that a company’s social media presence and the associated pages and profiles can be highly valuable business assets, providing an important means for disseminating content and connecting with customers. But how much control does a company really have over these social media assets? What recourse would be available if a social media platform decided to delete a company’s page or migrate its fans to another page?

The answer may be: not very much. Over the past few years, courts have repeatedly sided with social media platforms in cases challenging the platforms’ ability to delete or suspend accounts and to remove or relocate user content.

Legal Show-Downs on Social Media Take-Downs

In a recent California case, Lewis v. YouTube, LLC, YouTube removed plaintiff Jan Lewis’s account based on allegations that she had artificially inflated view counts in violation of YouTube’s Terms of Service. YouTube eventually restored Lewis’s account and videos, but not the view counts or comments that her videos had generated prior to the account’s suspension.

Lewis sued YouTube for breach of contract, alleging that YouTube had deprived her of her reasonable expectations under the Terms of Service that her channel would be maintained and would continue to reflect the same number of views and comments. She sought damages as well as specific performance to compel YouTube to restore her account to its original condition.

The court first held that Lewis could not show damages because the YouTube Terms of Service contained a limitation of liability provision that disclaimed liability for any omissions relating to content. The court also held that Lewis was not entitled to specific performance because there was nothing in the Terms of Service that required YouTube to maintain particular content or to display view counts or comments. Accordingly, the court affirmed dismissal of Lewis’s complaint.

In a similar case, Darnaa LLC v. Google, Inc., Darnaa, a singer, posted a music video on YouTube. Again due to allegations of view-count inflation, YouTube removed the video and relocated it to a different URL, disclosing on the original page that the video had been removed for violating YouTube’s Terms of Service. Darnaa sued for breach of the covenant of good faith and fair dealing, interference with prospective economic advantage and defamation. In an email submitted with the complaint, Darnaa’s agent explained that she had launched several large campaigns (each costing $250,000 to $300,000) to promote the video and that the original link was already embedded in thousands of websites and blogs. Darnaa sought damages as well as an injunction to prevent YouTube from removing the video or changing its URL.

The court dismissed all of Darnaa’s claims because YouTube’s Terms of Service require lawsuits to be filed within one year and Darnaa had filed her case too late. In its discussion, however, the court made several interesting points. In considering whether YouTube’s Terms of Service were unconscionable, the court held that, although the terms are by nature a “contract of adhesion,” the level of procedural unconscionability was slight, since the plaintiff could have publicized her videos on a different website. Further, in ruling that the terms were not substantively unconscionable, the court pointed out that “[b]ecause YouTube offers its hosting services free of charge, it is reasonable for YouTube to retain broad discretion over [its] services.”

Although the court ultimately dismissed Darnaa’s claims based on the failure to timely file the suit, the decision was not a complete victory for YouTube. The court granted leave to amend to give Darnaa the opportunity to plead facts showing that she was entitled to equitable tolling of the contractual limitations period. Therefore, the court went on to consider whether Darnaa’s allegations were sufficient to state a claim. Among other things, the court held that YouTube’s Terms of Service were ambiguous regarding the platform’s rights to remove and relocate user videos in its sole discretion. Thus, the court further held that if Darnaa were able to amend the complaint to avoid the consequences of the failure to timely file, then the complaint would be sufficient to state a claim for breach of the contractual covenant of good faith and fair dealing.


Positive I.D. The tech world recently took a giant step forward in the quest to create computers that accurately mimic human sensory and thought processes, thanks to Fei-Fei Li and Andrej Karpathy of the Stanford Artificial Intelligence Laboratory. The pair developed a program that identifies not just the subjects of a photo, but the action taking place in the image. Called NeuralTalk, the software captioned a picture of a man in a black shirt playing guitar, for example, as “man in black shirt is playing guitar,” according to The Verge. The program isn’t perfect, the publication reports, but it’s often correct and is sometimes “unnervingly accurate.” Potential applications for artificial “neural networks” like Li’s obviously include giving users the ability to search, using natural language, through image repositories both public and private (think “photo of Bobby getting his diploma at Yale”). But the technology could also be used in potentially life-saving ways, such as in cars that can warn drivers of hazards like potholes. And, of course, such neural networks would be incredibly valuable to marketers, allowing them to identify potential consumers of, say, sports equipment by searching through photos posted to social media for people using products in that category. As we discussed in a recent blog post, the explosive growth of the Internet of Things, wearables, big data analytics and other hot new technologies is being fueled at least in part by marketing uses—are artificial neural networks the next big thing to be embraced by marketers?

Cruel intentions. Laws seeking to regulate speech on the Internet must be narrowly drafted to avoid running afoul of the First Amendment, and limiting such a law’s applicability to intentional attempts to cause damage usually improves the law’s odds of meeting that requirement. Illustrating the importance of intent in free speech cases, an anti-revenge-porn law in Arizona was recently scrapped, in part because it applied to people who posted nude photos to the Internet irrespective of the poster’s intent. Now, the North Carolina Court of Appeals has held that an anti-cyberbullying law is constitutional because it, among other things, prohibits only posts to online networks that are made with “the intent to intimidate or torment a minor.” The court issued the holding in a lawsuit brought by a 19-year-old who was placed on 48 months’ probation and ordered to stay off social media websites for a year for having contributed to abusive social media posts that targeted one of his classmates. The teen’s suit alleged that the law he was convicted of violating, N.C. Gen. Stat. §14-458.1, is overbroad and unconstitutional. Upholding his conviction, the North Carolina Court of Appeals held, “It was not the content of Defendant’s Facebook comments that led to his conviction of cyberbullying. Rather, his specific intent to use those comments and the Internet as instrumentalities to intimidate or torment [a student] resulted in a jury finding him guilty under the Cyberbullying Statute.”

A dish best served cold. Restaurants and other service providers are often without effective legal recourse against Yelp and other “user review” websites when they’re faced with negative—even defamatory—online reviews, because Section 230 of the Communications Decency Act (CDA), 47 U.S.C. § 230, insulates website operators from liability for content created by users (though there are, of course, exceptions). That didn’t stop the owner of KC’s Rib Shack in Manchester, New Hampshire, from exacting revenge, however, when an attendee of a 20-person birthday celebration at his restaurant wrote a scathing review on Yelp and Facebook admonishing the owner for approaching the party’s table “and very RUDELY [telling the diners] to keep quiet [since] others were trying to eat.” The review included “#boycott” and some expletives. In response, the restaurant’s owner, Kevin Cornish, replied to the self-identified disgruntled diner’s rant with his own review—of her singing. Cornish reminded the review writer that his establishment is “a family restaurant, not a bar,” and wrote, “I realize you felt as though everybody in the entire restaurant was rejoicing in the painful rendition of Bohemian Rhapsody you and your self-entitled friends were performing, yet that was not the case.” He encouraged her to continue her “social media crusade,” including the hashtag #IDon’tNeedInconsiderateCustomers. Cornish’s retort has so far garnered close to 4,000 Facebook likes and has been shared on Facebook more than 400 times.

In 2012, we reported on a pair of district court decisions that, based on similar facts, split on whether defendant TheDirty.com, a gossip website, qualified for immunity under Section 230 of the Communications Decency Act (CDA), the 1996 law that states “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Courts have generally held that Section 230 precludes defamation suits against website operators for content that their users create and post.

TheDirty.com—which claims 22 million monthly unique visitors—invites users to “submit dirt” about themselves or others via a submission form requesting the basics of the “dirt,” with fields for “what’s happening” and “who, what, when, where, why,” and a link for users to upload photographs. Website operator Nik Richie then reposts the content, sometimes adding his own comments. Unsurprisingly, unhappy subjects of the gossip postings have sued Richie and his company on numerous occasions.

In one case, Jones v. Dirty World Entertainment Recordings, LLC in the Eastern District of Kentucky, former teacher and Cincinnati Bengals cheerleader Sarah Jones brought defamation and other state law claims related to two posts showing her photo and stating that she had sex with players and contracted sexually transmitted diseases. In 2011, Richie moved for judgment as a matter of law on grounds that Section 230 gave him immunity as the “provider of an interactive computer service” because, he argued, the defamatory content originated with a user of the site and not Richie, though he had added his own comments. The court denied the motion, citing “the very name of the site, the manner in which it is managed, and the personal comments of defendant Richie” as leading to its conclusion that Richie “specifically encouraged development of what is offensive about the content” and thereby lost immunity under Section 230. The court noted that Richie made comments addressed directly to Jones, including that he “love[d] how the Dirty Army [Richie’s term for the site’s users] ha[d] a war mentality,” a comment that the court held encouraged the posting of offensive content.

After a mistrial in February 2013, Richie moved for summary judgment, asking the court to reconsider its ruling that he failed to qualify for CDA immunity. He noted that “since the CDA was first enacted in 1996, there have been approximately 300 reported decisions addressing immunity claims” (a statistic set forth in Hill v. StubHub) but that his was the only one ever to go to trial, even though, Richie argued, other cases involved worse facts and clearer damage to the plaintiff. Richie also discussed in detail the Western District of Missouri opinion we reported on last year that granted summary judgment to Richie on CDA immunity grounds, explicitly disagreeing with the Jones court’s initial ruling. The court was not convinced, denying the motion simply “for the reasons set forth in the Court’s previous opinion.”

The case went to trial on July 8, 2013. The jury deliberated for more than ten hours and homed in on the key issue: in a note to the judge, the jury “request[ed] the evidence presented to the court detailing screenshots of how one submits a post to website TheDirty.com.” The jury, it seems, was asking for information to help it consider whether Richie and the site “encouraged the development of what is offensive”—the standard in the Sixth Circuit, of which the Eastern District of Kentucky is a part—about the ensuing posts about Jones. The jury awarded Jones $38,000 in actual damages and $300,000 in punitive damages.

Search Engine Watch, a respected analyst of the Internet industry, predicted that “[t]he success of this lawsuit is going to open a flood of new lawsuits against The Dirty and other sites like it that host third-party content” and noted that the case was good for the online reputation management industry—companies that provide services to help individuals manage what is said about them online—because the threat of suit would make website operators more responsive to requests to remove user-generated content.

From the courthouse steps, a tearful Jones said the jury got it right, and Richie’s attorney promised an immediate appeal. See video here. A few days later, Richie filed his appeal to the Sixth Circuit. We will keep you posted on the result.

In the latest issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we look at recent First Amendment, intellectual property, labor and privacy law developments affecting corporate users of social media and the Internet. We also recap major events from 2012 that have had a substantial impact on social media law, and we take a look at some of the big numbers racked up by social media companies over the past year.

To read the latest issue of our newsletter, click here.

For an archive of previous issues of Socially Aware, click here.

In a string of cases against Google, approximately 20 separate plaintiffs have claimed that, through advertisements on its AdWords service, Google engaged in trademark infringement. These claims have been based on Google allowing its advertisers to use their competitors’ trademarks in Google-generated online advertisements. In a recent decision emerging from these cases, CYBERsitter v. Google, the U.S. District Court for the Central District of California found that Section 230 of the Communications Decency Act (CDA) provides protection for Google against some of the plaintiff’s state law claims.

As we have discussed previously (see here and here), Section 230 states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The Section 230 safe harbor immunizes websites from liability for content created by users, as long as the website did not “materially contribute” to the development or creation of the content. An important limitation on this safe harbor, however, is that it shall not “be construed to limit or expand any law pertaining to intellectual property.”

In the CYBERsitter case, plaintiff CYBERsitter, which sells an Internet content-filtering program, sued Google for selling to ContentWatch, one of CYBERsitter’s competitors, advertisements incorporating the CYBERsitter trademark, and for displaying those advertisements. CYBERsitter’s complaint alleged that Google had violated numerous federal and California laws by, first, selling the right to use CYBERsitter’s trademark to ContentWatch and, second, permitting and encouraging ContentWatch to use the CYBERsitter mark in Google’s AdWords advertising. Specifically, CYBERsitter’s complaint included the following claims: trademark infringement, contributory trademark infringement, false advertising, unfair competition and unjust enrichment.

Google filed a motion to dismiss, arguing that Section 230 of the CDA shielded it from liability for CYBERsitter’s state law claims. The court agreed with Google as to the state law claims of trademark infringement, contributory trademark infringement, unfair competition and unjust enrichment, but only to the extent that these claims sought to hold Google liable for the infringing content of the advertisements. The court, however, did not discuss the apparent inapplicability of the Section 230 safe harbor to trademark claims. As noted above, Section 230 does not apply to intellectual property claims and, even though trademarks are a form of intellectual property, the court applied Section 230 without further comment. The court could do so because the Ninth Circuit has held that the term “intellectual property” in Section 230 of the CDA refers only to federal intellectual property law, and that state intellectual property law claims are therefore not excluded from the safe harbor. The Ninth Circuit, however, appears to be an outlier in this interpretation; decisions from other circuit courts suggest disagreement with the Ninth Circuit’s approach, and district courts outside the Ninth Circuit have not followed its lead.

Google was not let off the hook entirely with regard to the plaintiff’s state trademark law claims. In dismissing the trademark infringement and contributory trademark infringement claims, the court distinguished between Google’s liability for the content of the advertisements and its liability for its potentially tortious conduct unrelated to the content of the advertisements. The court refused to dismiss these claims to the extent they sought to hold Google liable for selling to third parties the right to use CYBERsitter’s trademark, and for encouraging and facilitating third parties to use CYBERsitter’s trademark, without CYBERsitter’s authorization. Because such action by Google has nothing to do with the online content of the advertisements, the court held that Section 230 is inapplicable.

The court also found that CYBERsitter’s false advertising claim was not barred by Section 230 because Google may have “materially contributed” to the content of the advertisements and, therefore, would have been an “information content provider” under Section 230 and not immune from liability. Prof. Eric Goldman, who blogs frequently on CDA-related matters, has pointed out an apparent inconsistency in the CYBERsitter court’s reasoning: the court treated Google as not having materially contributed to the content of the advertisements for purposes of the trademark infringement, contributory infringement, unfair competition and unjust enrichment claims, yet as possibly having done so for purposes of the false advertising claim.

CYBERsitter highlights at least two key points for website operators, bloggers, and other providers of interactive computer services. First, at least in the Ninth Circuit, but not necessarily in other circuits, the Section 230 safe harbor provides protection from state intellectual property law claims with regard to user-generated content. Second, to be protected under the Section 230 safe harbor, the service provider must not have created the content and it must not have materially contributed to such content’s creation.