Section 230 Safe Harbor

The latest issue of our Socially Aware newsletter is now available here.

In this edition, we explore the threat to U.S. jobs posed by rapid advances in emerging technologies; we examine a Federal Trade Commission report on how companies engaging in cross-device tracking can stay on the right side of the law; we take a look at a Second Circuit opinion that fleshes out the “repeat infringer” requirement online service providers must fulfill to qualify for the Digital Millennium Copyright Act’s safe harbors; we discuss a state court decision holding that Section 230 of the Communications Decency Act immunizes Snapchat from liability for a car wreck that was allegedly caused by the app’s “speed filter” feature; we describe a recent decision by the District Court of The Hague confirming that an app provider could be subject to the privacy laws of a country in the European Union merely by making its app available on mobile phones in that country; and we review a federal district court order requiring Google to comply with search warrants for foreign-stored user data.

All this—plus an infographic illustrating how emerging technology will threaten U.S. jobs.

Read our newsletter.

The latest issue of our Socially Aware newsletter is now available here.

In this edition, we examine a spate of court decisions that appear to rein in the historically broad scope of the Communications Decency Act’s Section 230 safe harbor for website operators; we outline ten steps companies can take to be better prepared for a security breach incident; we describe the implications of the Second Circuit’s recent opinion in Microsoft v. United States regarding the U.S. government’s efforts to require Microsoft to produce email messages stored outside the country; we explore the EU’s draft regulation prohibiting geo-blocking; and we take a look at UK consumer protection regulators’ efforts to combat undisclosed endorsements on social media.

All this—plus an infographic highlighting the most popular social-media-post topics in 2016.

Read our newsletter.

We have been monitoring a trend of cases narrowing the immunity provided to website operators under Section 230 of the Communications Decency Act (CDA). A recent decision by a state court in Georgia, however, demonstrates that Section 230 continues to be applied expansively in at least some cases.

The case, Maynard v. McGee, arose from an automobile collision in Clayton County, Georgia. Christal McGee, the defendant, had allegedly been using Snapchat’s “speed filter” feature, which tracks a car’s speed in real time and superimposes the speed on a mobile phone’s camera view. According to the plaintiffs, one of whom had been injured in the collision, McGee was using the speed filter when the accident occurred, with the intention of posting a video on Snapchat showing how fast she was driving. The plaintiffs sued McGee and Snapchat for negligence, and Snapchat moved to dismiss based on the immunity provided by Section 230.

The plaintiffs alleged that Snapchat was negligent because it knew its users would use the speed filter “in a manner that might distract them from obeying traffic or safety laws” and that “users might put themselves or others in harm’s way in order to capture a Snap.” To demonstrate that Snapchat had knowledge, the plaintiffs pointed to a previous automobile collision that also involved the use of Snapchat’s speed filter. The plaintiffs claimed that “[d]espite Snapchat’s actual knowledge of the danger from using its product’s speed filter while driving at excessive speeds, Snapchat did not remove or restrict access to the speed filter.”
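Snapchat has never published the filter’s internals, but the mechanics the plaintiffs describe are straightforward: sample the phone’s GPS, compute the distance covered between successive fixes, and divide by the elapsed time. Purely as an illustration—the coordinates, names and conversion factor below are our own, not Snapchat’s code—a minimal Python sketch of that calculation might look like this:

```python
# Illustrative only: Snapchat's actual speed filter implementation is
# proprietary and not public. Conceptually, an overlay like the one the
# plaintiffs describe needs just two timestamped GPS fixes: speed is the
# distance between them divided by the elapsed time.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


@dataclass
class GpsFix:
    lat: float        # latitude in degrees
    lon: float        # longitude in degrees
    timestamp: float  # seconds since some epoch


def haversine_m(a: GpsFix, b: GpsFix) -> float:
    """Great-circle distance between two fixes, in meters."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(h))


def speed_mph(prev: GpsFix, curr: GpsFix) -> float:
    """Average speed between two fixes, converted from m/s to mph."""
    meters = haversine_m(prev, curr)
    seconds = curr.timestamp - prev.timestamp
    return (meters / seconds) * 2.23694  # 1 m/s = 2.23694 mph


# Two hypothetical fixes taken one second apart (made-up coordinates):
prev = GpsFix(lat=33.5400, lon=-84.3500, timestamp=0.0)
curr = GpsFix(lat=33.5404, lon=-84.3500, timestamp=1.0)
print(f"{speed_mph(prev, curr):.0f} MPH")  # the figure an overlay would superimpose
```

A production implementation would smooth over noisy fixes and use the platform’s own speed reading where available, but the core arithmetic is no more complicated than this.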

2016 has been a tough year for a lot of reasons, most of which are outside the scope of this blog (though if you’d like to hear our thoughts about Bowie, Prince or Leonard Cohen, feel free to drop us a line). But one possible victim of this annus horribilis is well within the ambit of Socially Aware: Section 230 of the Communications Decency Act (CDA).

Often hailed as the law that gave us the modern Internet, CDA Section 230 provides website operators with immunity from liability for certain claims arising from third-party or user-generated content. The Electronic Frontier Foundation has called Section 230 “the most important law protecting Internet speech,” and companies including Google, Yelp and Facebook have benefited from the protections offered by the law, which was enacted 20 years ago.

But it’s not all sunshine and roses for Internet publishers and Section 230, particularly over the past 18 months. Plaintiffs are constantly looking for chinks in Section 230’s armor and, in an unusually large number of recent cases, courts have held that Section 230 did not apply, raising the question of whether the historical trend towards broadening the scope of Section 230 immunity may now be reversing. This article provides an overview of recent cases that seem to narrow the scope of Section 230.

As we noted in our recent post on the Ninth Circuit case Kimzey v. Yelp! Inc., in the right circumstances, Section 230 of the Communications Decency Act (CDA) still provides robust protection against liability for website operators despite the unusually large number of decisions this year seemingly narrowing the scope of the statute. Defendants notched another Section 230 win recently in Manchanda v. Google, a case in the Southern District of New York. The case began in May 2016 when Rahul Manchanda, an attorney, filed a complaint alleging that Google, Yahoo and Microsoft harmed his reputation by indexing certain websites that described him in negative terms.

Manchanda asserted various claims against the three defendants, including defamation, libel, slander, tortious interference with contract, breach of fiduciary duty, breach of the duty of loyalty, unfair trade practices, false advertising, unlawful trespass, civil RICO, unjust enrichment, intentional infliction of emotional distress, negligent infliction of emotional distress and trademark infringement. Manchanda sought injunctive relief requiring the defendants to “de-index or remove the offending websites from their search engines” in addition to damages.

The court made quick work of dismissing most of Manchanda’s claims on Section 230 grounds, emphasizing that the CDA “immunizes search engines from civil liability for reputational damage resulting from third-party content that they aggregate and republish.” The court went on to note that “[t]his immunity attaches regardless of the specific claim asserted against the search engine, so long as the claim arises from the publication or distribution of content produced by a third party and the alleged injury involves damage to a plaintiff’s reputation based on that content.”

"Unlike" on a screen. More>>

2016 has been a challenging year for Section 230 of the Communications Decency Act (CDA) and the website operators who depend on it for protection against liability stemming from user-generated content. An unusually large number of cases this year have resulted in decisions holding that the defendant website operators were not entitled to immunity under Section 230. For example, as we’ve discussed recently, in Hassell v. Bird, the California Court of Appeal held that Section 230 did not prevent the court from ordering Yelp to remove from its website allegedly defamatory reviews posted by users, even though Yelp was not a party in the underlying defamation suit.

We are working on an article surveying some of the recent cases holding that Section 230 did not apply. But in the meantime, it is important to remember that Section 230 remains a powerful shield against liability and that defendants continue to wield it successfully in many cases. The Ninth Circuit’s recent decision in Kimzey v. Yelp is one such case.

Kimzey arose from two reviews that user “Sarah K” posted in September 2011 about Douglas Kimzey’s locksmith business in the Seattle area. Sarah K’s reviews were extremely negative and rated Kimzey one out of five stars in Yelp’s multiple-choice star rating system. In all caps, she warned Yelpers that “THIS WAS BY FAR THE WORST EXPERIENCE I HAVE EVER ENCOUNTERED WITH A LOCKSMITH. DO NOT GO THROUGH THIS COMPANY . . . CALL THIS BUSINESS AT YOUR OWN RISK.”

"Unlike" on a screen. More>>

A recent California court decision involving Section 230 of the Communications Decency Act (CDA) is creating considerable concern among social media companies and other website operators.

As we’ve discussed in past blog posts, CDA Section 230 has played an essential role in the growth of the Internet by shielding website operators from defamation and other claims arising from content posted to their websites by others.

Under Section 230, a website operator is not “treated as the publisher or speaker of any information provided” by a user of that website; as a result, online businesses such as Facebook, Twitter and YouTube have been able to thrive despite hosting user-generated content on their platforms that may be false, deceptive or malicious and that, absent Section 230, might subject these and other Internet companies to crippling lawsuits.

Recently, however, the California Court of Appeal affirmed a lower court opinion that could significantly narrow the contours of Section 230 protection. After a law firm sued a former client for posting defamatory reviews on Yelp.com, the court not only ordered the former client to remove the reviews, but demanded that Yelp (which was not a party to the dispute) remove these reviews.

The case, Hassell v. Bird, began in 2013 when attorney Dawn Hassell sued former client Ava Bird regarding three negative reviews that Hassell claimed Bird had published on Yelp.com under different usernames. Hassell alleged that Bird had defamed her, and, after Bird failed to appear, the California trial court issued an order granting Hassell’s requested damages and injunctive relief.

In particular, the court ordered Bird to remove the offending posts, but Hassell further requested that the court require Yelp to remove the posts because Bird had not appeared in the case herself. The court agreed, entering a default judgment and ordering Yelp to remove the offending posts. (The trial court also ordered that any subsequent comments associated with Bird’s alleged usernames be removed, which the Court of Appeal struck down as an impermissible prior restraint.) Yelp challenged the order on a variety of grounds, including under Section 230.

The Court of Appeal held that the Section 230 safe harbor did not apply, and that Yelp could be forced to comply with the order. The court reasoned that the order requiring Yelp to remove the reviews did not impose any liability on Yelp; Yelp was not itself sued for defamation and had no damages exposure, so Yelp did not face liability as a speaker or publisher of third-party speech. Rather, citing California law that authorized a court to prevent the repetition of “statements that have been adjudged to be defamatory,” the court characterized the injunction as “simply” controlling “the perpetuation of judicially declared defamatory statements.” The court acknowledged that Yelp could face liability for failing to comply with the injunction, but that would be liability under the court’s contempt power, not liability as a speaker or publisher.

The Hassell case represents a significant setback for social media companies, bloggers and other website operators who rely on the Section 230 safe harbor to shield themselves from the misconduct of their users. While courts have previously held that a website operator may be liable for “contribut[ing] materially to the alleged illegality of the conduct”—such as StubHub.com allegedly suggesting and encouraging illegally high ticket resale prices—here, in contrast, there is no claim that Yelp contributed to or aided in the creation or publication of the defamatory reviews, beyond merely providing the platform on which such reviews were hosted.

Of particular concern for online businesses is that Hassell appears to create an end-run around Section 230 for plaintiffs who seek to have allegedly defamatory or false user-generated content removed from a website—sue the suspected posting party and, if that party fails to appear, obtain a default judgment; with a default judgment in hand, seek a court order requiring the hosting website to remove the objectionable post, as the plaintiff was able to do in the Hassell case.

Commentators have observed that Hassell is one of a growing number of recent decisions seeking to curtail the scope of Section 230. After two decades of expansive applications of Section 230, are we now on the verge of a judicial backlash against the law that has helped to fuel the remarkable success of the U.S. Internet industry?

 

*          *          *

For other Socially Aware blog posts regarding the CDA Section 230 safe harbor, please see the following: How to Protect Your Company’s Social Media Currency; Google AdWords Decision Highlights Contours of the CDA Section 230 Safe Harbor; and A Dirty Job: TheDirty.com Cases Show the Limits of CDA Section 230.

 

Here’s how Twitter is loosening up its 140-character limit.

The federal government will now check the social media history of prospective employees before granting them security clearance.

One expert says C-level executives shouldn’t entrust millennials with their companies’ social media feeds.

Federal court refuses to dismiss a lawsuit against Google for allegedly removing sites from its search engine results.

Before a larceny arrest, a “cry for help” on Facebook.

Do strong social media campaigns really beget successful brands, or is it the other way around?

A lawyer representing students suing Google is questioning the impartiality of the federal judge hearing the case because the judge was just hired by an unrelated tech giant.

The New York Times live streamed one of its pitch meetings on Facebook Live, and not everyone thinks it was a great idea.

Some social media marketing satire, courtesy of The Onion.

Today’s companies compete not only for dollars but also for likes, followers, views, tweets, comments and shares. “Social currency,” as some researchers call it, is becoming increasingly important, and companies are investing heavily in building their social media fan bases. In some cases, this commitment of time, money and resources has resulted in staggering success. Coca-Cola, for example, has amassed over 96 million likes on its Facebook page and LEGO’s YouTube videos have been played over 2 billion times.

With such impressive statistics, there is no question that a company’s social media presence and the associated pages and profiles can be highly valuable business assets, providing an important means for disseminating content and connecting with customers. But how much control does a company really have over these social media assets? What recourse would be available if a social media platform decided to delete a company’s page or migrate its fans to another page?

The answer may be not very much. Over the past few years, courts have repeatedly found in favor of social media platforms in a number of cases challenging the platforms’ ability to delete or suspend accounts and to remove or relocate user content.

Legal Show-Downs on Social Media Take-Downs

In a recent California case, Lewis v. YouTube, LLC, YouTube removed the account of plaintiff Jan Lewis based on allegations that she had artificially inflated view counts in violation of YouTube’s Terms of Service. YouTube eventually restored Lewis’s account and videos, but not the view counts or comments that her videos had generated prior to the account’s suspension.

Lewis sued YouTube for breach of contract, alleging that YouTube had deprived her of her reasonable expectations under the Terms of Service that her channel would be maintained and would continue to reflect the same number of views and comments. She sought damages as well as specific performance to compel YouTube to restore her account to its original condition.

The court first held that Lewis could not show damages because the YouTube Terms of Service contained a limitation of liability provision that disclaimed liability for any omissions relating to content. The court also held that Lewis was not entitled to specific performance because there was nothing in the Terms of Service that required YouTube to maintain particular content or to display view counts or comments. Accordingly, the court affirmed dismissal of Lewis’s complaint.

In a similar case, Darnaa LLC v. Google, Inc., Darnaa, a singer, posted a music video on YouTube. Again, due to allegations of view count inflation, YouTube removed the video and relocated it to a different URL, disclosing on the original page that the video had been removed for violating its Terms of Service. Darnaa sued for breach of the covenant of good faith and fair dealing, interference with prospective economic advantage and defamation. In an email submitted with the complaint, Darnaa’s agent explained that she had launched several large campaigns (each costing $250,000 to $300,000) to promote the video and that the original link was already embedded in thousands of websites and blogs. Darnaa sought damages as well as an injunction to prevent YouTube from removing the video or changing its URL.

The court dismissed all of Darnaa’s claims because YouTube’s Terms of Service require lawsuits to be filed within one year and Darnaa had filed her case too late. In its discussion, however, the court made several interesting points. In considering whether YouTube’s Terms of Service were unconscionable, the court held that, although the terms are by nature a “contract of adhesion,” the level of procedural unconscionability was slight, since the plaintiff could have publicized her videos on a different website. Further, in ruling that the terms were not substantively unconscionable, the court pointed out that “[b]ecause YouTube offers its hosting services free of charge, it is reasonable for YouTube to retain broad discretion over [its] services.”

Although the court ultimately dismissed Darnaa’s claims based on the failure to timely file the suit, the decision was not a complete victory for YouTube. The court granted leave to amend to give Darnaa the opportunity to plead facts showing that she was entitled to equitable tolling of the contractual limitations period. Therefore, the court went on to consider whether Darnaa’s allegations were sufficient to state a claim. Among other things, the court held that YouTube’s Terms of Service were ambiguous regarding the platform’s rights to remove and relocate user videos in its sole discretion. Thus, the court further held that if Darnaa were able to amend the complaint to avoid the consequences of the failure to timely file, then the complaint would be sufficient to state a claim for breach of the contractual covenant of good faith and fair dealing.

Positive I.D. The tech world recently took a giant step forward in the quest to create computers that accurately mimic human sensory and thought processes, thanks to Fei-Fei Li and Andrej Karpathy of the Stanford Artificial Intelligence Laboratory. The pair developed a program that identifies not just the subjects of a photo, but the action taking place in the image. Called NeuralTalk, the software captioned a picture of a man in a black shirt playing guitar, for example, as “man in black shirt is playing guitar,” according to The Verge. The program isn’t perfect, the publication reports, but it’s often correct and is sometimes “unnervingly accurate.” Potential applications for artificial “neural networks” like Li’s obviously include giving users the ability to search, using natural language, through image repositories both public and private (think “photo of Bobby getting his diploma at Yale”). But the technology could also be used in potentially life-saving ways, such as in cars that can warn drivers of potential hazards like potholes. And, of course, such neural networks would be incredibly valuable to marketers, allowing them to identify potential consumers of, say, sports equipment by searching through photos posted to social media for people using products in that category. As we discussed in a recent blog post, the explosive growth of the Internet of Things, wearables, big data analytics and other hot new technologies is being fueled at least in part by marketing uses—are artificial neural networks the next big thing to be embraced by marketers?
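NeuralTalk paired a convolutional image encoder with a recurrent language decoder, and that architecture has since been commoditized. As a rough sketch of what Li and Karpathy’s demo does—not their actual code—the Hugging Face transformers library exposes the same encoder-decoder idea through a one-line pipeline; the checkpoint and file path named below are our own illustrative choices:

```python
# Illustrative sketch only: a modern, off-the-shelf stand-in for
# NeuralTalk-style captioning, not the Stanford code itself.
# Assumes: pip install transformers torch pillow, plus a local image
# file named photo.jpg (a hypothetical example path).
from transformers import pipeline

# A Vision Transformer encoder paired with a GPT-2 decoder -- the same
# encoder/decoder idea NeuralTalk pioneered with a CNN and an RNN.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

result = captioner("photo.jpg")  # also accepts an image URL or a PIL image
print(result[0]["generated_text"])  # e.g., "a man in a black shirt playing a guitar"
```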

Cruel intentions. Laws seeking to regulate speech on the Internet must be narrowly drafted to avoid running afoul of the First Amendment, and limiting such a law’s applicability to intentional attempts to cause damage usually improves the law’s odds of meeting that requirement. Illustrating the importance of intent in free speech cases, an anti-revenge-porn law in Arizona was recently scrapped, in part because it applied to people who posted nude photos to the Internet irrespective of the poster’s intent. Now, the North Carolina Court of Appeals has held that an anti-cyberbullying law is constitutional because it, among other things, only prohibits posts to online networks that are made with “the intent to intimidate or torment a minor.” The court issued the holding in an appeal brought by a 19-year-old who was placed on 48 months’ probation and ordered to stay off social media websites for a year for having contributed to abusive social media posts that targeted one of his classmates. The teen argued that the law he was convicted of violating, N.C. Gen. Stat. §14-458.1, is overbroad and unconstitutional. Upholding his conviction, the North Carolina Court of Appeals held, “It was not the content of Defendant’s Facebook comments that led to his conviction of cyberbullying. Rather, his specific intent to use those comments and the Internet as instrumentalities to intimidate or torment (a student) resulted in a jury finding him guilty under the Cyberbullying Statute.”

A dish best served cold. Restaurants and other service providers are often without effective legal recourse against Yelp and other “user review” websites when they’re faced with negative—even defamatory—online reviews because Section 230 of the Communications Decency Act (CDA), 47 U.S.C. § 230, insulates website operators from liability for content created by users (though there are, of course, exceptions). That didn’t stop the owner of KC’s Rib Shack in Manchester, New Hampshire, from exacting revenge, however, when an attendee of a 20-person birthday celebration at his restaurant wrote a scathing review on Yelp and Facebook admonishing the owner for approaching the party’s table “and very RUDELY [telling the diners] to keep quiet [since] others were trying to eat.” The review included “#boycott” and some expletives. In response, the restaurant’s owner, Kevin Cornish, replied to the self-identified disgruntled diner’s rant with his own review—of her singing. Cornish reminded the review writer that his establishment is “a family restaurant, not a bar,” and wrote, “I realize you felt as though everybody in the entire restaurant was rejoicing in the painful rendition of Bohemian Rhapsody you and your self-entitled friends were performing, yet that was not the case.” He encouraged her to continue her “social media crusade,” including the hashtag #IDon’tNeedInconsiderateCustomers. Cornish’s retort has so far garnered close to 4,000 Facebook likes and has been shared on Facebook more than 400 times.