"Unlike" on a screen. More>>

A recent California court decision involving Section 230 of the Communications Decency Act (CDA) is creating considerable concern among social media companies and other website operators.

As we’ve discussed in past blog posts, CDA Section 230 has played an essential role in the growth of the Internet by shielding website operators from defamation and other claims arising from content posted to their websites by others.

Under Section 230, a website operator is not “treated as the publisher or speaker of any information provided” by a user of that website; as a result, online businesses such as Facebook, Twitter and YouTube have been able to thrive despite hosting user-generated content on their platforms that may be false, deceptive or malicious and that, absent Section 230, might subject these and other Internet companies to crippling lawsuits.

Recently, however, the California Court of Appeal affirmed a lower court opinion that could significantly narrow the contours of Section 230 protection. After a law firm sued a former client for posting defamatory reviews on Yelp.com, the court not only ordered the former client to remove the reviews, but also ordered Yelp (which was not a party to the dispute) to remove them.

The case, Hassell v. Bird, began in 2013 when attorney Dawn Hassell sued former client Ava Bird regarding three negative reviews that Hassell claimed Bird had published on Yelp.com under different usernames. Hassell alleged that Bird had defamed her, and, after Bird failed to appear, the California trial court issued an order granting Hassell’s requested damages and injunctive relief.

In particular, the court ordered Bird to remove the offending posts, but Hassell further requested that the court require Yelp to remove the posts because Bird had not appeared in the case herself. The court agreed, entering a default judgment and ordering Yelp to remove the offending posts. (The trial court also ordered that any subsequent comments associated with Bird’s alleged usernames be removed, which the Court of Appeal struck down as an impermissible prior restraint.) Yelp challenged the order on a variety of grounds, including under Section 230.

The Court of Appeal held that the Section 230 safe harbor did not apply, and that Yelp could be forced to comply with the order. The court reasoned that the order requiring Yelp to remove the reviews did not impose any liability on Yelp; Yelp was not itself sued for defamation and had no damages exposure, so Yelp did not face liability as a speaker or publisher of third-party speech. Rather, citing California law that authorized a court to prevent the repetition of “statements that have been adjudged to be defamatory,” the court characterized the injunction as “simply” controlling “the perpetuation of judicially declared defamatory statements.” The court acknowledged that Yelp could face liability for failing to comply with the injunction, but that would be liability under the court’s contempt power, not liability as a speaker or publisher.

The Hassell case represents a significant setback for social media companies, bloggers and other website operators who rely on the Section 230 safe harbor to shield themselves from the misconduct of their users. While courts have previously held that a website operator may be liable for “contribut[ing] materially to the alleged illegality of the conduct”—such as StubHub.com allegedly suggesting and encouraging illegally high ticket resale prices—here, in contrast, there is no claim that Yelp contributed to or aided in the creation or publication of the defamatory reviews beyond merely providing the platform on which such reviews were hosted.

Of particular concern for online businesses is that Hassell appears to create an end-run around Section 230 for plaintiffs who seek to have allegedly defamatory or false user-generated content removed from a website: sue the suspected posting party and, if that party fails to appear, obtain a default judgment; then, with that judgment in hand, seek a court order requiring the hosting website to remove the objectionable post, as the plaintiff was able to do in the Hassell case.

Commentators have observed that Hassell is one of a growing number of recent decisions seeking to curtail the scope of Section 230. After two decades of expansive applications of Section 230, are we now on the verge of a judicial backlash against the law that has helped to fuel the remarkable success of the U.S. Internet industry?

 

*          *          *

For other Socially Aware blog posts regarding the CDA Section 230 safe harbor, please see the following: How to Protect Your Company’s Social Media Currency; Google AdWords Decision Highlights Contours of the CDA Section 230 Safe Harbor; and A Dirty Job: TheDirty.com Cases Show the Limits of CDA Section 230.

 

The Newspaper Association of America has filed a first-of-its-kind complaint with the FTC over certain ad blocking technologies.

Is it “Internet” or “internet”? The Associated Press is about to change the capitalization rule.

Lots of people criticized Instagram’s new logo, but, according to a design-analysis app, it’s much better than the old logo at doing this.

Twitter has finally realized that people don’t use it to buy things.

Facebook wants to help sell every ad on the web.

A Russian law enforcement agency is investigating controversial groups alleged to have encouraged more than 100 teenage suicides on social media.

A self-proclaimed “badass lawyer” lost a defamation suit against a Twitter account that parodied him.

The Internet of every single thing must be stopped.

*        *       *

To stay abreast of social media-related legal and business developments, please subscribe to our free newsletter.

 

Positive I.D. The tech world recently took a giant step forward in the quest to create computers that accurately mimic human sensory and thought processes, thanks to Fei-Fei Li and Andrej Karpathy of the Stanford Artificial Intelligence Laboratory. The pair developed a program that identifies not just the subjects of a photo, but the action taking place in the image. Called NeuralTalk, the software captioned a picture of a man in a black shirt playing guitar, for example, as “man in black shirt is playing guitar,” according to The Verge. The program isn’t perfect, the publication reports, but it’s often correct and is sometimes “unnervingly accurate.” Potential applications for artificial “neural networks” like Li and Karpathy’s obviously include giving users the ability to search, using natural language, through image repositories both public and private (think “photo of Bobby getting his diploma at Yale”). But the technology could also be used in potentially life-saving ways, such as in cars that can warn drivers of potential hazards like potholes. And, of course, such neural networks would be incredibly valuable to marketers, allowing them to identify potential consumers of, say, sports equipment by searching through photos posted to social media for people using products in that category. As we discussed in a recent blog post, the explosive growth of the Internet of Things, wearables, big data analytics and other hot new technologies is being fueled at least in part by marketing uses—are artificial neural networks the next big thing to be embraced by marketers?

Cruel intentions. Laws seeking to regulate speech on the Internet must be narrowly drafted to avoid running afoul of the First Amendment, and limiting such a law’s applicability to intentional attempts to cause damage usually improves the law’s odds of meeting that requirement. Illustrating the importance of intent in free speech cases, an anti-revenge-porn law in Arizona was recently scrapped, in part because it applied to people who posted nude photos to the Internet irrespective of the poster’s intent. Now, a North Carolina Court of Appeals has held that an anti-cyberbullying law is constitutional because it, among other things, only prohibits posts to online networks that are made with “the intent to intimidate or torment a minor.” The court issued the holding in a lawsuit brought by a 19-year-old who was placed on 48 months’ probation and ordered to stay off social media websites for a year for having contributed to abusive social media posts that targeted one of his classmates. The teen’s suit alleged that the law he was convicted of violating, N.C. Gen. Stat. §14-458.1, is overbroad and unconstitutional. Upholding his conviction, the North Carolina Court of Appeals held, “It was not the content of Defendant’s Facebook comments that led to his conviction of cyberbullying. Rather, his specific intent to use those comments and the Internet as instrumentalities to intimidate or torment [a student] resulted in a jury finding him guilty under the Cyberbullying Statute.”

A dish best served cold. Restaurants and other service providers are often without effective legal recourse against Yelp and other “user review” websites when they’re faced with negative—even defamatory—online reviews because Section 230 of the Communications Decency Act (CDA), 47 U.S.C. § 230, insulates website operators from liability for content created by users (though there are, of course, exceptions). That didn’t stop the owner of KC’s Rib Shack in Manchester, New Hampshire, from exacting revenge, however, when an attendee of a 20-person birthday celebration at his restaurant wrote a scathing review on Yelp and Facebook admonishing the owner for approaching the party’s table “and very RUDELY [telling the diners] to keep quiet [since] others were trying to eat.” The review included “#boycott” and some expletives. In response, the restaurant’s owner, Kevin Cornish, replied to the self-identified disgruntled diner’s rant with his own review—of her singing. Cornish reminded the review writer that his establishment is “a family restaurant, not a bar,” and wrote, “I realize you felt as though everybody in the entire restaurant was rejoicing in the painful rendition of Bohemian Rhapsody you and your self-entitled friends were performing, yet that was not the case.” He encouraged her to continue her “social media crusade,” including the hashtag #IDon’tNeedInconsiderateCustomers. Cornish’s retort has so far garnered close to 4,000 Facebook likes and has been shared on Facebook more than 400 times.

 

Virginia’s highest court recently held that Yelp could not be forced to turn over the identities of anonymous online reviewers that a Virginia carpet-cleaning owner claimed tarnished his business.

In the summer of 2012, Joseph Hadeed, owner of Hadeed Carpet Cleaning, sued seven anonymous Yelp reviewers after receiving a series of critical reviews. Hadeed alleged that the reviewers were competitors masking themselves as Hadeed’s customers and that his sales tanked after the reviews were posted. Hadeed sued the reviewers as John Doe defendants for defamation and then subpoenaed Yelp, demanding that it reveal the reviewers’ identities.

Yelp argued that, without any proof that the reviewers were not Hadeed’s customers, the reviewers had a First Amendment right to post anonymously.

A Virginia trial court and the Court of Appeals sided with Hadeed, ordering Yelp to turn over the reviewers’ identities and holding it in contempt when it did not. But in April 2015, the Virginia Supreme Court vacated the lower court decisions on procedural grounds. Because Virginia’s legislature did not give Virginia’s state courts subpoena power over non-resident non-parties, the Supreme Court concluded, the Virginia trial court could not order the California-headquartered Yelp to produce documents located in California for Hadeed’s defamation action in Virginia.

Although the decision was a victory for Yelp, it was a narrow one, resting on procedural grounds. The Virginia Supreme Court did not address the broader First Amendment argument about anonymous posting and noted that it wouldn’t quash the subpoena because Hadeed could still try to enforce it under California law.

After the ruling, Yelp’s senior director of litigation, Aaron Schur, posted a statement on the company’s blog stating that, if Hadeed pursued the subpoena in California, Yelp would “continue to fight for the rights of these reviewers under the reasonable standards that California courts, and the First Amendment, require (standards we pushed the Virginia courts to adopt).” Schur added, “Fortunately the right to speak under a pseudonym is constitutionally protected and has long been recognized for the important information it allows individuals to contribute to public discourse.”

In 2009, a California law took effect, allowing anonymous Internet speakers whose identity is sought under a subpoena in California in connection with a lawsuit filed in another state to challenge the subpoena and recover attorneys’ fees if they are successful. In his Yelp post, Schur added that Hadeed’s case “highlights the need for stronger online free speech protection in Virginia and across the country.”

Had Hadeed sought to enforce the subpoena in California, the result might have been the same, though possibly on different grounds. In California, where Yelp and many other social media companies are headquartered, the company would have been subject to a court’s subpoena power. Still, Yelp may have been protected from having to disclose its users’ identities. California courts have offered protections for anonymous speech under the First Amendment to the U.S. Constitution and the state constitutional right of privacy.

Nevertheless, there is no uniform rule as to whether companies must reveal identifying information of their anonymous users. In 2013, in Chevron Corp. v. Donziger, federal Magistrate Judge Nathanael M. Cousins of the Northern District of California concluded that Chevron’s subpoenas seeking identifying information of non-party Gmail and Yahoo Mail users were enforceable against Google and Yahoo, respectively, because the subpoenas did not seek expressive activity and because there is no privacy interest in subscriber and user information associated with email addresses.

On the other hand, in March 2015, Magistrate Judge Laurel Beeler of the same court held, in Music Group Macao Commercial Offshore Ltd. v. Does, that the plaintiffs could not compel nonparty Twitter to reveal the identifying information of its anonymous users, who, as in the Hadeed case, were Doe defendants. Music Group Macao sued the Doe defendants in Washington federal court for anonymously tweeting disparaging remarks about the company, its employees, and its CEO. After the Washington court ruled that the plaintiffs could obtain the identifying information from Twitter, the plaintiffs sought to enforce the subpoena in California. Magistrate Judge Beeler concluded that the Doe defendants’ First Amendment rights to speak anonymously outweighed the plaintiffs’ need for the requested information, citing familiar concerns that forcing Twitter to disclose the speakers’ identities would unduly chill protected speech.

Courts in other jurisdictions have imposed a range of evidentiary burdens on plaintiffs seeking the disclosure of anonymous Internet speakers. For example, federal courts in Connecticut and New York have required plaintiffs to make a prima facie showing of their claims before requiring Internet service providers (ISPs) to disclose anonymous defendants’ identities. A federal court in Washington found that a higher standard should apply when a subpoena seeks the identity of an Internet user who is not a party to the litigation. The Delaware Supreme Court has applied an even higher standard, expressing concern “that setting the standard too low will chill potential posters from exercising their First Amendment right to speak anonymously.”

These cases show that courts are continuing to grapple with social media as a platform for expressive activity. Although Yelp and Twitter were protected from having to disclose their anonymous users’ identities in these two recent cases, this area of law remains unsettled, and companies with social media presence should be familiar with the free speech and privacy law in the states where they conduct business and monitor courts’ treatment of these evolving issues.