Please join Socially Aware editor John Delaney as he chairs Practising Law Institute’s (PLI) “Social Media 2014: Addressing Corporate Risks.” Issues to be addressed at the conference include:

  • Social media: how it works, and why it is transforming the business world
  • Drafting and updating social media policies
  • User-generated content and related IP concerns
  • Ensuring protection under the CDA’s Safe Harbor
  • Legal issues in connection with online data harvesting
  • Online marketing: new opportunities, new risks
  • Privacy law considerations
  • Practical tips for handling real-world issues

Representatives from Facebook, Pinterest, Google and other companies will be speaking at the event. The conference is being held in San Francisco on Monday, February 10th and in New York City on February 26th. The February 10th event will be webcast. For more information or to register, please visit PLI’s website here.

In the latest issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we look at recent First Amendment, intellectual property, labor and privacy law developments affecting corporate users of social media and the Internet. We also recap major events from 2012 that have had a substantial impact on social media law, and we take a look at some of the big numbers racked up by social media companies over the past year.

To read the latest issue of our newsletter, click here.

For an archive of previous issues of Socially Aware, click here.

In a string of cases against Google, approximately 20 separate plaintiffs have claimed that, through advertisements on its AdWords service, Google engaged in trademark infringement. These claims have been based on Google’s practice of allowing advertisers to use their competitors’ trademarks in Google-generated online advertisements. In a recent decision emerging from these cases, CYBERsitter v. Google, the U.S. District Court for the Central District of California found that Section 230 of the Communications Decency Act (CDA) provides protection for Google against some of the plaintiff’s state law claims.

As we have discussed previously (see here and here), Section 230 states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The Section 230 safe harbor immunizes websites from liability for content created by users, as long as the website did not “materially contribute” to the development or creation of the content. An important limitation on this safe harbor, however, is that it shall not “be construed to limit or expand any law pertaining to intellectual property.”

In the CYBERsitter case, plaintiff CYBERsitter, which sells an Internet content-filtering program, sued Google for selling and displaying advertisements incorporating the CYBERsitter trademark to ContentWatch, one of CYBERsitter’s competitors. CYBERsitter’s complaint alleged that Google had violated numerous federal and California laws by, first, selling the right to use CYBERsitter’s trademark to ContentWatch and, second, permitting and encouraging ContentWatch to use the CYBERsitter mark in Google’s AdWords advertising. Specifically, CYBERsitter’s complaint included the following claims: trademark infringement, contributory trademark infringement, false advertising, unfair competition and unjust enrichment.

Google filed a motion to dismiss, arguing that Section 230 of the CDA shielded it from liability for CYBERsitter’s state law claims. The court agreed with Google as to the state law claims of trademark infringement, contributory trademark infringement, unfair competition and unjust enrichment, but only to the extent that these claims sought to hold Google liable for the infringing content of the advertisements. Notably, although trademarks are a form of intellectual property and, as noted above, Section 230 does not apply to intellectual property claims, the court applied Section 230 to these trademark claims without further discussion. It could do so because the Ninth Circuit has held that the term “intellectual property” in Section 230 of the CDA refers only to federal intellectual property law, and state intellectual property law claims are therefore not excluded from the safe harbor. The Ninth Circuit, however, appears to be an outlier in this interpretation; decisions from other circuit courts suggest disagreement with the Ninth Circuit’s approach, and district courts outside the Ninth Circuit have not followed the Ninth Circuit’s lead.

Google was not let off the hook entirely with regard to the plaintiff’s state trademark law claims. In partially dismissing the trademark infringement and contributory trademark infringement claims, the court distinguished between Google’s liability for the content of the advertisements and its liability for potentially tortious conduct unrelated to that content. The court refused to dismiss these claims to the extent they sought to hold Google liable for selling to third parties the right to use CYBERsitter’s trademark, and for encouraging and facilitating third parties to use CYBERsitter’s trademark, without CYBERsitter’s authorization. Because such conduct by Google has nothing to do with the online content of the advertisements, the court held that Section 230 is inapplicable to it.

The court also found that CYBERsitter’s false advertising claim was not barred by Section 230 because Google may have “materially contributed” to the content of the advertisements and, therefore, under Section 230 would have been an “information content provider” not immune from liability. Prof. Eric Goldman, who blogs frequently on CDA-related matters, has pointed out an apparent inconsistency in the CYBERsitter court’s reasoning: under the court’s analysis, Google did not materially contribute to the content of the advertisements for purposes of the trademark infringement, contributory infringement, unfair competition and unjust enrichment claims, yet might have done so for purposes of the false advertising claim.

CYBERsitter highlights at least two key points for website operators, bloggers, and other providers of interactive computer services. First, at least in the Ninth Circuit, but not necessarily in other circuits, the Section 230 safe harbor provides protection from state intellectual property law claims with regard to user-generated content. Second, to be protected under the Section 230 safe harbor, the service provider must not have created the content and it must not have materially contributed to such content’s creation.

We’ve reported before on Section 230 of the Communications Decency Act (CDA), the 1996 statute that states, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”  Courts have interpreted Section 230 to immunize social media and other websites from liability for publishing content created by their users, provided the site owners are not “responsible in whole or in part, for the creation or development of” the offending content. 

Two recent federal cases involving the website TheDirty.com show that, 15 years after the landmark Zeran v. AOL case interpreting Section 230 immunity broadly, courts still grapple with the statute and, arguably, get cases wrong, particularly when faced with unsavory content.

TheDirty.com is an ad-supported website that features gossip, salacious content, news and sports stories.  The site, run by owner/editor Hooman Karamian, a/k/a Nik Richie, prompts users to “submit dirt” via a basic text form requesting “what’s happening” and “who, what, when, where, why,” and allows users to upload files. In response, users, referred to on the site as the “Dirty Army,” submit stories and photographs along with gossip about the people pictured. Richie then posts the pictures and information, often accompanied by his own comments. Two such racy posts, one detailing the sex habits of a Cincinnati Bengals cheerleader and the other about the supposed exploits of a “Church Girl,” led their subjects to bring defamation claims in federal court. Third-party users, not TheDirty.com, generated the content. Cases dismissed on Section 230 grounds, right?  Not quite.

In Jones v. Dirty World Entertainment Recordings, a case in the U.S. District Court for the Eastern District of Kentucky, plaintiff Sarah Jones, a cheerleader for the Cincinnati Bengals football team and also a high school teacher, sued TheDirty.com based on two user-submitted posts that included her picture and statements regarding her sex partners, as well as allegations that she had sexually transmitted diseases. Richie added a one-line comment—“why are all high school teachers freaks in the sack?”—and published the post. Jones requested that the posts be removed, but TheDirty.com refused. Richie also commented on the site directly addressing Jones, saying her concern about the post was misguided and that she was “d[igging] her own grave” by calling attention to it. Jones sought damages for defamation and invasion of privacy under state tort law, and TheDirty.com moved for judgment as a matter of law on CDA immunity grounds.

The court held that TheDirty.com did not qualify for CDA immunity because it “specifically encouraged the development of what is offensive about the content” (citing the Tenth Circuit’s opinion in Federal Trade Comm’n v. Accusearch).  The court found that TheDirty.com encouraged the development of, and therefore was responsible for, the offensive content based on the site’s name, the fact that the site encouraged the posting of “dirt,” Richie’s personal comments added to users’ posts, and his direct reference to the plaintiff’s request that the post be taken down. The court focused on Richie’s comments, including his statement “I love how the Dirty Army has war mentality. Why go after one ugly cheerleader when you can go after all the brown baggers.”

The Jones court’s analysis diverges from prevailing CDA case law in a few respects. For example, regarding the issue of responding to a subject’s request that an allegedly defamatory post be taken down, the Ninth Circuit has held that deciding what to post and what to remove are “traditional duties of a publisher” for which the CDA provides immunity to website operators.  More critically, in adopting the “specifically encouraged the development of what is offensive” standard coined in Accusearch, the court in Jones reasoned that by requesting “dirt,” the site “encourage[d] material which is potentially defamatory or an invasion of the subject’s privacy,” and therefore lost CDA immunity.  That reasoning, though, could extend to any website functionality, such as free-form text boxes, that permits users to input potentially defamatory material. To hold that a website operator loses immunity based on the mere potential that users will post defamatory content effectively vitiates CDA immunity and parts ways with cases like the Ninth Circuit’s Roommates.com case, which held that a website’s provision of “neutral tools” cannot constitute development of content for purposes of the exception to CDA immunity. For these and other reasons, one leading Internet law commentator calls the case a “terrible ruling that needs to be fixed on appeal.” TheDirty.com’s appeal to the Sixth Circuit is pending.

In a more recent case, S.C. v. Dirty World, LLC, the U.S. District Court for the Western District of Missouri held that Richie and TheDirty.com did qualify for CDA Section 230 immunity on facts similar to those in Jones. The plaintiff in S.C. brought suit based on a user-generated post on TheDirty.com that showed her picture along with a description alleging that she had relations with the user’s boyfriend and attempted to do so with the user’s son. Richie published the post, adding a comment about the plaintiff’s appearance. The court explained that, because a third party authored the allegedly defamatory content, CDA immunity turned on whether TheDirty “developed” the content by having “materially contribute[d] to [its] alleged illegality.”  The court held that the defendants did not materially contribute to the post’s alleged illegality because the defendants never instructed or requested the third party to submit the post at issue, “did nothing to specifically induce it,” and did not add to or substantively alter the post before publishing it on the site.

After noting these facts, and how they differed from the facts in Jones, which the S.C. plaintiff had cited, the court explicitly “distanced itself from certain legal implications set forth in Jones.”  The S.C. court pointed out that a “broad” interpretation of CDA immunity is the accepted view.  It explained that CDA immunity does not, and should not, turn on the “name of the site in and of itself,” but instead focuses on the content that is actually defamatory or otherwise gives rise to legal liability.  The court noted, for example, that the site itself hosts a variety of content, much of it not defamatory or capable of being defamatory (e.g., sports stories and other news).

Given that some may consider TheDirty.com’s gossip content and mission extreme, cases like S.C. likely provide peace of mind to operators of more conventional social media sites.  Still, should Jones survive appeal, it could lead to forum shopping in cases where plaintiffs expect to face CDA immunity defenses, because the “specifically encouraged” standard could, as in Jones, lead to a loss of immunity. We’ll keep you posted on the appeal.

As we reported last month, the safe harbor in Section 230 of the Communications Decency Act (“CDA”) immunizes social media providers from liability based on content posted by users under most circumstances, but not from liability for content that the providers themselves generate.  But what about when providers block Internet traffic such as “spam” – does the CDA immunize service providers from liability for claims related to messages not reaching their intended recipients?

In two recent unpublished cases, Holomaxx Techs. Corp. v. Microsoft Corp. and Holomaxx Techs. Corp. v. Yahoo! Inc., Judge Fogel of the U.S. District Court for the Northern District of California held that the CDA does provide immunity in such circumstances.  (Notably, Judge Fogel also decided earlier this year that Facebook postings qualify as “commercial electronic mail messages” regulated under CAN-SPAM, the federal anti-spam statute.)  The Holomaxx holdings did not break new ground, but the cases clearly show that Section 230 of the CDA provides immunity not just with respect to user-posted content, but also for service providers’ blocking and restriction of messages.

Plaintiff Holomaxx Technologies runs an email marketing and ecommerce business development service.  After what it alleged was MSN’s and Yahoo!’s continued refusal to deliver its legitimate emails, Holomaxx sued both companies, asserting state law tort claims for interference with contract and business advantage, defamation, false light and unfair competition, as well as federal claims under the Wiretap Act, the Computer Fraud and Abuse Act, and the Stored Communications Act.  Seeking both damages and an injunction, Holomaxx claimed that MSN and Yahoo! “knowingly relie[d] on faulty spam filters” and that it was “entitled to send legitimate, permission-based emails to its clients’ customers now.”

In its complaints against Microsoft and Yahoo!, Holomaxx explained that it delivers for its customers ten million email messages a day, including three million to Hotmail/MSN users and six million to Yahoo! users.  Holomaxx claimed that it sent only legitimate, requested emails to consenting users and complied with CAN-SPAM.  According to Holomaxx, MSN’s and Yahoo!’s email filtering systems began blocking, rerouting, and/or throttling Holomaxx-generated emails to MSN and Yahoo! users, and MSN and Yahoo! ignored its requests to be unblocked and failed to identify specific problems with Holomaxx’s emails.  Also according to Holomaxx, MSN and Yahoo! acted in bad faith because they did not work with Holomaxx in the manner prescribed by the abuse desk guidelines of the Messaging Anti-Abuse Working Group (MAAWG), to which both companies belong and which Holomaxx characterized as an “industry standard.”  Finally, Holomaxx claimed that anticompetitive purposes drove MSN’s and Yahoo!’s blocking, and that the fact that the two companies had initially resumed delivery of Holomaxx emails and then stopped again showed that the companies acted in bad faith.

MSN and Yahoo! moved to dismiss, citing CDA Section 230(c)(2), which on its face immunizes service providers for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers … objectionable,” and arguing that the facts that Holomaxx alleged were insufficient to overcome this statutory immunity.

Agreeing, Judge Fogel called CDA immunity “robust” and, citing the Ninth Circuit’s opinion in Fair Housing Council v. Roommates.com, LLC, noted that “all doubts must be resolved in favor of immunity.”  The court cited Zango v. Kaspersky, where the Ninth Circuit explained that the CDA “plainly immunizes” providers that “make[s] available software that filters or screens material that the user or the provider deems objectionable.”  In Zango, the Ninth Circuit affirmed the district court’s dismissal of a software maker’s suit against an anti-adware security firm for allegedly making it difficult for users who had installed the security firm’s anti-adware tools to use the plaintiff’s software.  However, the Ninth Circuit explained that a provider might lose immunity where it “block[s] content for anticompetitive purposes or merely at its malicious whim.”  Under that standard, the question was whether Holomaxx alleged sufficient facts to show that MSN and Yahoo! acted in an “absence of good faith” when they blocked Holomaxx’s emails.

The answer was no.  The court discounted Holomaxx’s reliance on the MAAWG guidelines because Holomaxx had not shown them to be an industry standard.  The fact that the companies temporarily resumed delivery of Holomaxx’s emails did not demonstrate an anticompetitive motive, because the CDA gives providers wide discretion in deeming content objectionable.  As to alleged malice, the court explained that, “[T]o permit Holomaxx to proceed solely on the basis of a conclusory allegation that Yahoo! acted in bad faith essentially would rewrite the CDA.”  (Note:  On its face, the CDA did not apply to Holomaxx’s Wiretap Act and Stored Communications Act claims; the court dismissed those claims because it found that Holomaxx failed to adequately allege how MSN or Yahoo! had violated those statutes.)

A leading commentator has noted that the Ninth Circuit’s Zango case provided website operators a “high degree of freedom to make judgments about how to best serve their customers.”  The Holomaxx dismissals confirm that point.  With social media spam on the rise even as email spam decreases and web-based email in general declines, both the Holomaxx and Zango cases could assist social media providers in their efforts to prevent unsolicited messages and abuse while at the same time maintaining the instant, social, viral qualities that keep users engaged and advertisers paying.

One final point – as one observer notes, Holomaxx’s compliance with CAN-SPAM, described in great detail in each of the complaints, did not matter to Judge Fogel’s holding.  That is, the mere fact that Holomaxx’s marketing messages were legal did not mean that Microsoft and Yahoo! had to either deliver those messages or lose CDA immunity.  Thus, the court rejected an argument that implicitly might have made the requirements of CAN-SPAM a ceiling, rather than a floor, for service providers’ anti-abuse efforts.

Although common law generally holds publishers responsible for the content that they publish, the Communications Decency Act (“CDA”) gives website operators broad protection from liability for content posted by users.  Courts have applied the CDA in favor of website owners in nearly 200 cases, including cases involving Google, Facebook, MySpace, and even bloggers for content posted by their co-bloggers.  Commentators hail the CDA as the legal framework that made possible the rise of social media.  CDA immunity, however, is not limitless.  For example, as the Ninth Circuit explained in Fair Housing Council of San Fernando Valley v. Roommates.com, where “a website helps to develop unlawful content,” it loses CDA immunity “if it contributes materially to the alleged illegality of the conduct.”  Two recent cases illustrate how websites can lose CDA immunity as a result of contributing to offending content.

The district court in Levitt v. Yelp considered business owners’ claims that Yelp manipulated Yelp pages, rankings, and reviews in an extortionate manner that violated California’s unfair business practices law.  Plaintiffs alleged that Yelp threatened to, and did, take down positive reviews if plaintiffs did not buy ads, and that Yelp’s salespeople manipulated rankings on Yelp.  The court first rejected Yelp’s jurisdictional argument that the CDA prevented the court from hearing the claims.  Second, the court held the CDA did not immunize Yelp because some of the claims focused on Yelp’s sales practices, and not merely Yelp’s editing or selective display of user reviews.  The court nonetheless dismissed the plaintiffs’ claims, finding that they had not pleaded sufficient facts to show extortion by Yelp, but it gave the plaintiffs leave to amend.

In Hill v. StubHub, a North Carolina state court considered claims that StubHub violated state anti-scalping statutes.  The court rejected StubHub’s CDA defense because StubHub’s service suggested that users input particular prices for Miley Cyrus concert tickets, and profited when they did.  That StubHub suggested the illegal prices, monitored its inventory for particular events, made money only if sufficient tickets were sold, and even then earned a percentage of the ticket price all meant that StubHub “developed” the unlawful content: a system in which users scalped tickets.  The court explained that StubHub “encouraged, materially contributed to, and made aggressive use” of the pricing content posted by users, so StubHub could not avoid liability for it.

Together, the Yelp and StubHub cases show that CDA immunity, although critical for social media operators’ use of user-generated content, is not boundless.  Sites can lose CDA immunity by directing or contributing to offending content or as a result of the actions of their salespeople.