Communications Decency Act

Mark Zuckerberg famously stated that the purpose of Facebook is “to make the world more open and connected,” and indeed Facebook, other social media outlets and the Internet in general have brought worldwide openness and connection-through-sharing to levels unparalleled at any point in history. With this new universe of limitless dissemination often comes the stripping away of privacy, and “revenge porn,” a relatively new but seemingly inevitable outgrowth of social media and the Internet, is stripping away privacy in the most literal sense.

Defining “revenge porn” is relatively simple and does not require any sort of “I know it when I see it” test; in short, “revenge porn” is the act of publicly disseminating nude photographs or videos of somebody without her or his consent. The name derives from the fact that the act is most often associated with spurned men posting photos on the Internet that were received from their ex-girlfriends in confidence as “revenge” for breaking up with them or otherwise hurting them. But recently, more and more photos are popping up that were either taken without the victim’s consent or that were obtained by hacking a victim’s email or computer.  Revenge porn website operators invite users to post nude photos of their exes (or of anybody else, for that matter) and often allow the community to comment on the photos (which in many cases results in a barrage of expletives aimed at shaming the victim).

Recently, operators of revenge porn sites have taken attacks to a higher level, inviting visitors to post victims’ full names, addresses, phone numbers, places of work and other items of personal information alongside their photographs.  In some cases, victims’ faces are realistically superimposed onto nude photographs of pornographic actors or actresses in order to achieve the same effect when no actual nude photographs of the victims can be found. Victims of revenge porn often suffer significant harm, facing humiliation, loss of reputation, and in some cases, loss of employment. Due to the all-pervasive and permanent nature of the Internet, once a victim’s photo is posted online, it is very difficult for him or her to have it completely removed.  Operators of revenge porn sites have sometimes capitalized on this fact by offering to remove the photos for a fee (or running advertisements for services that will do so).

Operators of revenge porn websites often shield themselves behind the First Amendment, and website operators have been known to employ sophisticated legal teams in order to protect themselves from civil and criminal liability and to maintain operation of their sites.  Nonetheless, the law provides several avenues for victims seeking to have photos removed from websites, obtain restitution and, to the extent damage has not already been done, clear their names.

Self-Help as a First Step

Although the Internet is the tool used to disseminate revenge porn, it also now provides resources for victims who seek help in dealing with this invasion of privacy.  Victim-advocacy websites offer step-by-step guides to getting nude photos removed from the Internet, as well as contact information for lawyers and other advocates for revenge porn victims in various states.

According to one such guide, the first step to mitigating the damage of revenge porn is to establish more of an online presence.  Although this may be counterintuitive, it is actually a logical approach: one of the biggest harms of revenge porn is that a friend, family member or employer will find nude photos when entering the victim’s name into a search engine.  By opening Facebook, Twitter, Pinterest and Instagram accounts under his or her name, a victim may be able to move the revenge porn photo to a lower position in search engine results.

Because nude photos tend to be spread quickly on the Internet, the guide also encourages victims to use Google’s reverse image search engine to find all websites where the victim’s photos may appear.  After taking careful note of all locations where such photos appear, victims are encouraged to file police reports.

Copyright Infringement

The next recommended step in removing photos, which has been successful in a number of cases, is for the victim to take advantage of U.S. copyright law.  Under U.S. copyright law, a person who takes a nude photo of herself or himself is the owner of the copyright in that photo and thus can enjoin others from reproducing or displaying the photo.  A victim may, therefore, submit a “takedown” notice under Section 512 of the Digital Millennium Copyright Act (DMCA) to the webmasters and web hosts of the offending sites as well as to search engine sites where the nude photo may come up as a search result (Google even provides step-by-step instructions).  Because the DMCA provides an infringement safe harbor to web service providers who comply with the statute’s requirements, many search engines and web hosts will remove revenge porn photos upon receipt of a takedown notice.  If the photo is not removed, the victim may consider registering his or her copyrights in the photos and suing the web host or search engine in federal court, although this may not always be a desirable approach for the reasons described below.
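The mechanics of a takedown notice are simple enough to template. The sketch below strings together the six elements that Section 512(c)(3)(A) of the DMCA requires a notice to contain; the helper function, field names, and sample values are our own illustration, not a form prescribed by the statute or by any particular web host.

```python
# Illustrative sketch of the six elements a DMCA takedown notice must contain
# under 17 U.S.C. § 512(c)(3)(A). The function and field names are
# hypothetical; the statute prescribes content, not a particular format.

def draft_notice(owner_name, work_description, infringing_urls, contact_info):
    elements = [
        # (i) physical or electronic signature of the owner or agent
        f"Signature: /s/ {owner_name}",
        # (ii) identification of the copyrighted work
        f"Copyrighted work: {work_description}",
        # (iii) identification and location of the infringing material
        "Infringing material:\n" + "\n".join(f"  - {u}" for u in infringing_urls),
        # (iv) contact information for the complaining party
        f"Contact: {contact_info}",
        # (v) good-faith belief statement
        "I have a good faith belief that use of the material in the manner "
        "complained of is not authorized by the copyright owner, its agent, "
        "or the law.",
        # (vi) accuracy statement under penalty of perjury
        "The information in this notification is accurate, and under penalty "
        "of perjury, I am the copyright owner or authorized to act on the "
        "owner's behalf.",
    ]
    return "\n\n".join(elements)
```

A notice assembled this way would still need to be sent to the service provider’s designated DMCA agent to trigger the safe harbor’s removal incentive.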

Using copyright law to fight revenge porn, while effective to an extent, is not without problems, including the following:

  • It only works if the victim owns the copyright.  While many revenge porn photos are taken by the victim himself or herself and then posted without his or her consent, this is not always the case. In situations where another person took the photo (e.g., if the victim’s girlfriend or boyfriend took it, or if the photo was taken secretly without the victim’s consent), the victim would not be the copyright owner and thus could not use copyright law to force removal.
  • Website operators may reject copyright infringement claims and refuse to remove the offending photos.  Although a victim could move forward with litigation to obtain an injunction and possibly monetary damages, revenge porn operators are often confident that (a) the costs of litigation are too expensive for many revenge porn victims and (b) many revenge porn victims fear making their situations even more public by bringing suit. To mitigate the risk of such increased exposure, victims can attempt to bring suit pseudonymously, and there are resources on the Internet devoted to assisting with this.
  • Even if a website operator removes the photos of one victim who follows all of the necessary steps to enforce his or her copyright, the website will still display photos of hundreds, if not thousands of other victims.

Thus, copyright law is not always enough to effectively combat revenge porn.

Defamation, Privacy and Other Related Laws

Several victims of revenge porn, as well as people who have had other personal information of a sexual or otherwise inappropriate nature published on revenge porn websites, have launched civil lawsuits under theories such as defamation, invasion of privacy, and identity theft.  As we have reported previously, one high-profile example of this came in July 2013, when a federal judge in Kentucky allowed a defamation lawsuit against the operator of the gossip site TheDirty to proceed and a jury awarded the victim (about whom the site had published false accounts of her sexual history) $338,000.

Prosecutors have also taken advantage of the fact that the operators of these sites often engage in criminal activity in order to obtain and capitalize on nude photos.  On January 23, 2014, Hunter Moore, known by some as the “most hated man on the Internet” and probably the most famous and successful revenge pornographer to date, was arrested on charges of illegally accessing personal email accounts in order to obtain photos for his revenge porn site.  Further, California Attorney General Kamala Harris recently announced the arrest of a revenge porn site operator on 31 counts of conspiracy, identity theft and extortion based on the unauthorized posting of nude photos.  Depending on the outcome of these cases and civil cases such as that against TheDirty (and their inevitable appeals), revenge porn victims may soon have additional avenues of legal recourse.

The most commonly used defense of website operators against charges like those discussed above is 47 U.S. Code § 230(c)(1), the provision of the Communications Decency Act of 1996 (CDA) that states: “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”  Revenge porn website operators have cited this statutory provision to argue that they are not responsible for the images they host if the content was provided by other users.  However, § 230 might not provide a defense in all cases.  First, § 230 does not grant a website operator immunity from federal criminal laws, intellectual property laws or communications privacy laws (such as the laws that Hunter Moore allegedly violated).  For example, if a website operator uses a photo of a victim submitted by a third party to extort money from the victim, § 230 would not provide any defense. Second, § 230 may not protect a website operator if the site contributes to the creation of the offending content.  In the case against TheDirty referenced above, the court rejected the operator’s § 230 defense, pointing out that the operator, who edited and added commentary to the submitted offending content, “did far more than just allow postings by others or engage in editorial or self-regulatory functions.” It is noteworthy, however, that the operator of TheDirty has filed an appeal in the Sixth Circuit and that the site did prevail in a 2012 case based on similar facts.

State Anti-Revenge Porn Laws

Another approach to deterring website operators from posting unauthorized nude photos is passing laws that criminalize that specific activity.  As of today, only two states, New Jersey and California, have such laws. These laws are fairly limited in scope in order to pass constitutional muster under the First Amendment. California’s law, enacted on October 1, 2013, is subject to a number of limitations. For example, it does not cover photos taken by the victim himself or herself, it does not apply if a third party obtains the photos through hacking, and a website operator can only be prosecuted if the state can prove that the operator intended to cause emotional distress.  Further, the penalties under this law are relatively minor: distribution of unauthorized nude images or videos is a misdemeanor, with convicted perpetrators facing up to six months in jail and a $1,000 fine.  Nonetheless, free speech advocates, including the Electronic Frontier Foundation (EFF), have criticized the law, stating that it is overly broad, criminalizes innocent behavior, and violates free speech rights.

Despite broad objections against anti-revenge porn laws from the EFF and various other free speech advocates, legislatures in several other states, including New York, Rhode Island, Maryland and Virginia, have introduced bills that would criminalize operation of revenge porn websites.  There is also discussion about enacting a federal anti-revenge porn statute. Whether these bills will be enacted, and the extent to which prosecutors will actually invoke the resulting laws if they are passed, remains uncertain. But such laws could become powerful weapons in the fight to eliminate revenge porn.

As revenge porn is a worldwide phenomenon, jurisdictions outside the U.S. have also passed laws aimed at punishing the practice. For example, a law criminalizing non-consensual distribution of nude photographs of other people was passed in the Australian state of Victoria in December 2013. And, in January 2014, the Israeli parliament passed a law that criminalizes revenge porn, punishing website operators who publish unauthorized photos or videos of a sexual nature with up to five years in prison.


As long as people fall in (or out of) love (or lust) and cameras and the Internet exist, the proliferation of revenge porn websites will remain a troubling issue.  As discussed above, however, the law does provide at least some recourse to the victims of revenge porn.

In 2012, we reported on a pair of district court decisions that, based on similar facts, split on whether the defendant, the gossip website TheDirty, qualified for immunity under Section 230 of the Communications Decency Act (CDA), the 1996 law that states “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Courts have generally held that Section 230 precludes defamation suits against website operators for content that their users create and post. TheDirty—which claims 22 million monthly unique visitors—invites users to “submit dirt” about themselves or others via a submission form requesting the basics of the “dirt,” with fields for “what’s happening” and “who, what, when, where, why,” and a link for users to upload photographs. Website operator Nik Richie then reposts the content, sometimes adding his own comments. Unsurprisingly, unhappy subjects of the gossip postings have sued Richie and his company on numerous occasions.

In one case, Jones v. Dirty World Entertainment Recordings, LLC in the Eastern District of Kentucky, former teacher and Cincinnati Bengals cheerleader Sarah Jones brought defamation and other state law claims related to two posts showing her photo and stating that she had sex with players and contracted sexually transmitted diseases. In 2011, Richie moved for judgment as a matter of law on grounds that Section 230 gave him immunity as the “provider of an interactive computer service” because, he argued, the defamatory content originated with a user of the site and not Richie, though he had added his own comments. The court denied the motion, citing “the very name of the site, the manner in which it is managed, and the personal comments of defendant Richie” as leading to its conclusion that Richie “specifically encouraged development of what is offensive about the content” and thereby lost immunity under Section 230. The court noted that Richie made comments addressed directly to Jones, including that he “love[d] how the Dirty Army [Richie’s term for the site’s users] ha[d] a war mentality,” a comment that the court held encouraged the posting of offensive content.

After a mistrial in February 2013, Richie moved for summary judgment, asking the court to reconsider its ruling that he failed to qualify for CDA immunity. He noted that “since the CDA was first enacted in 1996, there have been approximately 300 reported decisions addressing immunity claims” (a statistic set forth in Hill v. StubHub) but that his was the only one ever to go to trial, even though, Richie argued, other cases involved worse facts and clearer damage to the plaintiff. Richie also discussed in detail the Western District of Missouri opinion we reported on last year that granted summary judgment to Richie on CDA immunity grounds, explicitly disagreeing with the Jones court’s initial ruling. The court was not convinced, denying the motion simply “for the reasons set forth in the Court’s previous opinion.”

The case went to trial on July 8, 2013. The jury deliberated for more than ten hours and homed in on the key issue: in a note to the judge, the jury “request[ed] the evidence presented to the court detailing screenshots of how one submits a post to the website.” The jury, it seems, was asking for information to help it consider whether Richie and the site “encouraged the development of what is offensive”—the standard in the Sixth Circuit, of which the Eastern District of Kentucky is a part—about the ensuing posts about Jones. The jury awarded Jones $38,000 in actual damages and $300,000 in punitive damages.

Search Engine Watch, a respected analyst of the Internet industry, predicted that “[t]he success of this lawsuit is going to open a flood of new lawsuits against The Dirty and other sites like it that host third-party content” and noted that the case was good for the online reputation management industry—companies that provide services for individuals to manage what is said about them online—because the threat of suit would make website operators more responsive to requests to remove user-generated content.

From the courthouse steps, a tearful Jones said the jury got it right, and Richie’s attorney promised an immediate appeal. A few days later, Richie filed his appeal to the Sixth Circuit. We will keep you posted on the result.

In the latest issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we look at recent First Amendment, intellectual property, labor and privacy law developments affecting corporate users of social media and the Internet. We also recap major events from 2012 that have had a substantial impact on social media law, and we take a look at some of the big numbers racked up by social media companies over the past year.


In a string of cases against Google, approximately 20 separate plaintiffs have claimed that, through advertisements on its AdWords service, Google engaged in trademark infringement. These claims have been based on Google allowing its advertisers to use their competitors’ trademarks in Google-generated online advertisements. In a recent decision emerging from these cases, CYBERsitter v. Google, the U.S. District Court for the Central District of California found that Section 230 of the Communications Decency Act (CDA) provides protection for Google against some of the plaintiff’s state law claims.

As we have discussed previously (see here and here), Section 230 states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The Section 230 safe harbor immunizes websites from liability for content created by users, as long as the website did not “materially contribute” to the development or creation of the content. An important limitation on this safe harbor, however, is that it shall not “be construed to limit or expand any law pertaining to intellectual property.”

In the CYBERsitter case, plaintiff CYBERsitter, which sells an Internet content-filtering program, sued Google for selling and displaying advertisements incorporating the CYBERsitter trademark to ContentWatch, one of CYBERsitter’s competitors. CYBERsitter’s complaint alleged that Google had violated numerous federal and California laws by, first, selling the right to use CYBERsitter’s trademark to ContentWatch and, second, permitting and encouraging ContentWatch to use the CYBERsitter mark in Google’s AdWords advertising. Specifically, CYBERsitter’s complaint included the following claims: trademark infringement, contributory trademark infringement, false advertising, unfair competition and unjust enrichment.

Google filed a motion to dismiss, arguing that Section 230 of the CDA shielded it from liability for CYBERsitter’s state law claims. The court agreed with Google with respect to the state law claims of trademark infringement, contributory trademark infringement, unfair competition and unjust enrichment, but only to the extent that those claims sought to hold Google liable for the infringing content of the advertisements. Notably, the court applied Section 230 to these trademark claims without discussing the safe harbor’s apparent inapplicability to intellectual property claims. The reason is that the Ninth Circuit has held that the term “intellectual property” in Section 230 of the CDA refers to federal intellectual property law, and therefore state intellectual property law claims are not excluded from the safe harbor. The Ninth Circuit, however, appears to be an outlier in this interpretation; decisions from other circuit courts suggest disagreement with the Ninth Circuit’s approach, and district courts outside the Ninth Circuit have not followed its lead.

Google was not let off the hook entirely with regard to the plaintiff’s state trademark law claims. In dismissing the trademark infringement and contributory trademark infringement claims, the court distinguished between Google’s liability for the content of the advertisements and its liability for its potentially tortious conduct unrelated to the content of the advertisements. The court refused to dismiss these claims to the extent they sought to hold Google liable for selling to third parties the right to use CYBERsitter’s trademark, and for encouraging and facilitating third parties to use CYBERsitter’s trademark, without CYBERsitter’s authorization. Because such action by Google has nothing to do with the online content of the advertisements, the court held that Section 230 is inapplicable.

The court also found that CYBERsitter’s false advertising claim was not barred by Section 230 because Google may have “materially contributed” to the content of the advertisements and, therefore, under Section 230 would have been an “information content provider” and not immune from liability. Prof. Eric Goldman, who blogs frequently on CDA-related matters, has pointed out an apparent inconsistency in the CYBERsitter court’s reasoning: under the court’s ruling, Google did not materially contribute to the content of the advertisements for purposes of the trademark infringement, contributory infringement, unfair competition and unjust enrichment claims, yet might have done so for purposes of the false advertising claim.

CYBERsitter highlights at least two key points for website operators, bloggers, and other providers of interactive computer services. First, at least in the Ninth Circuit, but not necessarily in other circuits, the Section 230 safe harbor provides protection from state intellectual property law claims with regard to user-generated content. Second, to be protected under the Section 230 safe harbor, the service provider must not have created the content and it must not have materially contributed to such content’s creation.

We’ve reported before on Section 230 of the Communications Decency Act (CDA), the 1996 statute that states, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”  Courts have interpreted Section 230 to immunize social media and other websites from liability for publishing content created by their users, provided the site owners are not “responsible in whole or in part, for the creation or development of” the offending content.

Two recent federal cases involving the gossip website TheDirty show that, 15 years after the landmark Zeran v. AOL case interpreting Section 230 immunity broadly, courts still grapple with the statute and, arguably, get cases wrong, particularly when faced with unsavory content. TheDirty is an ad-supported website that features gossip, salacious content, news and sports stories.  The site, run by owner/editor Hooman Karamian, a/k/a Nik Richie, prompts users to “submit dirt” via a basic text form requesting “what’s happening” and “who, what, when, where, why,” and allows users to upload files. In response, users, referred to on the site as the “Dirty Army,” submit stories and photographs along with gossip about the people pictured. Richie then posts the pictures and information, often accompanied by his own comments. Two such racy posts, one detailing the sex habits of a Cincinnati Bengals cheerleader and the other about the supposed exploits of a “Church Girl,” led their subjects to bring defamation claims in federal court. Third-party users, not TheDirty, generated the content. Cases dismissed on Section 230 grounds, right?  Not quite.

In Jones v. Dirty World Entertainment Recordings, a case in the U.S. District Court for the Eastern District of Kentucky, plaintiff Sarah Jones, a cheerleader for the Cincinnati Bengals football team and also a high school teacher, sued over two user-submitted posts that included her picture and statements regarding her sex partners, as well as allegations that she had sexually transmitted diseases. Richie added a one-line comment— “why are all high school teachers freaks in the sack?”—and published the post. Jones requested that the posts be removed, but the site refused. Richie also commented on the site directly addressing Jones, saying her concern about the post was misguided and that she was “d[igging] her own grave” by calling attention to it. Jones sought damages for defamation and invasion of privacy under state tort law, and the defendants moved for judgment as a matter of law on CDA immunity grounds.

The court held that TheDirty did not qualify for CDA immunity because it “specifically encouraged the development of what is offensive about the content” (citing the Tenth Circuit’s opinion in Federal Trade Comm’n v. Accusearch).  The court found that the site encouraged the development of, and therefore was responsible for, the offensive content based on the site’s name, the fact that the site encouraged the posting of “dirt,” Richie’s personal comments added to users’ posts, and his direct reference to the plaintiff’s request that the post be taken down. The court focused on Richie’s comments, including his statement “I love how the Dirty Army has war mentality. Why go after one ugly cheerleader when you can go after all the brown baggers.”

The Jones court’s analysis diverges from prevailing CDA case law in a few respects. For example, regarding the issue of responding to a subject’s request that an allegedly defamatory post be taken down, the Ninth Circuit has held that deciding what to post and what to remove are “traditional duties of a publisher” for which the CDA provides immunity to website operators.  More critically, in adopting the “specifically encouraged the development of what is offensive” standard coined in Accusearch, the court in Jones reasoned that by requesting “dirt,” the site “encourage[d] material which is potentially defamatory or an invasion of the subject’s privacy,” and therefore lost CDA immunity.  That reasoning, though, could extend to any website functionality, such as free-form text boxes, that permits users to input potentially defamatory material. To hold that a website operator loses immunity based on the mere potential that users will post defamatory content effectively vitiates CDA immunity and parts ways with cases like the Ninth Circuit’s decision in Fair Housing Council v., which held that a website’s provision of “neutral tools” cannot constitute development of content for purposes of the exception to CDA immunity. For these and other reasons, one leading Internet law commentator calls the case a “terrible ruling that needs to be fixed on appeal.” TheDirty’s appeal to the Sixth Circuit is pending.

In a more recent case, S.C. v. Dirty World, LLC, the U.S. District Court for the Western District of Missouri held that Richie and TheDirty did qualify for CDA Section 230 immunity on facts similar to those in Jones. The plaintiff in S.C. brought suit based on a user-generated post on TheDirty that showed her picture along with a description alleging that she had relations with the user’s boyfriend and attempted to do so with the user’s son. Richie published the post, adding a comment about the plaintiff’s appearance. The court explained that, because a third party authored the allegedly defamatory content, CDA immunity turned on whether TheDirty “developed” the content by having “materially contribute[d] to [its] alleged illegality.”  The court held that the defendants did not materially contribute to the post’s alleged illegality because the defendants never instructed or requested the third party to submit the post at issue, “did nothing to specifically induce it,” and did not add to or substantively alter the post before publishing it on the site.

After having noted these facts, and how they differed from the facts in Jones, which the S.C. plaintiff had cited, the court explicitly “distanced itself from certain legal implications set forth in Jones.”  The S.C. court pointed out that a “broad” interpretation of CDA immunity is the accepted view.  It explained that CDA immunity does not, and should not, turn on the “name of the site in and of itself,” but instead focuses on the content that is actually defamatory or otherwise gives rise to legal liability.  The court noted, for example, that the site itself has a variety of content, much of it not defamatory or capable of being defamatory (e.g., sports stories and other news).

Given that some may consider TheDirty’s gossip content and mission extreme, cases like S.C. likely provide peace of mind to operators of more conventional social media sites.  Still, should Jones survive appeal, it could lead to forum shopping in cases where plaintiffs expect to face CDA immunity defenses, because the “specifically encouraged” standard could, as in Jones, lead to a loss of immunity. We’ll keep you posted on the appeal.

As we reported last month, the safe harbor in Section 230 of the Communications Decency Act (“CDA”) immunizes social media providers from liability based on content posted by users under most circumstances, but not from liability for content that the providers themselves generate.  But what about when providers block Internet traffic such as “spam” – does the CDA immunize service providers from liability for claims related to messages not reaching their intended recipients?

In two recent unpublished cases, Holomaxx Techs. Corp. v. Microsoft Corp. and Holomaxx Techs. Corp. v. Yahoo! Inc., Judge Fogel of the U.S. District Court for the Northern District of California held that the CDA does provide immunity in such circumstances.  (Notably, Judge Fogel also decided earlier this year that Facebook postings qualify as “commercial electronic mail messages” regulated under CAN-SPAM, the federal anti-spam statute.)  The Holomaxx holdings did not break new ground, but the cases clearly show that Section 230 of the CDA provides immunity not just with respect to user-posted content, but also for service providers’ blocking and restriction of messages.

Plaintiff Holomaxx Technologies runs an email marketing and ecommerce business development service.  After what it alleged was MSN’s and Yahoo!’s continued refusal to deliver its legitimate emails, Holomaxx sued both companies for state law tort claims alleging interference with contract and business advantage, defamation, false light, and unfair competition, and for federal claims under the Wiretap Act, the Computer Fraud and Abuse Act, and the Stored Communications Act.  Seeking both damages and an injunction, Holomaxx claimed that MSN and Yahoo! “knowingly relie[d] on faulty spam filters” and that it was “entitled to send legitimate, permission-based emails to its clients’ customers now.”

In its complaints against Microsoft and Yahoo!, Holomaxx explained that it delivers for its customers ten million email messages a day, including three million to Hotmail/MSN users and six million to Yahoo! users.  Holomaxx claimed that it sent only legitimate, requested emails to consenting users and complied with CAN-SPAM.  According to Holomaxx, MSN’s and Yahoo!’s email filtering systems began blocking, rerouting, and/or throttling Holomaxx-generated emails to MSN and Yahoo! users, and MSN and Yahoo! ignored its requests to be unblocked and failed to identify specific problems with Holomaxx’s emails.  Also according to Holomaxx, MSN and Yahoo! acted in bad faith because they did not work with Holomaxx in the manner prescribed by the abuse desk guidelines of the Messaging Anti-Abuse Working Group (MAAWG), to which both companies belong and which Holomaxx characterized as an “industry standard.”  Finally, Holomaxx claimed that anticompetitive purposes drove MSN’s and Yahoo!’s blocking, and that the fact that the two companies had initially resumed delivery of Holomaxx emails and then stopped again showed that the companies acted in bad faith.

MSN and Yahoo! moved to dismiss, citing CDA Section 230(c)(2), which on its face immunizes service providers for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers … objectionable,” and arguing that the facts that Holomaxx alleged were insufficient to overcome this statutory immunity.

Agreeing, Judge Fogel called CDA immunity “robust” and, citing the Ninth Circuit’s opinion in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, noted that “all doubts must be resolved in favor of immunity.”  The court also cited Zango v. Kaspersky, where the Ninth Circuit explained that the CDA “plainly immunizes” a provider that “make[s] available software that filters or screens material that the user or the provider deems objectionable.”  In Zango, the Ninth Circuit affirmed the district court’s dismissal of a software maker’s suit against an anti-adware security firm for allegedly making it difficult for users who had installed the security firm’s anti-adware tools to use the plaintiff’s software.  The Ninth Circuit cautioned, however, that a provider might lose immunity where it “block[s] content for anticompetitive purposes or merely at its malicious whim.”  Under that standard, the question was whether Holomaxx alleged sufficient facts to show that MSN and Yahoo! acted in an “absence of good faith” when they blocked Holomaxx’s emails.

The answer was no.  The court discounted Holomaxx’s reliance on the MAAWG guidelines because Holomaxx had not shown them to be an industry standard, and the companies’ temporary resumption of delivery of Holomaxx’s emails did not demonstrate an anticompetitive motive because the CDA gives providers wide discretion in deeming content objectionable.  As to alleged malice, the court explained that “[t]o permit Holomaxx to proceed solely on the basis of a conclusory allegation that Yahoo! acted in bad faith essentially would rewrite the CDA.”  (Note: On its face, the CDA did not apply to Holomaxx’s Wiretap Act and Stored Communications Act claims; the court dismissed those claims because it found that Holomaxx failed to adequately allege how MSN or Yahoo! had violated those statutes.)

A leading commentator has noted that the Ninth Circuit’s Zango case provided website operators a “high degree of freedom to make judgments about how to best serve their customers.”  The Holomaxx dismissals confirm that point.  With social media spam on the rise even as email spam decreases and web-based email in general declines, both the Holomaxx and Zango cases could assist social media providers in their efforts to prevent unsolicited messages and abuse while at the same time maintaining the instant, social, viral qualities that keep users engaged and advertisers paying.

One final point: as one observer notes, Holomaxx’s compliance with CAN-SPAM, described in great detail in each of the complaints, did not matter to Judge Fogel’s holding.  That is, the mere fact that Holomaxx’s marketing messages were legal did not compel Microsoft or Yahoo! to either deliver those messages or lose CDA immunity.  Thus, the court rejected an argument that would implicitly have made the requirements of CAN-SPAM a ceiling, rather than a floor, for service providers’ anti-abuse efforts.

Although common law generally holds publishers responsible for the content that they publish, the Communications Decency Act (“CDA”) gives website operators broad protection from liability for content posted by users.  Courts have applied the CDA in favor of website owners in nearly 200 cases, including cases involving Google, Facebook, MySpace, and even bloggers sued for content posted by their co-bloggers.  Commentators hail the CDA as the legal framework that made possible the rise of social media.  CDA immunity, however, is not limitless.  For example, as the Ninth Circuit explained in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, where “a website helps to develop unlawful content,” it loses CDA immunity “if it contributes materially to the alleged illegality of the conduct.”  Two recent cases illustrate how websites can lose CDA immunity as a result of contributing to offending content.

The district court in Levitt v. Yelp considered business owners’ claims that Yelp manipulated Yelp pages, rankings, and reviews in an extortionate manner that violated California’s unfair business practices law.  Plaintiffs alleged that Yelp threatened to, and did, take down positive reviews if plaintiffs did not buy ads, and that Yelp’s salespeople manipulated rankings on Yelp.  The court first rejected Yelp’s jurisdictional argument that the CDA prevented the court from hearing the claims.  Second, the court held that the CDA did not immunize Yelp because some of the claims focused on Yelp’s sales practices, and not merely Yelp’s editing or selective display of user reviews.  The court nonetheless dismissed the plaintiffs’ claims, finding that they had not pleaded sufficient facts to show extortion by Yelp, but gave the plaintiffs leave to amend.

In Hill v. StubHub, a North Carolina state court considered claims that StubHub violated state anti-scalping statutes.  The court rejected StubHub’s CDA defense because StubHub’s service suggested that users input particular prices for Miley Cyrus concert tickets, and StubHub profited when they did.  Because StubHub suggested the illegal prices, monitored its inventory for particular events, and made money only if sufficient tickets were sold (and even then took a percentage of the ticket price), the court found that StubHub “developed” the unlawful content: a system where users scalped tickets.  The court explained that StubHub “encouraged, materially contributed to, and made aggressive use” of the pricing content posted by users, so StubHub could not avoid liability for it.

Together, the Yelp and StubHub cases show that CDA immunity, although critical for social media operators’ use of user-generated content, is not boundless.  Sites can lose CDA immunity by directing or contributing to offending content or as a result of the actions of their salespeople.

Consumers often turn to the Internet for reviews before purchasing products or services, and companies are increasingly interested in ensuring that such reviews reflect positively and accurately on their businesses.  When patients post negative or allegedly inaccurate reviews about their doctors on the Internet, however, doctors are often prevented from responding due to ethical obligations such as patient confidentiality.  Moreover, even if such reviews were to constitute defamation, under U.S. law, Section 230 of the Communications Decency Act (“CDA”) would prevent doctors from holding the website operators liable for hosting defamatory statements posted by others, such as reviews posted by site visitors.  Doctors would thus be left with the undesirable option of pursuing action against the patients directly, which often involves additional legal proceedings to determine the authors of anonymous reviews.  As a way to obtain greater control under such circumstances, an organization known as Medical Justice has created controversy by recommending that doctors require patients to sign contracts limiting their rights to publish reviews.

Over time, these contracts have reflected different approaches.  In an earlier version, the patient agreed to “refrain from directly or indirectly publishing or airing commentary regarding Physician and his practice, expertise and/or treatment.” The doctor would presumably be able to seek an injunction against the patient for breaches of the contract, such as the publication of reviews.  The patient’s agreement to such restrictions was described as consideration for the doctor’s treatment and for the doctor’s agreement not to exploit “legal privacy loopholes” that the contract claimed would otherwise be permissible under federal privacy law.

While this initial approach would have imposed liability on the patient for publishing reviews, it would still have allowed websites to continue hosting such reviews under the protection of Section 230 of the CDA.  More recent contracts—possibly revised in response to this problem—do not directly restrain patients from posting reviews, but instead require the patient to prospectively assign to the doctor the copyright in any such reviews.  “[I]f Patient prepares such commentary for publication on web pages, blogs, and/or mass correspondence about Physician, the Patient exclusively assigns all Intellectual Property rights, including copyrights . . . ” to the physician.  If valid, such an assignment would allow doctors to send “take-down” notices under the Digital Millennium Copyright Act (“DMCA”) to websites hosting the patient reviews, thus requiring such websites to remove such reviews or face liability for copyright infringement.  Section 230 of the CDA would not protect websites that receive such DMCA take-down notices, because Section 230 expressly does not provide any defense to infringement of copyright or other intellectual property rights.

As a novel use of copyright law, the Medical Justice approach may raise more problems for doctors than it solves.  The website DoctoredReviews has identified several issues facing doctors who wish to enforce such contracts against patients or to serve take-down notices to websites hosting patient reviews.  For example, such contracts may be unconscionable under state law and thus unenforceable, given the nature of the terms and the superior bargaining power of the doctor.  Doctors may even face liability for attempting to exercise their rights under the DMCA.  For example, if a doctor knows that he has not actually received a copyright assignment from the author of the review, then the doctor is potentially liable under the DMCA for submitting a take-down notice based on misrepresented information.  Because many reviews are published anonymously, some doctors require all patients to sign the contracts, in hopes of establishing that any patient publishing a review must necessarily have assigned the copyright to the doctor.  Even if a doctor does hold copyright assignments from all of her patients, the doctor may still know or suspect that a review had been fictitiously authored by a non-patient, who would not have signed any agreement.  The publication of patient reviews may also constitute noninfringing fair use, and at least one court has found that copyright owners must consider whether fair use applies before sending DMCA take-down notices.

In addition to potential liability under the DMCA, doctors may face problems arising from the legal consideration that they offer to patients in exchange for the copyright assignments.  In certain instances, the U.S. Department of Health & Human Services has prohibited doctors from representing that a patient’s agreement is in consideration for “providing greater privacy protection than required by law” when the law does, in fact, require such greater privacy protection.  Beyond the legal issues, the use of such contracts may also violate a doctor’s ethical obligation to put the patient’s interests before the doctor’s own financial interests.

Other industries have also explored the use of prospective copyright assignments, although with different, and less ambitious, approaches than Medical Justice recommends.  The Burning Man festival, for example, obtains a joint ownership interest, together with attendees, in the copyright to any photographs taken at the event.  Attendees also agree to make only “personal use” of such photographs.  The agreement clarifies that, with respect to social networks, a use is only deemed “personal” if the attendee does not upload the images “with the intent to publicly display them beyond one’s immediate network, and if one’s immediate network is not inordinately large.”  The festival’s representatives have stated that these terms are intended to protect the event from commercialization, and to protect the privacy of the attendees.  In another example, the pop singer Lady Gaga reportedly requires a copyright assignment of photographs taken at concerts as a condition to obtaining press credentials.  The photographers receive a limited license to use the photographs in connection with a specific website for a four-month period.

As user-generated review websites such as Yelp continue to grow in popularity, one can anticipate increasingly clever uses of intellectual property law by businesses intent on exercising greater control over their online personae.  Yet, as the Medical Justice situation shows, too clever by half may not be clever enough.  In the end, while social media may provide a company with the world’s largest, most cost-effective platform for promoting its goods and services, that same platform is also available to the company’s detractors.