Virginia’s highest court recently held that Yelp could not be forced to turn over the identities of anonymous online reviewers whose posts, a Virginia carpet-cleaning business owner claimed, had tarnished his business.

In the summer of 2012, Joseph Hadeed, owner of Hadeed Carpet Cleaning, sued seven anonymous Yelp reviewers after receiving a series of critical reviews. Hadeed alleged that the reviewers were competitors masking themselves as Hadeed’s customers and that his sales tanked after the reviews were posted. Hadeed sued the reviewers as John Doe defendants for defamation and then subpoenaed Yelp, demanding that it reveal the reviewers’ identities.

Yelp argued that, without any proof that the reviewers were not Hadeed’s customers, the reviewers had a First Amendment right to post anonymously.

A Virginia trial court and the Court of Appeals sided with Hadeed, ordering Yelp to turn over the reviewers’ identities and holding it in contempt when it did not. But in April 2015, the Virginia Supreme Court vacated the lower court decisions on procedural grounds. Because Virginia’s legislature did not give Virginia’s state courts subpoena power over non-resident non-parties, the Supreme Court concluded, the Virginia trial court could not order the California-headquartered Yelp to produce documents located in California for Hadeed’s defamation action in Virginia.

Although the decision was a victory for Yelp, it was a narrow one, resting on procedural grounds. The Virginia Supreme Court did not address the broader First Amendment argument about anonymous posting and noted that it wouldn’t quash the subpoena because Hadeed could still try to enforce it under California law.

After the ruling, Yelp’s senior director of litigation, Aaron Schur, posted a statement on the company’s blog stating that, if Hadeed pursued the subpoena in California, Yelp would “continue to fight for the rights of these reviewers under the reasonable standards that California courts, and the First Amendment, require (standards we pushed the Virginia courts to adopt).” Schur added, “Fortunately the right to speak under a pseudonym is constitutionally protected and has long been recognized for the important information it allows individuals to contribute to public discourse.”

In 2009, a California law took effect, allowing anonymous Internet speakers whose identity is sought under a subpoena in California in connection with a lawsuit filed in another state to challenge the subpoena and recover attorneys’ fees if they are successful. In his Yelp post, Schur added that Hadeed’s case “highlights the need for stronger online free speech protection in Virginia and across the country.”

Had Hadeed sought to enforce the subpoena in California, the result might have been the same, but possibly on different grounds. In California, where Yelp and many other social media companies are headquartered, the company would have been subject to a court’s subpoena power. Still, Yelp might have been protected from having to disclose its users’ identities: California courts have offered protections for anonymous speech under both the First Amendment to the U.S. Constitution and the state constitutional right of privacy.

Nevertheless, there is no uniform rule as to whether companies must reveal identifying information of their anonymous users. In 2013, in Chevron v. Donziger, federal Magistrate Judge Nathanael M. Cousins of the Northern District of California concluded that Chevron’s subpoenas seeking identifying information of non-party Gmail and Yahoo Mail users were enforceable against Google and Yahoo, respectively, because the subpoenas did not seek expressive activity and because there is no privacy interest in subscriber and user information associated with email addresses.

On the other hand, in March 2015, Magistrate Judge Laurel Beeler of the same court held, in Music Group Macao Commercial Offshore Ltd. v. Does, that the plaintiffs could not compel nonparty Twitter to reveal the identifying information of its anonymous users, who, as in the Hadeed case, were Doe defendants. Music Group Macao sued the Doe defendants in Washington federal court for anonymously tweeting disparaging remarks about the company, its employees, and its CEO. After the Washington court ruled that the plaintiffs could obtain the identifying information from Twitter, the plaintiffs sought to enforce the subpoena in California. Magistrate Judge Beeler concluded that the Doe defendants’ First Amendment rights to speak anonymously outweighed the plaintiffs’ need for the requested information, citing familiar concerns that forcing Twitter to disclose the speakers’ identities would unduly chill protected speech.

Courts in other jurisdictions have imposed a range of evidentiary burdens on plaintiffs seeking the disclosure of anonymous Internet speakers. For example, federal courts in Connecticut and New York have required plaintiffs to make a prima facie showing of their claims before requiring internet service providers (ISPs) to disclose anonymous defendants’ identities. A federal court in Washington found that a higher standard should apply when a subpoena seeks the identity of an Internet user who is not a party to the litigation. The Delaware Supreme Court has applied an even higher standard, expressing concern “that setting the standard too low will chill potential posters from exercising their First Amendment right to speak anonymously.”

These cases show that courts are continuing to grapple with social media as a platform for expressive activity. Although Yelp and Twitter were protected from having to disclose their anonymous users’ identities in these two recent cases, this area of law remains unsettled, and companies with a social media presence should be familiar with the free speech and privacy law in the states where they conduct business and monitor courts’ treatment of these evolving issues.

Mark Zuckerberg famously stated that the purpose of Facebook is “to make the world more open and connected,” and indeed Facebook, other social media outlets and the Internet in general have brought worldwide openness and connection-through-sharing to levels unparalleled at any point in history. With this new universe of limitless dissemination often comes the stripping away of privacy, and “revenge porn,” a relatively new but seemingly inevitable outgrowth of social media and the Internet, is stripping away privacy in the most literal sense.

Defining “revenge porn” is relatively simple and does not require any sort of “I know it when I see it” test; in short, “revenge porn” is the act of publicly disseminating nude photographs or videos of somebody without her or his consent. The name derives from the fact that the act is most often associated with spurned men posting photos on the Internet that were received from their ex-girlfriends in confidence as “revenge” for breaking up with them or otherwise hurting them. But recently, more and more photos are popping up that were either taken without the victim’s consent or obtained by hacking the victim’s email or computer.  Revenge porn website operators invite users to post nude photos of their exes (or of anybody else, for that matter) and often allow the community to comment on the photos (which in many cases results in a barrage of expletives aimed at shaming the victim).

Recently, operators of revenge porn sites have taken attacks to a higher level, inviting visitors to post victims’ full names, addresses, phone numbers, places of work and other items of personal information alongside their photographs.  In some cases, victims’ faces are realistically superimposed onto nude photographs of pornographic actors or actresses in order to achieve the same effect when no actual nude photographs of the victims can be found. Victims of revenge porn often suffer significant harm, facing humiliation, loss of reputation, and in some cases, loss of employment. Due to the all-pervasive and permanent nature of the Internet, once a victim’s photo is posted online, it is very difficult for him or her to have it completely removed.  Operators of revenge porn sites have sometimes capitalized on this fact by offering to remove the photos for a fee (or running advertisements for services that will do so).

Operators of revenge porn websites often shield themselves behind the First Amendment, and website operators have been known to employ sophisticated legal teams in order to protect themselves from civil and criminal liability and to maintain operation of their sites.  Nonetheless, the law provides several avenues for victims seeking to have photos removed from websites, obtain restitution and, to the extent damage has not already been done, clear their names.

Self-Help as a First Step

Although the Internet is the tool used to disseminate revenge porn, it also now provides resources for victims who seek help in dealing with this invasion of privacy.  The website WomenAgainstRevengePorn.com contains a step-by-step guide to getting nude photos removed from the Internet, as well as contact information for lawyers and other advocates for revenge porn victims in various states.

According to WomenAgainstRevengePorn.com, the first step to mitigating the damage of revenge porn is to establish more of an online presence.  Although this may be counterintuitive, it is actually a logical approach: one of the biggest harms of revenge porn is that a friend, family member or employer will find nude photos when entering the victim’s name into a search engine.  By opening Facebook, Twitter, Pinterest and Instagram accounts under his or her name, a victim may be able to move the revenge porn photo to a lower position in search engine results.

Because nude photos tend to be spread quickly on the Internet, WomenAgainstRevengePorn.com also encourages victims to use Google’s reverse image search engine to find all websites where the victim’s photos may appear.  After taking careful note of all locations where such photos appear, victims are encouraged to file police reports.

Copyright Infringement

The next step recommended by WomenAgainstRevengePorn.com for removing photos, an approach that has been successful in a number of cases (including as described in this particularly fascinating account), is for the victim to take advantage of U.S. copyright law.  Under U.S. copyright law, a person who takes a nude photo of herself or himself is the owner of the copyright in that photo and thus can enjoin others from reproducing or displaying the photo.  A victim may, therefore, submit a “takedown” notice under Section 512 of the Digital Millennium Copyright Act (DMCA) to the webmasters and web hosts of the offending sites as well as to search engine sites where the nude photo may come up as a search result (Google even provides step-by-step instructions).  Because the DMCA provides an infringement safe harbor to web service providers who comply with the statute’s requirements, many search engines and web hosts will remove revenge porn photos upon receipt of a takedown notice.  If the photo is not removed, the victim may consider registering his or her copyrights in the photos and suing the web host or search engine in federal court, although this may not always be a desirable approach for the reasons described below.
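For readers who want a concrete sense of what such a notice contains, below is a minimal sketch, in Python, of the elements a Section 512(c)(3) takedown notice generally must include. The names, contact details and URL are invented placeholders rather than any official form, and a real notice should track the statutory language and the recipient’s own submission process.

```python
def build_takedown_notice(owner_name, owner_email, work_description, infringing_urls):
    """Assemble the elements a Section 512(c)(3) notice generally must contain."""
    url_lines = "\n".join(f"  - {url}" for url in infringing_urls)
    return f"""To the Designated Copyright Agent:

1. Copyrighted work: {work_description}, owned by {owner_name}.
2. Material claimed to be infringing, and its location:
{url_lines}
3. Contact information: {owner_name}, {owner_email}.
4. I have a good-faith belief that the use described above is not authorized
   by the copyright owner, its agent, or the law.
5. The information in this notice is accurate and, under penalty of perjury,
   I am the owner (or authorized to act on behalf of the owner) of the
   copyright allegedly infringed.

/s/ {owner_name}
"""

# Hypothetical example values for illustration only.
print(build_takedown_notice(
    "Jane Doe",
    "jane@example.com",
    "A self-portrait photograph taken by Jane Doe",
    ["https://example-host.test/photos/123"],
))
```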

Using copyright law to fight revenge porn, while effective to an extent, is not without problems, including the following:

  • It only works if the victim owns the copyright.  While many revenge porn photos are taken by the victim himself or herself and then posted without his or her consent, this is not always the case. In situations where another person took the photo (e.g., if the victim’s girlfriend or boyfriend took it, or if the photo was taken secretly without the victim’s consent), the victim would not be the copyright owner and thus could not use copyright law to force removal.
  • Website operators may reject copyright infringement claims and refuse to remove the offending photos.  Although a victim could move forward with litigation to obtain an injunction and possibly monetary damages, revenge porn operators are often confident that (a) the costs of litigation are too expensive for many revenge porn victims and (b) many revenge porn victims fear making their situations even more public by bringing suit. To mitigate the risk of such increased exposure, victims can attempt to bring suit pseudonymously, and there are resources on the Internet devoted to assisting with this.
  • Even if a website operator removes the photos of one victim who follows all of the necessary steps to enforce his or her copyright, the website will still display photos of hundreds, if not thousands, of other victims.

Thus, copyright law is not always enough to effectively combat revenge porn.

Defamation, Privacy and Other Related Laws

Several victims of revenge porn, as well as people who have had other personal information of a sexual or otherwise inappropriate nature published on revenge porn websites, have launched civil lawsuits under theories such as defamation, invasion of privacy, and identity theft.  As we have reported previously, one high profile example of this came in July 2013, when a federal judge in Kentucky allowed a defamation lawsuit against the operator of a site called TheDirty.com to proceed and a jury awarded the victim (about whom the site had published false accounts of her sexual history) $338,000.

Prosecutors have also taken advantage of the fact that the operators of these sites often engage in criminal activity in order to obtain and capitalize on nude photos.  On January 23, 2014, Hunter Moore, known by some as the “most hated man on the Internet” and probably the most famous and successful revenge pornographer to date, was arrested on charges of illegally accessing personal email accounts in order to obtain photos for his revenge porn site.  Further, California Attorney General Kamala Harris recently announced the arrest of a revenge porn site operator on 31 counts of conspiracy, identity theft and extortion based on the unauthorized posting of nude photos.  Depending on the outcome of these cases and civil cases such as that against TheDirty.com (and their inevitable appeals), revenge porn victims may soon have additional avenues of legal recourse.

The most commonly used defense of website operators against charges like those discussed above is 47 U.S. Code § 230(c)(1), the provision of the Communications Decency Act of 1996 (CDA) that states: “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”  Revenge porn website operators have cited this statutory provision to argue that they are not responsible for the images they host if the content was provided by other users.  However, § 230 might not provide a defense in all cases.  First, § 230 does not grant a website operator immunity from federal criminal laws, intellectual property laws or communications privacy laws (such as the laws that Hunter Moore allegedly violated).  For example, if a website operator uses a photo of a victim submitted by a third party to extort money from the victim, § 230 would not provide any defense. Second, § 230 may not protect a website operator if the site contributes to the creation of the offending content.  In the case against TheDirty.com referenced above, the court rejected the operator’s § 230 defense, pointing out that the operator, who edited and added commentary to the submitted offending content, “did far more than just allow postings by others or engage in editorial or self-regulatory functions.” It is noteworthy, however, that the website operator of TheDirty.com has filed an appeal in the Sixth Circuit and that TheDirty.com did prevail in a 2012 case based on similar facts.

State Anti-Revenge Porn Laws

Another approach to deterring website operators from posting unauthorized nude photos is passing laws that criminalize that specific activity.  As of today, only two states, New Jersey and California, have such laws. These laws are fairly limited in scope in order to pass constitutional muster under the First Amendment. California’s law, enacted on October 1, 2013, is subject to a number of limitations. For example, it does not cover photos taken by the victim himself or herself, it does not apply if a third party obtains the photos through hacking, and a website operator can only be prosecuted if the state can prove that the operator intended to cause emotional distress.  Further, the penalties under this law are relatively minor: distribution of unauthorized nude images or videos is a misdemeanor, with convicted perpetrators facing up to six months in jail and a $1,000 fine.  Nonetheless, free speech advocates, including the Electronic Frontier Foundation (EFF), have criticized the law, stating that it is overly broad, criminalizes innocent behavior, and violates free speech rights.

Despite broad objections against anti-revenge porn laws from the EFF and various other free speech advocates, legislatures in several other states, including New York, Rhode Island, Maryland and Virginia, have introduced laws that would criminalize operation of revenge porn websites.  There is also discussion about enacting a federal anti-revenge porn statute. Whether these laws will be enacted, and the extent to which prosecutors will actually invoke these laws if they are passed, remains uncertain. But such laws could become powerful weapons in the fight to eliminate revenge porn.

As revenge porn is a worldwide phenomenon, jurisdictions outside the U.S. have also passed laws aimed at punishing the practice. For example, a law criminalizing non-consensual distribution of nude photographs of other people was passed in the Australian state of Victoria in December 2013. And, in January 2014, the Israeli parliament passed a law that criminalizes revenge porn, punishing website operators who publish unauthorized photos or videos of a sexual nature with up to five years in prison.

Conclusion

As long as people fall in (or out of) love (or lust) and cameras and the Internet exist, the proliferation of revenge porn websites will remain a troubling issue.  As discussed above, however, the law does provide at least some recourse to the victims of revenge porn.

In 2012, we reported on a pair of district court decisions that, based on similar facts, split on whether defendant TheDirty.com, a gossip website, qualified for immunity under Section 230 of the Communications Decency Act (CDA), the 1996 law that states “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Courts have generally held that Section 230 precludes defamation suits against website operators for content that their users create and post.

TheDirty.com—which claims 22 million monthly unique visitors—invites users to “submit dirt” about themselves or others via a submission form requesting the basics of the “dirt,” with fields for “what’s happening” and “who, what, when, where, why,” and a link for users to upload photographs. Website operator Nik Richie then reposts the content, sometimes adding his own comments. Unsurprisingly, unhappy subjects of the gossip postings have sued Richie and his company on numerous occasions.

In one case, Jones v. Dirty World Entertainment Recordings, LLC in the Eastern District of Kentucky, former teacher and Cincinnati Bengals cheerleader Sarah Jones brought defamation and other state law claims related to two posts showing her photo and stating that she had sex with players and contracted sexually transmitted diseases. In 2011, Richie moved for judgment as a matter of law on grounds that Section 230 gave him immunity as the “provider of an interactive computer service” because, he argued, the defamatory content originated with a user of the site and not Richie, though he had added his own comments. The court denied the motion, citing “the very name of the site, the manner in which it is managed, and the personal comments of defendant Richie” as leading to its conclusion that Richie “specifically encouraged development of what is offensive about the content” and thereby lost immunity under Section 230. The court noted that Richie made comments addressed directly to Jones, including that he “love[d] how the Dirty Army [Richie’s term for the site’s users] ha[d] a war mentality,” a comment that the court held encouraged the posting of offensive content.

After a mistrial in February 2013, Richie moved for summary judgment, asking the court to reconsider its ruling that he failed to qualify for CDA immunity. He noted that “since the CDA was first enacted in 1996, there have been approximately 300 reported decisions addressing immunity claims” (a statistic set forth in Hill v. Stubhub) but that his was the only one ever to go to trial, even though, Richie argued, other cases involved worse facts and clearer damage to the plaintiff. Richie also discussed in detail the Western District of Missouri opinion we reported on last year that granted summary judgment to Richie on CDA immunity grounds, explicitly disagreeing with the Jones court’s initial ruling. The court was not convinced, denying the motion simply “for the reasons set forth in the Court’s previous opinion.”

The case went to trial on July 8, 2013. The jury deliberated for more than ten hours and homed in on the key issue: in a note to the judge, the jury “request[ed] the evidence presented to the court detailing screenshots of how one submits a post to website TheDirty.com.” The jury, it seems, was asking for information to help it consider whether Richie and the site “encouraged the development of what is offensive”—the standard in the Sixth Circuit, of which the Eastern District of Kentucky is a part—about the ensuing posts about Jones. The jury awarded Jones $38,000 in actual damages and $300,000 in punitive damages.

Search Engine Watch, a respected analyst of the Internet industry, predicts that “[t]he success of this lawsuit is going to open a flood of new lawsuits against The Dirty and other sites like it that host third-party content” and noted that the case was good for the online reputation management industry—companies that provide services for individuals to manage what is said about them online—because the threat of suit would make website operators more responsive to requests to remove user-generated content.

From the courthouse steps, a tearful Jones said the jury got it right, and Richie’s attorney promised an immediate appeal. A few days later, Richie filed his appeal to the Sixth Circuit. We will keep you posted on the result.

On May 15, 2013, in a case filed against Google by an entrepreneur selling dietary supplements and cosmetics (the “Plaintiff”), the German Federal Court of Justice in Karlsruhe (Bundesgerichtshof, the “Federal Court”) ruled that Google must remove any defamatory suggestions generated by its autocomplete search function. The Federal Court overturned an earlier ruling by the Cologne Higher Regional Court (Oberlandesgericht Köln) favoring Google.

The Plaintiff claimed that, when his name is entered into Google’s German-language search field, Google’s autocomplete search function offers defamatory suggestions linking him to “Scientology” and “fraud”. The Plaintiff claimed that he is not involved with Scientology and that he has never been accused of, or investigated for, fraud. He also observed that the search results generated by the autocomplete suggestions did not present or support any such connections. The Plaintiff sought an injunction against Google to block the autocomplete suggestions, as well as monetary damages for defamation of his personality rights and business reputation.

While the Cologne Higher Regional Court held that no intelligible meaning can be attached to such autocomplete suggestions, the Federal Court disagreed. The Federal Court found that the search suggestions offered by the autocomplete function implied a factual connection between the Plaintiff and the suggested terms, and stated that search engine operators are responsible for defamatory autocomplete suggestions once they become aware of or have been alerted to such violations of personality rights and reputation. Once they become aware or are alerted, search engine operators have the responsibility under German law to remove such autocomplete suggestions and prevent any further violations.

The case is currently being re-examined in the Cologne Higher Regional Court on remand to determine whether the autocomplete suggestions are in fact defamatory and infringe the personality rights and the honor of the Plaintiff. This means that the Cologne court will have to determine whether the suggestions at issue are factually correct, that is, whether there are facts justifying the association of the Plaintiff with the terms suggested by Google’s autocomplete function.

When a user enables Google’s autocomplete function, lists of search queries appear automatically as such user begins typing a search term. This function expedites the search process, helps to avoid spelling mistakes and allows the user to view popular searches featuring the same search term. If a user is signed into his or her Google Account and has Google’s “Web History” feature enabled, Google’s autocomplete suggestions will also incorporate the user’s own past searches. According to the information provided on Google’s Inside Search Help, these “useful” suggestions are “a reflection of the search activity of all web users and the content of web pages indexed by Google.” Autocomplete suggestions are generated by Google’s algorithms “based on a number of factors (including popularity of search terms) without any human intervention.” The queries presented may therefore include “silly or strange or surprising terms and phrases.” Google explains that, while it strives to “reflect the diversity of content on the web (some good, some objectionable),” it also applies “a narrow set of removal policies for pornography, violence, hate speech, and terms that are frequently used to find content that infringes copyrights.”
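To make the mechanism the Federal Court was describing a bit more concrete, here is a minimal sketch of popularity-ranked prefix suggestions. The query log, ranking logic and blocklist below are invented purely for illustration and bear no relation to Google’s actual data or algorithms; the sketch simply shows how suggestions can reflect aggregated past searches, and how an operator might suppress a specific suggestion after notification.

```python
from collections import Counter

# Hypothetical log of past queries; in practice this would be aggregated
# search activity across many users.
query_log = [
    "weather berlin", "weather paris", "weather berlin",
    "web hosting", "weather berlin", "web design",
]
query_counts = Counter(query_log)

# Suggestions an operator might suppress after being notified of a violation.
blocked_suggestions = {"a notified defamatory phrase"}

def autocomplete(prefix, limit=3):
    """Return up to `limit` past queries starting with `prefix`,
    ranked by how often they were searched, minus blocked entries."""
    candidates = [
        (count, query)
        for query, count in query_counts.items()
        if query.startswith(prefix.lower()) and query not in blocked_suggestions
    ]
    candidates.sort(key=lambda item: (-item[0], item[1]))
    return [query for _, query in candidates[:limit]]

print(autocomplete("we"))  # ['weather berlin', 'weather paris', 'web design']
```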

In its decision, the Federal Court ruled that the search suggestions offered by the autocomplete function suggest a factual connection between the Plaintiff and the terms “Scientology” and “fraud.” These terms have “negative connotations.” The Federal Court characterized Scientology as a “sect” that has a negative public perception due to unflattering media coverage. As for the term “fraud,” the Federal Court observed that, while the average Internet user may not be familiar with the precise meaning of this legal term, he or she is likely to associate the term with morally reprehensible conduct.

The Federal Court also noted that Google presents its autocomplete function to its users as a service that contains suggestions based on searches most often made by other users of Google’s search service. This creates the expectation that search results based on such autocomplete suggestions will be helpful to users because they reflect actual searches. Hence, the autocomplete suggestions at issue here may imply a factual connection or link between the Plaintiff and the two negatively perceived terms.

The Federal Court concluded that, if the associations with the search terms were wrong, the autocomplete function would constitute an infringement of the Plaintiff’s personality rights and reputation protected under Articles 823(1) and 1004 of the German Civil Code, in conjunction with Article 7(1) of the German Telemedia Act.

The Federal Court also held Google responsible for the function because the search word combinations at issue were generated by Google’s own technology. The Federal Court emphasized that search engine operators are not required to regularly police content or check whether the content generated by algorithms is free of violations. Such an obligation would render the operation of a search engine impracticable, if not impossible. While automated filters should be applied for specific areas (such as child pornography), search engines cannot prevent all possible violations of individuals’ rights via the autocomplete function. However, once an operator becomes aware of unlawful violations of such rights, then it becomes responsible for removing the objectionable terms from its automated search suggestions and for preventing such violations from occurring in the future. The Federal Court’s approach indicates that individuals now have a legal right under German law to notify Google of any defamatory autocomplete search suggestions that infringe their personality rights and demand the immediate removal of such suggestions.

Although the Federal Court’s ruling may be surprising to U.S. readers, we note that this ruling is consistent with earlier decisions in Italy (Tribunale Ordinario di Milano, March 24, 2011, 10847/2011, see link to the order (unofficial source)) and France (Cour de cassation – Première chambre civile, Arrêt n° 832 du 12 juillet 2012 (11-20.358)) holding search engine operators responsible for claims arising from search-related functionality.

Following concerns raised by bloggers, the UK government has clarified that small blogs will be exempt from the scope of the new UK press watchdog which is to be introduced as a result of the findings of the Leveson Inquiry.

In 2007, Clive Goodman, then royal editor of UK newspaper News of the World, and private investigator Glenn Mulcaire were convicted of the illegal interception of phone messages, and in early 2011, it was revealed that other News of the World reporters had also hacked phones.  Later in 2011, the UK government Department for Culture, Media and Sport (DCMS) commenced a public inquiry into the culture, practices and ethics of the British press, chaired by Lord Justice Leveson.  In November 2012, following a series of public hearings, Lord Justice Leveson’s inquiry published the Leveson Report, which made recommendations for a new independent regulator for the UK press.  As a result of the Leveson Report, the UK government has proposed that a new press watchdog be established by royal charter and backed by legislation; this new self-regulatory system will apply to all “relevant publishers.”

The Crime and Courts Act 2013

The relevant legislation, the Crime and Courts Act 2013 (the “Act”), became law on April 25, 2013.  (In terms of the royal charter itself, a draft royal charter put forward by the UK government and a rival draft put forward by some of the leading UK newspapers are due to be considered by the Privy Council in June 2013.)  Section 41 of the Act sets out the four criteria that a publication must meet to be a “relevant publisher.”  A relevant publisher must:

  • Publish “news-related” material (i.e., news, information or opinion about current affairs or gossip about celebrities, public figures or other persons in the news);
  • Publish in the course of a business;
  • Publish material written by different authors; and
  • Publish material subject to editorial controls.

For purposes of the Act, “publication” means on a website, in hard copy or by any other means.

The draft royal charter proposed by the UK government goes on to make clear that the proposed self-regulatory scheme will cover those who publish in the UK, where a person is deemed to publish in the UK if “the publication takes place in the United Kingdom or is targeted primarily at an audience in the United Kingdom”; the rival royal charter drafted by the press does not suggest any changes to these provisions.  Although there is no guidance in the draft royal charter as to the interpretation of “takes place in the United Kingdom,” it appears that the royal charter could cover foreign operators that publish in the UK, in addition to the UK press itself.  We note that the risk to such publishers that are based in the United States, at least with respect to defamation claims, may be limited by the SPEECH Act, which was signed into law in the U.S. in August 2010 as a response to so-called “libel tourism.”

(As a general matter, the SPEECH Act prohibits a U.S. federal or state court from recognizing or enforcing a foreign defamation judgment unless the foreign jurisdiction’s defamation law provided at least as much protection of freedom of speech and press as the U.S. Constitution, as well as the constitution and laws of the state in which the court is located.  The SPEECH Act further prohibits U.S. courts from recognizing or enforcing a foreign defamation judgment against the provider of an “interactive computer service,” as defined in Section 230 of the Communications Decency Act (CDA), unless such court determines that the judgment would be consistent with Section 230 if the relevant information had been provided in the U.S.)

A website operator is not considered to have editorial control over material published on its site if the operator did not post the material, even if the operator moderates statements published by others.  This is consistent with the approach taken in Section 5 of the UK’s new Defamation Act 2013, which provides that a website operator’s defence of not having posted defamatory material will not be defeated merely because the operator has moderated a statement posted by others.

“Micro-Businesses” and the Small Blog Exemption

Schedule 15 of the Crime and Courts Act 2013 states that a person who, in carrying out a “micro-business,” publishes news-related material which is either (i) contained in a multi-author blog (a blog that contains contributions from different authors) or (ii) published on an incidental basis that is relevant to the main activities of the business, will not be classified as a relevant publisher for purposes of the Act.  “Micro-businesses” are defined as those with fewer than 10 employees and an annual turnover of less than £2 million.

Note, however, that a publication that is exempt from the Act as a micro-business could still choose to join the regulatory system and receive the legal benefits otherwise only available to relevant publishers—benefits that include cost protection if a claimant chooses to sue in court instead of using the regulator’s arbitration scheme.

DCMS has created an infographic, available under the Creative Commons Attribution-NoDerivs 2.0 Generic (CC BY-ND 2.0) license, for use in determining whether or not a publication is a relevant publisher.

Other Exemptions

Schedule 15 also specifies other categories of publications which are exempt from the new system, even when the test for relevant publishers is met.  These exemptions cover special-interest titles, scientific or academic journals, broadcasters and book publishers, as well as any public body, charity or company that publishes news about its activities.
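To pull the various pieces of the analysis together, the sketch below encodes, in simplified form, the section 41 criteria and the Schedule 15 exemptions described above. The field names are invented for illustration, several statutory nuances are omitted, and the Act itself (and the DCMS infographic mentioned above) remain the authoritative references.

```python
from dataclasses import dataclass

@dataclass
class Publication:
    publishes_news_related_material: bool  # news, opinion on current affairs, celebrity gossip
    in_course_of_business: bool
    multiple_authors: bool
    editorial_control: bool
    employees: int
    annual_turnover_gbp: int
    multi_author_blog: bool
    news_merely_incidental: bool
    exempt_category: bool  # e.g., scientific journal, broadcaster, public body or charity news

def is_relevant_publisher(p):
    """Simplified decision logic for the 'relevant publisher' test."""
    meets_section_41_criteria = (
        p.publishes_news_related_material
        and p.in_course_of_business
        and p.multiple_authors
        and p.editorial_control
    )
    if not meets_section_41_criteria or p.exempt_category:
        return False
    # Schedule 15 micro-business / small blog exemption.
    is_micro_business = p.employees < 10 and p.annual_turnover_gbp < 2_000_000
    if is_micro_business and (p.multi_author_blog or p.news_merely_incidental):
        return False
    return True

small_blog = Publication(
    publishes_news_related_material=True, in_course_of_business=True,
    multiple_authors=True, editorial_control=True,
    employees=3, annual_turnover_gbp=50_000,
    multi_author_blog=True, news_merely_incidental=False,
    exempt_category=False,
)
print(is_relevant_publisher(small_blog))  # False: exempt as a micro-business multi-author blog
```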

In our May 30, 2012 post on the Socially Aware blog—“Should We All Be Getting the Twitter ‘Jitters’? Be Careful What You Say Online (Particularly in the United Kingdom)”—we considered a variety of UK laws being used to regulate the content of tweets and other online messages. Since that post, there has been a series of legal developments affecting the regulation of social media in the UK, in particular:

  • the Court of Appeal’s decision in Tamiz v. Google on the liability of website operators for defamatory comments posted by third parties;
  • the enactment of the Defamation Act 2013; and
  • the publication by the Crown Prosecution Service of interim guidelines on the prosecution of communications sent via social media.

The following is an overview of each of these important developments.

1. Tamiz v. Google

In February 2013, the Court of Appeal considered the potential liability of website operators in relation to defamatory comments posted by third parties.

Google Inc. (“Google”) operates the Blogger.com blogging platform (“Blogger”). In April 2011, the “London Muslim” blog used Blogger to publish an article about the claimant, Mr Tamiz. After a number of users anonymously posted comments below the article, Tamiz wrote to Google complaining that the comments were defamatory. Google did not remove the comments; however, it passed on the complaint to the blogger, who then removed the article and the related comments.

Meanwhile, Tamiz applied to the court for permission to serve libel proceedings on Google. Google contested the application, arguing that it was not a “publisher” of the allegedly defamatory statements, and in any event Google sought to rely on the available defences for a website operator under Section 1 of the Defamation Act 1996 and Regulation 19 of the E-Commerce Regulations 2002.

IN FOCUS: What is the Section 1 Defence?

Section 1 of the Defamation Act 1996 provides that a person has a defence to an action for defamation if such person: (i) is not the author, editor or publisher of the statement complained of; (ii) takes reasonable care in relation to its publication; and (iii) does not know, and has no reason to believe, that such person’s actions caused, or contributed to, the publication of a defamatory statement. For these purposes, “author” means the originator of the statement, “editor” means a person having editorial or equivalent responsibility for the content of the statement or the decision to publish it, and “publisher” means a person whose business is issuing material to the public, or a section of the public, and who issues material containing the statement in the course of that business.

Under Section 1, a person will not be considered an author, editor or publisher if such person is involved only, amongst other things:

  • in processing, making copies of, distributing or selling any electronic medium in or on which the statement is recorded;
  • as an operator or provider of a system or service by means of which a statement is made available in electronic form; or
  • as the operator of or provider of access to a communications system by means of which the statement is transmitted, or made available, by a person over whom he or she has no effective control.

Regulation 19 of the E-Commerce Regulations 2002 provides another defence for website operators, one that can be easier to establish than the Section 1 defence. Regulation 19 protects online service providers by providing that an entity which hosts information provided by a recipient of the online service will not have any liability arising from its storage of the information as long as it has no actual knowledge of any unlawful activity or information, and if, on obtaining actual knowledge of the unlawful information or activity, such entity acts expeditiously to remove or disable access to the material.

At first instance, the court found in favour of Google on the basis that Tamiz’s notification of Google concerning the offending material did not turn Google into a publisher of that material. Google’s role was purely passive and analogous to the owner of a wall which had been covered overnight with defamatory graffiti; although the owner could acquire scaffolding and whitewash the graffiti, that did not mean that the owner should be considered a publisher in the meantime. The court also stated that in any event, if Google had been a publisher of the comments, it could have relied on the Section 1 defence because it was not a commercial publisher and it had no effective control over people using Blogger. (Although there had been a delay between Tamiz’s letter to Google and Google’s notification to the blogger, the judge found that Google had still responded within a reasonable period of time.) The judge also stated that Google would have had a defence under Regulation 19, for purposes of which Google was the information society service provider and the blogger was the recipient. The judge emphasized the importance of the term “unlawful” in Regulation 19; in order for the material to be unlawful, the operator would need to have known something of the strengths and weaknesses of the available defences. Tamiz appealed.

The Court of Appeal agreed that Google was not a publisher before it was notified by Tamiz of the offending materials because it could not be said that Google either knew or ought reasonably to have known of the defamatory comments. However, the Court of Appeal departed from the earlier decision on the question of post-notification liability. Rather than a wall, the Court of Appeal likened Blogger to a large notice board, where Google had the ability to remove or block any material posted on the board that breached its rules. The court held that by failing to have the material removed until five weeks after notification, Google was arguably a publisher post-notification because, by continuing to host the blog in question, Google’s actions may have been held to contribute to the publication of the defamatory statement. Despite its ruling, ultimately the Court of Appeal rejected Tamiz’s appeal on the basis that any harm to Tamiz’s reputation was trivial—and as the appeal failed, the court did not consider the availability of the Regulation 19 defence.

The Tamiz v. Google decision potentially widens the circumstances in which website operators can be liable for defamatory content posted by others. The key lesson for social media platform operators under UK law is this: remove allegedly defamatory material as swiftly as possible following notification, in order to avoid any argument that you are a publisher of that material.

2. Defamation Act 2013

After a difficult passage through parliament, the long-awaited Defamation Act 2013 (the “Act”) received Royal Assent on April 25, 2013. The majority of its provisions will come into effect via statutory instrument later in 2013. The Act is intended to “overhaul the libel laws in England and Wales and bring them into the 21st century, creating a more balanced and fair law.” (The Act does not apply to Northern Ireland, as it was blocked by the Northern Ireland Assembly; further, only those sections which relate to scientific and academic privilege apply to Scotland, which has its own libel laws).

Serious Harm

Section 1 of the Act makes clear that, in order to be defamatory, a statement must cause or be likely to cause “serious harm” to a claimant’s reputation. Where a business is the claimant, it must show that the statement has caused or is likely to cause “serious financial loss” to the business in order for the “serious harm” requirement to be met. (This clarification was brought in as a last-minute amendment as a result of concerns that companies could use the fear of defamation claims to silence their critics.)

General Defences

Sections 2, 3 and 4 of the Act replace the previous common law defences of justification, fair comment and the Reynolds defence with new statutory defences of truth, honest opinion and publication on a matter of public interest. The new provisions broadly reflect the previous common law position, with the exception that the defence of honest opinion is now not required to be on a matter of public interest.

Section 5 Defence

For website operators, one of the key provisions of the Act is the new Section 5 defence. Although the Section 1 and Regulation 19 defences referred to above remain and are not abolished by the Act, Section 5 of the Act introduces a new additional defence specifically for website operators. Under Section 5, a website operator will have a defence to a defamation claim if it can show that it was not the entity that “posted the statement.” The defence will be defeated if the claimant can show the following:

  • it was not possible to identify the person who posted the statement (for these purposes, “identify” means that a claimant must have sufficient information to bring proceedings against the suspected defendant);
  • the claimant provided a notice of complaint in relation to the statement; and
  • the operator failed to respond to the notice of complaint in accordance with the applicable regulations.

The defence will also be defeated if the website operator acted with malice in connection with the posting of the statement concerned.

Importantly, given previous case law which had indicated that moderation of third-party content could result in an operator attracting liability as an editor or publisher, the Act makes clear that the Section 5 defence is not defeated solely by reason of the fact that the operator of the website moderates the statements posted on it by others.

Section 10 Defence

Section 10 of the Act states that a court will not have jurisdiction to hear any action for defamation brought against a person who was not the author, editor or publisher of the applicable material, unless the court is satisfied that it is not reasonably practicable for an action to be brought against the author, editor or publisher.

Privilege

In response to lobbying from the scientific and academic communities, Section 6 of the Act provides protection for scientists and academics publishing in peer-reviewed journals. Section 7 clarifies when the defences of absolute and qualified privilege will be available.

Single Publication

Previously, each new publication of the same defamatory material would give rise to a separate cause of action. This has been of particular concern where defamatory statements have been published online. Section 8 of the Act provides a “single publication” rule that makes clear that the limitation period for bringing a claim will run for one year from the date of first publication.

Overseas Publishers

Section 9 of the Act has been introduced to address the contentious issue of “libel tourism.” It applies to any defendant who is not domiciled in the UK, an EU member state, or a state which is a party to the Lugano Convention (i.e., Iceland, Norway, Denmark and Switzerland). In such circumstances, the courts will not have jurisdiction to hear such claim unless the court is satisfied that England and Wales is the most appropriate place in which to bring an action.

Removal of Statements

Section 13 of the Act provides that, where a court has given judgment in favour of a claimant in an action for defamation, the court may require (i) the operator of a website on which the statement is posted to remove the statement or (ii) any person who was not the author, editor or publisher of the defamatory statement to stop distributing, selling or exhibiting material containing the statement.

Although we will need to await publication of the proposed “notice and takedown” regulations envisaged by the Act and monitor how the Act is implemented in practice by the courts, the Act appears to introduce more certainty and protection for website operators in terms of liability for third-party content—particularly in light of Tamiz v. Google—and as such has been broadly welcomed.

3. Interim Guidelines on Prosecution of Social Media Communications

As we reported in May 2012, various UK laws are currently being used to regulate the content of tweets and other online messages, although there is no consistency as to which laws will be used to regulate which messages. The relevant laws include section 127 of the Communications Act 2003, section 1 of the Malicious Communications Act 1988, the Contempt of Court Act 1981 and the Serious Crime Act 2007.

In December 2012, in response to a spate of high profile cases prosecuted under these laws, the Crown Prosecution Service (CPS) published interim guidelines in relation to the prosecution of cases in England and Wales that involve communications sent via social media. A public consultation was launched alongside such guidelines; at the end of the consultation, the interim guidelines will be reviewed in light of the responses received, and final guidelines will be published.

The guidelines identify four categories of communications that may constitute criminal offences:

  1. credible threats of violence or damage to property;
  2. communications targeting specific individuals;
  3. breach of court orders; and
  4. communications which are grossly offensive, indecent, obscene or false.

In terms of category 4, the CPS acknowledged the huge number of communications made daily using social media and identified the desire to avoid unnecessary prosecutions which would have a chilling effect on free speech. A balance had to be struck between an individual’s right to freedom of expression under Article 10 of the European Convention on Human Rights and the protection of individuals. For these reasons, the CPS identified that a high threshold must be met before criminal proceedings are brought, and in many cases, a prosecution is unlikely to be in the public interest.

Category 4 communications fall under section 1 of the Malicious Communications Act 1988 and section 127 of the Communications Act 2003. These provisions refer to communications which are grossly offensive, indecent, obscene, menacing or false. The interim guidelines clarify that for a prosecution to be brought under such laws, a communication must be more than:

  • offensive, shocking or disturbing;
  • satirical, iconoclastic or rude; or
  • the expression of unpopular or unfashionable opinion, or banter or humour (even if distasteful to some or painful to those subjected to it).

Furthermore, a prosecution must be in the public interest and, where a suspect has taken swift action to remove the communication or has expressed genuine remorse, or other relevant parties (such as service providers) have taken similar swift action to remove the communication in question or otherwise block access to it, the guidance emphasizes that it may not be in the public interest to prosecute. The guidelines also stress the need to take into account the instantaneous nature of social media and the fact that the audience of such social media cannot be predicted, e.g., an individual may post something privately which is then repeated and re-published to a much wider audience than originally intended.

The interim guidelines have been broadly welcomed as reflecting a common sense approach, although some organizations concerned with freedom of expression, such as Justice and the Open Rights Group, have suggested in their consultation responses that the interim guidelines do not go far enough and have called for clarification of the underlying laws themselves. In terms of next steps, March 13, 2013 marked the deadline for consultation responses, and the CPS is expected to publish the results of the consultation later this year. Any updated guidelines will then follow.

Conclusion

The UK’s laws are slowly being updated to reflect the digital age, and these latest developments should help social media platform operators and other organizations to better understand how they can stay on the right side of the law. However, as always, organizations will need to keep a close watch on how the courts interpret the new laws to ensure that they continue to operate safely online. And taking a step back, it may be the case that these new developments will motivate the public to more carefully consider their social media etiquette and how they balance their right of freedom of expression with their social obligations of courtesy and respect for others. As one commentator has noted, “It’s not just the law that needs to catch up with social media, but manners too and manners can’t be legislated for.”

History is littered with examples of the law being slow to catch up with the use of technology.  Social media is no exception.  As our Socially Aware blog attests, countries around the world are having to think fast to apply legal norms to rapidly evolving communications technologies and practices.

Law enforcement authorities in the United Kingdom have not found the absence of a codified “social media law” to be a problem.  They have applied a “horses for courses” approach, and brought prosecutions or allowed claims under a range of different laws that were designed for other purposes.  Of course, this presents problems to users, developers and providers of social media platforms, who can be by no means certain which legal standards apply.

The use of Twitter and other forms of social media is ever increasing and the attraction is obvious—social media gives people a platform to share views and ideas. Online communities can bring like-minded people together to discuss their passions and interests; and, with an increasing number of celebrities harnessing social media for both personal and commercial purposes, Twitter often provides a peek into the lives of the rich and famous.

As an increased number of Twitter-related cases have hit the front pages and the UK courts, it is becoming increasingly clear that, in the United Kingdom at least, the authorities are working hard to re-purpose laws designed for other purposes to catch unwary and unlawful online posters.

It’s typically hard to argue that someone who maliciously trolls a Facebook page set up in the memory of a dead teenager or sends racist tweets should not be prosecuted for the hurt they cause.  But in other cases, it may not be so clear-cut—how does the law decide what is and what is not unlawful?  For example, would a tweet criticizing a religious belief be caught?  What about a tweet that criticizes someone’s weight or looks?  Where is the line drawn between our freedom of expression and the rights of others?  Aren’t people merely restating online what was previously (and still is) being discussed down the pub?

A range of UK laws is currently being used to regulate the content of tweets and other online messages.  At the moment, there is no particular consistency as to which laws will be used to regulate which messages.  It appears to depend on what evidence is available.  As a spokesman of the Crown Prosecution Service remarked, “Cases are prosecuted under different laws.  We review the evidence given to us and decide what is the most appropriate legislation to charge under.”

Communications Act 2003

In 2011, there were 2,000 prosecutions in the United Kingdom under section 127 of the Communications Act 2003. A recent string of high-profile cases has brought the Communications Act under the spotlight.

Under section 127(1)(a), a person is guilty of an offense if he sends “a message or other matter that is grossly offensive or of an indecent, obscene or menacing character” by means of a public electronic communications network.  The offense is punishable by up to six months’ imprisonment or a fine, or both.

So… what is “grossly offensive” or “indecent, obscene or menacing”?

In DPP v Collins [2006], proceedings were brought under section 127(1)(a) in relation to a number of offensive and racist phone calls made by Mr. Collins to the offices of his local Member of Parliament.  The House of Lords held that whether a message was grossly offensive was to be determined as a question of fact applying the standards of an open and just multiracial society, and taking into account the context of the words and all relevant circumstances.  The yardstick was the application of reasonably enlightened, but not perfectionist, contemporary standards to the particular message set in its particular context.  The test was whether a message was couched in terms that were liable to cause gross offense to those to whom it related.  The defendant had to have intended his words to be grossly offensive to those to whom they related, or to have been aware that they may be taken to be so.  The court made clear that an individual is entitled to express his views and to do so strongly, however, the question was whether he had used language that went beyond the pale of what was tolerable in society.  The court considered that at least some of the language used by the defendant could only have been chosen because it was highly abusive, insulting and pejorative.  The messages sent by the defendant were grossly offensive and would be found by a reasonable person to be so.

Proceedings are also being brought under section 127(1)(a) for racist messages.  In March 2012, Joshua Cryer, a student who sent racially abusive messages on Twitter to the ex-footballer Stan Collymore, was successfully prosecuted under section 127(1)(a), sentenced to two years’ community service and ordered to pay £150 costs. (However, interestingly, Liam Stacey, who was sentenced to 56 days’ imprisonment for 26 racially offensive tweets in relation to Bolton Wanderers footballer Fabrice Muamba, was charged with racially aggravated disorderly behavior with intent to cause harassment, alarm or distress under section 31 of the Crime and Disorder Act 1998, rather than under the Communications Act).

Similarly, religious abuse is also being caught under the Act.  In April 2012, Amy Graham, a former police cadet, was charged under the Communications Act for abusive anti-Muslim messages posted on Twitter.  She awaits sentencing.

These cases may appear relatively clear-cut, but there have been some other high-profile cases where the grounds for prosecution appear more questionable.

In April 2012, John Kerlen was found guilty of sending tweets that the court determined were both grossly offensive and menacing, for posting a picture of a Bexley councilor’s house and asking: “Which c**t lives in a house like this. Answers on a postcard to #bexleycouncil”; followed by a second tweet saying: “It’s silly posting a picture of a house on Twitter without an address, that will come later. Please feel free to post actual s**t.” He avoided a jail sentence; instead, he was sentenced to 80 hours of unpaid labor over 12 months, ordered to pay £620 in prosecution costs, and made subject to a five-year restraining order. Were these messages really menacing or grossly offensive? If he was going to be prosecuted, was the Communications Act the appropriate law, or should he have been prosecuted for incitement to cause criminal damage (if he was genuinely inciting others to post feces) or for harassment?

Even more controversial is the case that has become widely known as the “Twitter joke trial.” Paul Chambers was prosecuted under section 127(1)(a) for sending the following tweet: “Crap! Robin Hood airport is closed. You’ve got a week and a bit to get your s**t together otherwise I’m blowing the airport sky high!!” He appealed against his conviction to the Crown Court. In dismissing the appeal, the judge said his tweet was “menacing in its content and obviously so. It could not be more clear. Any ordinary person reading this would see it in that way and be alarmed.” This was despite the fact that Robin Hood Airport had classified the threat as non-credible on the basis that “there is no evidence at this stage to suggest that this is anything other than a foolish comment posted as a joke for only his close friends to see.” The case attracted a huge following among Twitter users, including high-profile users such as Stephen Fry and Al Murray. Following a February 2012 appeal to the High Court, it was announced on May 28 that the High Court judges who heard the case were unable to reach agreement and that the appeal would therefore have to be re-heard by a three-judge panel. Such a “split decision” is extremely unusual. No date has yet been set for the new hearing.

Malicious Communications Act 1988

Cases are also being brought under section 1 of the Malicious Communications Act 1988. Under that section, it is an offense to send an electronic communication that conveys a message which is indecent or grossly offensive, a threat, or information that is false and known or believed by the sender to be false, where one of the sender’s purposes is to cause distress or anxiety to the recipient.

In February 2012, the Sunderland fan, Peter Copeland, received a four-month suspended sentence after posting racist comments on Twitter aimed at Newcastle United fans. More recently, a 13th person was arrested by police investigating the alleged naming of a rape victim on social media sites after the Sheffield United striker, Ched Evans, was jailed for raping a 19-year-old woman. The individuals involved have been arrested for offenses under various laws, including the Malicious Communications Act.

What’s next?

So, what’s next for malicious communications?  Perhaps sexist remarks.

Earlier this month, Louise Mensch, a Member of Parliament, highlighted a variety of sexist comments that had been sent to her Twitter account. In response, Stuart Hyde, Chief Constable of Cumbria Police and the national e-crime prevention lead for the Association of Chief Police Officers, described the comments made to Mensch as “horrendous” and “sexist bigotry at its worst.” He referred to the offenses available to the authorities: “We are taking people to court. People do need to understand that while this is a social media it’s also a media with responsibilities and if you are going to act illegally using social media expect to face the full consequences of the law. Accepting that this is fairly new, even for policing … we do need to take action where necessary.” Whether any of these comments will lead to charges remains to be seen.

In another example of online abuse, Alexa Chung, the TV presenter, recently received nasty comments criticizing her weight in response to some Instagram photos she had posted on Twitter.  She removed the photos in response, but is it possible that these kinds of messages could be considered grossly offensive and therefore unlawful?

We will have to wait and see what other cases are brought under the Communications Act and Malicious Communications Act and what balance is ultimately struck between freedom of expression and protecting individuals from receiving malicious messages.  However, it is not just criminal laws relating to communications that could apply to online behavior.  Recent events have also led to broader legislation such as the Contempt of Court Act and the Serious Crime Act being considered in connection with messages posted on Twitter and other social media services.

Contempt of Court Act 1981

If someone posts information online that is banned from publication by the UK courts, they could be found in contempt of court under the Contempt of Court Act 1981 and liable for an unlimited fine or a two-year prison sentence. However, as we saw in 2011, the viability of injunctions in the age of social media is questionable.  When the footballer, Ryan Giggs, requested that Twitter hand over details about Twitter users who had revealed his identity in breach of the terms of a “super-injunction,” hundreds of Twitter users simply responded by naming him again.  No users have, to date, been prosecuted for their breach of the injunction.

In another high-profile case, in February 2012, the footballer, Joey Barton, was investigated for contempt of court after he tweeted comments regarding the trial of the footballer, John Terry. Under the Contempt of Court Act 1981, once someone has been arrested or charged, there should be no public comment about them that creates a substantial risk of seriously prejudicing the trial. In that case, it was found that Barton’s comments would not compromise the trial, and he was therefore not prosecuted.

Serious Crime Act 2007

Last summer’s riots in England led to Jordan Blackshaw and Perry Sutcliffe-Keenan being found guilty under sections 44 and 46 of the Serious Crime Act 2007 of encouraging others to riot. Blackshaw had created a Facebook event entitled “Smash d[o]wn in Northwich Town,” and Sutcliffe-Keenan had invited people to “riot” in Warrington. Both men were sentenced to four years’ imprisonment.

Defamation Act 1996

Of course, posting controversial messages online is not just a criminal issue.  Messages can also attract civil claims for defamation, under the Defamation Act 1996.

In March 2012, in the first UK ruling of its kind, the former New Zealand cricket captain, Chris Cairns, won a defamation claim against Lalit Modi, former Indian Premier League (IPL) chairman, over defamatory tweets. Mr. Modi had tweeted that Mr. Cairns had been removed from the list of players eligible and available to play in the IPL “due to his past record of match fixing.” Mr. Cairns was awarded damages of £90,000 (approximately £3,750 per word tweeted).

Conclusion

As in other countries, a whole host of UK laws designed in an age before social media, and in some cases long before the Internet as we know it, are now being used to regulate digital speech. Digital speech, by its very nature, leaves permanent and easily searchable records, which makes the job of the police and prosecutors much easier.

Accordingly, these types of cases are only going to increase, and it will be interesting to see where the UK courts decide to draw the line between freedom of expression and the law. One would hope that proportionality and common sense will prevail, so that freedom of expression protects ill-judged comments made in the heat of the moment and “close to the knuckle” jokes, while the victims of abusive and threatening trolls are rightly protected. In the meantime, users need to be very careful when tweeting and posting messages online, particularly in terms of the language they use. Tone can be extremely difficult to convey in 140 characters or fewer.

One has to feel sorry for the UK holidaymakers who were barred in January 2012 from entering the United States after tweeting that they were going to “destroy America” (despite making clear to the U.S. airport officials who detained them that “destroy” was simply British slang for “party”). No doubt they will think twice before clicking the Tweet button in the future.