In our May 30, 2012 post on the Socially Aware blog—“Should We All Be Getting the Twitter ‘Jitters’? Be Careful What You Say Online (Particularly in the United Kingdom)”—we considered a variety of UK laws being used to regulate the content of tweets and other online messages. Since that post, there has been a series of legal developments affecting the regulation of social media in the UK, in particular:
- the Court of Appeal ruling in the Tamiz v. Google case;
- the UK’s new Defamation Act 2013; and
- the Crown Prosecution Service’s publication of interim guidelines for social media prosecutions.
The following is an overview of each of these important developments.
1. Tamiz v. Google
In February 2013, the Court of Appeal considered the potential liability of website operators in relation to defamatory comments posted by third parties.
Google Inc. (“Google”) operates the Blogger.com blogging platform (“Blogger”). In April 2011, the “London Muslim” blog used Blogger to publish an article about the claimant, Mr Tamiz. After a number of users anonymously posted comments below the article, Tamiz wrote to Google complaining that the comments were defamatory. Google did not remove the comments; however, it passed on the complaint to the blogger, who then removed the article and the related comments.
Meanwhile, Tamiz applied to the court for permission to serve libel proceedings on Google. Google contested the application, arguing that it was not a “publisher” of the allegedly defamatory statements, and in any event Google sought to rely on the available defences for a website operator under Section 1 of the Defamation Act 1996 and Regulation 19 of the E-Commerce Regulations 2002.
IN FOCUS: What is the Section 1 Defence?
Section 1 of the Defamation Act 1996 provides that a person has a defence to an action for defamation if such person: (i) is not the author, editor or publisher of the statement complained of; (ii) takes reasonable care in relation to its publication; and (iii) does not know, and has no reason to believe, that such person’s actions caused, or contributed to, the publication of a defamatory statement. For these purposes, “author” means the originator of the statement, “editor” means a person having editorial or equivalent responsibility for the content of the statement or the decision to publish it, and “publisher” means a person whose business is issuing material to the public, or a section of the public, and who issues material containing the statement in the course of that business.
Under Section 1, a person will not be considered an author, editor or publisher if such person is involved only, amongst other things:
- in processing, making copies of, distributing or selling any electronic medium in or on which the statement is recorded;
- as an operator or provider of a system or service by means of which a statement is made available in electronic form; or
- as the operator of or provider of access to a communications system by means of which the statement is transmitted, or made available, by a person over whom he or she has no effective control.
Regulation 19 of the E-Commerce Regulations 2002 provides another defence for website operators—one that can be easier to establish than the Section 1 defence. Regulation 19 protects online service providers by providing that an entity which hosts information provided by a recipient of the online service will not have any liability arising from its storage of the information as long as it has no actual knowledge of any unlawful activity or information, and if, on obtaining actual knowledge of the unlawful information or activity, such entity acts expeditiously to remove or disable access to the material.
At first instance, the court found in favour of Google on the basis that Tamiz’s notification of Google concerning the offending material did not turn Google into a publisher of that material. Google’s role was purely passive and analogous to the owner of a wall which had been covered overnight with defamatory graffiti; although the owner could acquire scaffolding and whitewash the graffiti, that did not mean that the owner should be considered a publisher in the meantime. The court also stated that in any event, if Google had been a publisher of the comments, it could have relied on the Section 1 defence because it was not a commercial publisher and it had no effective control over people using Blogger. (Although there had been a delay between Tamiz’s letter to Google and Google’s notification to the blogger, the judge found that Google had still responded within a reasonable period of time.) The judge also stated that Google would have had a defence under Regulation 19, for purposes of which Google was the information society service provider and the blogger was the recipient. The judge emphasized the importance of the term “unlawful” in Regulation 19; in order for the material to be unlawful, the operator would need to have known something of the strengths and weaknesses of the available defences. Tamiz appealed.
The Court of Appeal agreed that Google was not a publisher before it was notified by Tamiz of the offending materials because it could not be said that Google either knew or ought reasonably to have known of the defamatory comments. However, the Court of Appeal departed from the earlier decision on the question of post-notification liability. Rather than a wall, the Court of Appeal likened Blogger to a large notice board, where Google had the ability to remove or block any material posted on the board that breached its rules. The court held that by failing to have the material removed until five weeks after notification, Google was arguably a publisher post-notification because, by continuing to host the blog in question, Google might be held to have contributed to the publication of the defamatory statement. Despite its ruling on this point, the Court of Appeal ultimately rejected Tamiz’s appeal on the basis that any harm to Tamiz’s reputation was trivial—and as the appeal failed, the court did not consider the availability of the Regulation 19 defence.
The Tamiz v. Google decision potentially widens the circumstances in which website operators can be liable for defamatory content posted by others. The key lesson for social media platform operators under UK law is this: remove allegedly defamatory material as swiftly as possible following notification, in order to avoid any argument that you are a publisher of that material.
2. Defamation Act 2013
After a difficult passage through parliament, the long-awaited Defamation Act 2013 (the “Act”) received Royal Assent on April 25, 2013. The majority of its provisions will come into effect via statutory instrument later in 2013. The Act is intended to “overhaul the libel laws in England and Wales and bring them into the 21st century, creating a more balanced and fair law.” (The Act does not apply to Northern Ireland, as it was blocked by the Northern Ireland Assembly; further, only those sections which relate to scientific and academic privilege apply to Scotland, which has its own libel laws.)
Section 1 of the Act makes clear that, in order to be defamatory, a statement must cause or be likely to cause “serious harm” to a claimant’s reputation. Where a business is the claimant, it must show that the statement has caused or is likely to cause “serious financial loss” to the business in order for the “serious harm” requirement to be met. (This clarification was brought in as a last-minute amendment as a result of concerns that companies could use the fear of defamation claims to silence their critics.)
Sections 2, 3 and 4 of the Act replace the previous common law defences of justification, fair comment and the Reynolds defence with new statutory defences of truth, honest opinion and publication on a matter of public interest. The new provisions broadly reflect the previous common law position, with the exception that the defence of honest opinion no longer requires the opinion to be on a matter of public interest.
Section 5 Defence
For website operators, one of the key provisions of the Act is the new Section 5 defence. Although the Section 1 and Regulation 19 defences referred to above remain and are not abolished by the Act, Section 5 of the Act introduces a new additional defence specifically for website operators. Under Section 5, a website operator will have a defence to a defamation claim if it can show that it was not the entity that “posted the statement.” The defence will be defeated if the claimant can show the following:
- it was not possible to identify the person who posted the statement (for these purposes, “identify” means that a claimant must have sufficient information to bring proceedings against the suspected defendant);
- the claimant provided a notice of complaint in relation to the statement; and
- the operator failed to respond to the notice of complaint in accordance with the applicable regulations.
The defence will also be defeated if the claimant shows that the website operator acted with malice in relation to the posting of the statement concerned.
Importantly, given previous case law which had indicated that moderation of third-party content could result in an operator attracting liability as an editor or publisher, the Act makes clear that the Section 5 defence is not defeated solely by reason of the fact that the operator of the website moderates the statements posted on it by others.
Section 10 Defence
Section 10 of the Act states that a court will not have jurisdiction to hear any action for defamation brought against a person who was not the author, editor or publisher of the applicable material, unless the court is satisfied that it is not reasonably practicable for an action to be brought against the author, editor or publisher.
Other Key Provisions
In response to lobbying from the scientific and academic communities, Section 6 of the Act provides protection for scientists and academics publishing in peer-reviewed journals. Section 7 clarifies when the defences of absolute and qualified privilege will be available.
Previously, each new publication of the same defamatory material would give rise to a separate cause of action. This has been of particular concern where defamatory statements have been published online. Section 8 of the Act provides a “single publication” rule that makes clear that the limitation period for bringing a claim will run for one year from the date of first publication.
Section 9 of the Act has been introduced to address the contentious issue of “libel tourism.” It applies to any defendant who is not domiciled in the UK, an EU member state, or a state which is a party to the Lugano Convention (i.e., Iceland, Norway, Denmark and Switzerland). In such circumstances, the courts will not have jurisdiction to hear the claim unless satisfied that England and Wales is the most appropriate place in which to bring an action.
Removal of Statements
Section 13 of the Act provides that, where a court has given judgment in favour of a claimant in an action for defamation, the court may require (i) the operator of a website on which the statement is posted to remove the statement or (ii) any person who was not the author, editor or publisher of the defamatory statement to stop distributing, selling or exhibiting material containing the statement.
Although we will need to await publication of the proposed “notice and takedown” regulations envisaged by the Act and monitor how the Act is implemented in practice by the courts, the Act appears to introduce more certainty and protection for website operators in terms of liability for third-party content—particularly in light of Tamiz v. Google—and as such has been broadly welcomed.
3. Interim Guidelines on Prosecution of Social Media Communications
As we reported in May 2012, various UK laws are currently being used to regulate the content of tweets and other online messages, although there is no consistency as to which laws will be used to regulate which messages. The relevant laws include section 127 of the Communications Act 2003, section 1 of the Malicious Communications Act 1988, the Contempt of Court Act 1981 and the Serious Crime Act 2007.
In December 2012, in response to a spate of high profile cases prosecuted under these laws, the Crown Prosecution Service (CPS) published interim guidelines in relation to the prosecution of cases in England and Wales that involve communications sent via social media. A public consultation was launched alongside such guidelines; at the end of the consultation, the interim guidelines will be reviewed in light of the responses received, and final guidelines will be published.
The guidelines identify four categories of communications that may constitute criminal offences:
1. credible threats of violence or damage to property;
2. communications targeting specific individuals;
3. breach of court orders; and
4. communications which are grossly offensive, indecent, obscene or false.
In terms of category 4, the CPS acknowledged the huge number of communications made daily using social media and identified the desire to avoid unnecessary prosecutions which would have a chilling effect on free speech. A balance had to be struck between an individual’s right to freedom of expression under Article 10 of the European Convention on Human Rights and the protection of individuals. For these reasons, the CPS identified that a high threshold must be met before criminal proceedings are brought, and in many cases, a prosecution is unlikely to be in the public interest.
Category 4 communications fall under section 1 of the Malicious Communications Act 1988 and section 127 of the Communications Act 2003. These provisions refer to communications which are grossly offensive, indecent, obscene, menacing or false. The interim guidelines clarify that for a prosecution to be brought under such laws, a communication must be more than:
- offensive, shocking or disturbing;
- satirical, iconoclastic or rude; or
- the expression of unpopular or unfashionable opinion, or banter or humour (even if distasteful to some or painful to those subjected to it).
Furthermore, a prosecution must be in the public interest. Where a suspect has taken swift action to remove the communication or has expressed genuine remorse, or where other relevant parties (such as service providers) have taken similar swift action to remove the communication in question or otherwise block access to it, the guidance emphasizes that it may not be in the public interest to prosecute. The guidelines also stress the need to take into account the instantaneous nature of social media and the fact that the audience of such social media cannot be predicted; for example, an individual may post something privately which is then repeated and re-published to a much wider audience than originally intended.
The interim guidelines have been broadly welcomed as reflecting a common sense approach, although some organizations concerned with freedom of expression, such as JUSTICE and the Open Rights Group, have suggested in their consultation responses that the interim guidelines do not go far enough and have called for clarification of the underlying laws themselves. In terms of next steps, March 13, 2013 marked the deadline for consultation responses, and the CPS is expected to publish the results of the consultation later this year. Any updated guidelines will then follow.
The UK’s laws are slowly being updated to reflect the digital age, and these latest developments should help social media platform operators and other organizations to better understand how they can stay on the right side of the law. However, as always, organizations will need to keep a close watch on how the courts interpret the new laws to ensure that they continue to operate safely online. And taking a step back, it may be the case that these new developments will motivate the public to more carefully consider their social media etiquette and how they balance their right of freedom of expression with their social obligations of courtesy and respect for others. As one commentator has noted, “It’s not just the law that needs to catch up with social media, but manners too and manners can’t be legislated for.”