The European Commission has announced draft laws that would give consumers new remedies where digital content supplied online is defective or not as described by the seller.

On Dec. 9, 2015, the European Commission proposed two new directives on the supply of digital content and the online sale of goods. In doing so, the Commission is making progress towards one of the main goals in the Digital Single Market Strategy (the “DSM Strategy”) announced in May 2015: to strengthen the European digital economy and increase consumer confidence in trading across EU Member States.

This is not the first time that the Commission has tried to align consumer laws across the EU; its last attempt, the Common European Sales Law, faltered earlier this year. But the Commission has now proposed two new directives, dealing with contracts both for the supply of digital content and for other online sales (the “Proposed Directives”).

National parliaments can raise objections to the Proposed Directives within eight weeks, on the grounds of non-compliance with the subsidiarity principle—that is, by arguing that regulation of digital content and online sales is more effectively dealt with at a national level.

Objectives

Part of the issue with previous EU legislative initiatives in this area is that “harmonized” has really meant “the same as long as a country doesn’t want to do anything different.” This time, the Proposed Directives have been drafted as so-called “maximum harmonization measures,” which would preclude Member States from providing any greater or lesser protection on the matters falling within their scope. The Commission hopes that this consistent approach across Member States will encourage consumers to enter into transactions across EU borders, while also allowing traders to simplify their legal documentation by using a single set of terms and conditions for all customers within the EU.

The scope and key provisions of each of the Proposed Directives, as well as their effect on English law, are summarized after the jump.


  • Bad chords. A European musician’s attempt to stop a negative concert review from continuing to appear in Internet search results is raising questions about whether the EU’s “right to be forgotten” ruling could prevent the Internet from being a source of objective truth.  Established in May by the European Court of Justice, the right to be forgotten ruling requires search engines like Google to remove “inadequate, irrelevant or… excessive” links that appear in the results of searches against an individual’s name. Pursuant to the ruling, European pianist Dejan Lazic asked the Washington Post to remove a tepid review of one of his Kennedy Center concerts from Google search results. Lazic’s request was denied because it was posed to the wrong party—the right to be forgotten ruling applies to Internet search engines, not publishers—but it nevertheless serves as an example of a request that could be granted under the right to be forgotten rule, and that, argues Washington Post Internet culture columnist Caitlin Dewey, is “terrifying.” Dewey writes that such a result “torpedoes the very foundation of arts criticism… essentially invalidates the primary function of journalism,” and “undermines the greatest power of the Web as a record and a clearinghouse for our vast intellectual output.”
  • A tall tale. The FBI has admitted to fabricating an Associated Press story and sending its link to the MySpace page of a high-school-bombing-threat suspect in 2007 to lure him into downloading malware that revealed his location and Internet Protocol address. Agents arrested the suspect, a 15-year-old Seattle-area boy, within days of learning his whereabouts as the result of the malware, which downloaded automatically when the suspect clicked the link to a fabricated story bearing the headline “Technology savvy student holds Timberline High School hostage.” Civil libertarians are concerned about the FBI’s impersonation of news organizations to send malware to suspects, and an AP spokesman said the organization finds it “unacceptable that the FBI misappropriated the name of The Associated Press and published a false story attributed to AP.”
  • Suspicious expulsions. An Alabama school district recently expelled more than a dozen students after a review of their social media accounts revealed signs of gang involvement or gun possession. The investigation into the students’ social media accounts was conducted by a former FBI agent whom the school district had hired for $157,000 as a security consultant. Since 12 of the 14 expelled students were African-American, a county commissioner charged that the investigation was “effectively targeting or profiling black children in terms of behavior and behavioral issues.”

Earlier this year, the French consumer association UFC-Que Choisir initiated proceedings before the Paris District Court against Google Inc., Facebook Inc. and Twitter Inc., accusing these companies of using confusing and unlawful online privacy policies and terms of use agreements in the French versions of their social media platforms; in particular, the consumer association argued that these online policies and agreements provide the companies with too much leeway to collect and share user data.

In a press release published (in French) on its website, UFC-Que Choisir explains that the three Internet companies ignored a letter that the group had delivered to them in June 2013, containing recommendations on how to modify their online policies and agreements. The group sought to press the companies to modify their practices as part of a consumer campaign entitled “Je garde la main sur mes données” (or, in English, “I keep my hand on my data”).

According to the press release, the companies’ refusal to address UFC-Que Choisir’s concerns prompted it to initiate court proceedings. The group has requested that the court suppress or modify a “myriad of contentious clauses,” and alleged that one company had included 180 such “contentious clauses” in its user agreement.

The group has also invited French consumers to sign a petition calling for rapid adoption of the EU Data Protection Reform, which would replace the current Directive on data protection with a Regulation having direct effect in the 28 EU Member States. UFC-Que Choisir published two possibly NSFW videos depicting a man and a woman being stripped bare while posting to their Google Plus, Facebook and Twitter accounts. A message associated with each video states: “Sur les réseaux sociaux, vous êtes vite à poil” (or, in English, “On social networks, you are quickly stripped bare”).

The European Court of Justice (ECJ) has issued a surprising decision against Google, one that has significant implications for global companies.

On May 13, 2014, the ECJ issued a ruling that did not follow the rationale or the conclusions of its Advocate General, but instead sided with the Spanish data protection authority (DPA) and found that:

  • Individuals have a right to request that a search engine provider remove links to content that was legitimately published on third-party websites from the results of searches against their name, where the personal information published is inadequate, irrelevant or no longer relevant;
  • Google’s search function resulted in Google acting as a data controller within the meaning of the Data Protection Directive 95/46, despite the fact that Google did not control the data appearing on the webpages of third-party publishers;
  • Spanish law applied because Google Inc. processed data that was closely related to Google Spain’s selling of advertising space, even though Google Spain did not process any of the data. In so holding, the ECJ departed from earlier decisions, reasoning that the services were targeted at the Spanish market and that such a broad application was required for the effectiveness of the Directive.

The ruling will have significant implications for search engines, social media operators and businesses with operations in Europe generally. While the much-debated “right to be forgotten” is strengthened, the decision may open the floodgates for people living in the 28 countries in the EU to demand that Google and other search engine operators remove links from search results. The problem is that the ECJ identifies a broad range of data that may have to be erased: not only incorrect or unlawful data, but also data that is “inadequate, irrelevant, or no longer relevant”, or “excessive or not kept up to date”, in relation to the purposes for which it was processed. It is left to the companies themselves to decide when data falls into these categories.

In that context, the ruling will likely create new costs for companies and could generate thousands of individual complaints. What is more, companies operating search engines for users in the EU will have the difficult task of assessing, for each complaint they receive, whether the rights of the individual prevail over the interest of the public. Internet search engines with operations in the EU will have to handle requests from individuals seeking the deletion of search results that link to pages containing their personal data.

That said, the scope of the ruling is limited to name searches. While search engines will have to de-activate name-based search results, the same data can remain available in relation to other keyword searches. The ECJ did not impose new requirements relating to the content of webpages themselves, in an effort to preserve freedom of expression and, more particularly, press freedom. But the ruling will still result in a great deal of legally published information being available only to a limited audience.

Below we set out the facts of the case and the most significant implications of the decision, and address its possible consequences for all companies operating search engines.

Cisco estimates that 25 billion devices will be connected in the Internet of Things (IoT) by 2015, and 50 billion by 2020. Analyst firm IDC makes an even bolder prediction: 212 billion connected devices by 2020. This massive increase in connectedness will drive a wave of innovation and could generate up to $19 trillion in savings over the next decade, according to Cisco’s estimates. 

In the first part of this two-part post, we examined the development of, and practical challenges facing businesses implementing, IoT solutions. In this second part, we will look at the likely legal and regulatory issues associated with the IoT, especially from an EU and U.S. perspective.

The Issues

In the new world of the IoT, the problem is, in many cases, the old problem squared. Contractually, the explosion of devices and platforms will create the need for a web of inter-dependent providers and alliances, with consequent issues such as liability, intellectual property ownership and compliance with consumer protection regulations.

Introduction

This year, as the world celebrates the 25th anniversary of the World Wide Web, the Web’s founder, Tim Berners-Lee, has called for a fundamental reappraisal of copyright law.  By coincidence, this year we also anticipate a rash of UK and European legislative developments and court decisions centring on copyright and its application to the Web.

In our “Copyright: Europe Explores its Boundaries” series of posts—aimed at copyright owners, technology developers and digital philosophers alike—we will examine how UK and European copyright is coping with the Web and the novel social and business practices that it enables.

Peer-to-peer (“P2P”) business models based on the Internet and technology platforms have become increasingly innovative.  As such models have proliferated, they have frequently clashed with regulators and with established market competitors using existing laws as a defensive tactic.  The legal battles that result illustrate the need for proactive planning and consideration of the likely legal risks during the early structuring phase of any new venture.

Collaborative consumption, or the “sharing economy” as it is also known, refers to the business model that involves individuals sharing their resources with strangers, often enabled by a third-party platform.  In recent years, there has been an explosion of these P2P businesses.  The more established businesses include online marketplaces for goods and services (eBay, Taskrabbit) and platforms that provide P2P accommodation (Airbnb, One Fine Stay), social lending (Zopa), crowdfunding (Kickstarter) and car sharing (BlaBlaCar, Lyft, Uber).  But these days, new sharing businesses are appearing at an unprecedented rate; you can now find a sharing platform for almost anything.  People are sharing meals, dog kennels, boats, driveways, bicycles, musical instruments – even excess capacity in their rucksacks (cyclists becoming couriers).

The Internet and, more specifically, social media platforms and mobile technology have brought about this economic and cultural shift.  Some commentators are almost evangelical about the potential disruption to traditional economic models that the sharing economy provides, and it’s clear that collaborative consumption offers a compelling proposition for many individuals.  It helps people to make money from under-utilized assets and tap into global markets; it gives people the benefits of ownership but with reduced costs and less environmental impact; it helps to empower the under-employed; and it brings strangers together and offers potentially unique experiences.  There’s clearly both supply and demand, and a very happy set of users for a great many of these new P2P services.

However, not everyone is in favor of the rapid growth of this new business model.  Naturally, most of the opposition comes from incumbent businesses or entrenched interests that are threatened by the new competition, or from those that have genuine concerns about the risks posed by unregulated entrants to the market.  Authorities and traditional businesses are challenging sharing economy businesses in a variety of ways, including arguing that the new businesses violate applicable laws, with accommodation providers and car-sharing companies appearing to take the brunt of the opposition to date.

Bed Surfing

One of the most successful P2P marketplaces, San Francisco-founded Airbnb is a platform that enables individuals to rent out part or all of their house or apartment.  It currently operates in 192 countries and 40,000 cities.  Other accommodation-focused P2P models include One Fine Stay, a London-based platform that allows home owners to rent out empty homes while they are out of town.

Companies such as these have faced opposition from hoteliers and local regulators who complain that home owners using these platforms have an unfair advantage by not being subject to the same laws as a traditional hotel.  City authorities have also cited zoning regulations and other rules governing short-term rentals as obstacles to this burgeoning market.  It has been reported that some residents have been served with eviction notices by landlords for renting out their apartments in violation of their leases, and some homeowner and neighborhood associations have adopted rules to restrict this type of short-term rental.

These issues are not unique to the United States.  Commentators have reported similar resistance with mixed responses from local or municipal governments in cities such as Barcelona, Berlin and Montreal.

It’s not particularly surprising that opposition to P2P accommodation platforms would come from incumbent operators; after all, that’s typical of most new disruptive business models in the early stages before mainstream acceptance.  But the approaches taken by P2P opponents illustrate that most regulations were originally devised to apply to full-time commercial providers of goods and services, and apply less well to casual or occasional providers.

This has consequences for regulators, who are likely to have to apply smarter regulatory techniques to affected markets.  Amsterdam is piloting such an approach to accommodation-sharing platforms, having recognized the benefits that a suitably managed approach to P2P platforms could have for tourism and the local economy.

Car Sharing

Companies that enable car-sharing services have also faced a barrage of opposition, both from traditional taxi companies and local authorities.  In many U.S. cities, operators such as Lyft and Uber have faced bans, fines and court battles.

It was reported in August 2013 that eleven Uber drivers and one Lyft driver had been arrested at San Francisco airport on trespassing charges.  In addition, during summer 2013, the Washington, D.C. Taxicab Commission proposed new restrictions that would prevent Uber and its rivals from operating there.  Further, in November 2012, the California Public Utilities Commission (“CPUC”) issued $20,000 fines against Lyft, SideCar and Uber for “operating as passenger carriers without evidence of public liability and property damage insurance coverage” and “engaging employee-drivers without evidence of workers’ compensation insurance.”

All three firms appealed these fines, arguing that outdated regulations should not be applied to peer-rental services, and the CPUC allowed the companies to keep operating while it drafted new regulations, which it eventually proposed in July 2013.  In August 2013, the Federal Trade Commission intervened, writing to these regulators to argue that the new rules were too restrictive and could stifle innovation.  The CPUC rules (approved on September 19, 2013) require operators to be licensed and to meet certain criteria relating to background checks, training and insurance.  The ridesharing companies will be allowed to operate legally under the jurisdiction of the CPUC, and will now fall under a newly created category called “Transportation Network Company.”

Some operators have structured their businesses in an attempt to avoid at least some of the regulatory obstacles.  For example, Lyft does not set a price for a given journey; instead, riders are prompted to give drivers a voluntary “donation.”  Lyft receives an administrative fee in respect of each donation.  In addition, in its terms, Lyft states that it does not provide transportation services and is not a transportation carrier; rather, it is simply a platform that brings riders and drivers together.  In BlaBlaCar’s model, drivers cannot make a profit but can only offset their actual costs, which helps to ensure that they are not treated as traditional taxi drivers and thereby helps them avoid the regulation that applies to the provision of taxi services.

Traditional Players Embracing the New Model

Interestingly, not all traditional players are taking a completely defensive approach.  From recent investment decisions, it appears that some companies appreciate that it could make sense for them to work closely with their upstart rivals, rather than oppose them.  For example, in 2011, GM Ventures invested $13 million in RelayRides and, in January 2013, Avis acquired Zipcar, giving Avis a stake in Wheelz, a P2P car rental firm in which Zipcar has invested $14 million.

The incentive for incumbent operators to embrace P2P models will likely vary by sector.  Perhaps it’s no surprise that this is best illustrated in the car rental industry, where there already exists a financial “pull” and a regulatory “push” towards greener and more sustainable models of service provision.

Legal and Regulatory Issues

Lawmakers and businesses around the world are currently grappling with how to interpret existing laws in the context of P2P sharing economy business models and considering whether new regulation is required.  For example, the European Union is preparing an opinion on collaborative consumption in the light of the growth of P2P businesses there.  One hopes that European policy makers focus more on incentivizing public investment in P2P projects via grants or subsidies than on prescriptive regulation of the sector.

Importantly, however, it’s a particular feature of the market for P2P platforms that much of the regulatory activity tends to be at the municipal or local level, rather than national.  This tends to make for a less cohesive regulatory picture.

In the meantime, anyone launching a social economy business will need to consider whether and how various thorny legal and regulatory issues will affect both the platform operator and the users of that platform.  Often, this may mean tailoring services to anticipate particular legal or regulatory concerns.

  • Consumer protection.  Operators will need to consider the extent to which their platforms comply with applicable consumer protection laws, for example when drafting appropriate terms of use for the platform.
  • Privacy.  Operators will need to address issues of compliance with applicable privacy laws in terms of the processing of the personal data of both users and users’ customers, and prepare appropriate privacy policies and cookie notices.
  • Employment.  Where services are being provided, the operator will need to consider compliance with any applicable employment or recruitment laws, e.g., rules governing employment agencies, worker safety and security, and minimum wage laws.
  • Discrimination.  Operators will need to consider potential discrimination issues, e.g., what are the consequences if a user refuses to loan their car or provide their spare room on discriminatory grounds, for example due to a person’s race or sexuality?  Could the operator attract liability under anti-discrimination laws?
  • Laws relating to payments.  One key to success for a P2P business model is to implement a reliable and effective payment model.  But most countries impose restrictions on certain types of payment structures in order to protect consumers’ money.  Where payments are made via the P2P platform rather than directly between users, operators will need to address compliance with applicable payment rules, and potentially deal with local payment services laws.  Fundamentally, it needs to be clear whose obligation it is to comply with these laws.
  • Taxation.  Operators will need to consider taxation issues that may apply – both in terms of the operator and its users.  Some sectors of the economy – hotels, for example – are subject to special tax rates by many cities or tax authorities.  In such cases, the relevant authorities can be expected to examine closely – and potentially challenge, or assess municipal, state or local taxes against – P2P models that provide equivalent services.  In some places, collection of such taxes can be a joint and several responsibility of the platform operator and its users.
  • Safety and security.  When strangers are being brought together via a platform, security issues will need to be addressed.  Most social economy businesses rely on ratings and reciprocal reviews to build accountability and trust among users.  However, some platforms also mitigate risks by carrying out background and/or credit checks on users.  Airbnb also takes a practical approach, employing a full-time Trust & Safety team to provide extra assurance for its users.
  • Liability.  One of the key questions to be considered is who is legally liable if something goes wrong.  Could the platform attract liability if a hired car crashes or a host’s apartment is damaged?
  • Insurance.  Responsibility for insurance is also a key consideration.  The issue of insurance for car-sharing ventures made headlines in April 2013 when it was reported that a Boston resident had crashed a car that he had borrowed via RelayRides.  The driver was killed in the collision and four other people were seriously injured.  RelayRides’ liability insurance was capped at $1 million, but the claims threaten to exceed that amount.  Given these types of risks, some insurance companies are refusing to provide coverage if policyholders engage in P2P sharing.  Three U.S. states (California, Oregon and Washington) have passed laws relating to car sharing, placing liability squarely on the shoulders of the car-sharing service and its insurers.
  • Industry-specific law and regulation.  Companies will need to consider issues of compliance with any sector-specific laws, whether existing laws or new regulations that are specifically introduced to deal with their business model (such as crowd-funding rules under the JOBS Act in the United States, and P2P lending rules to be introduced shortly in the United Kingdom).  As noted above, some social economy businesses have already experienced legal challenges from regulators, and as collaborative consumption becomes even more widely adopted, regulatory scrutiny is likely to increase.  Accordingly, rather than resist regulation, the best approach for sharing economy businesses may be to create trade associations for their sector and/or engage early on with lawmakers and regulators in order to design appropriate, smarter policies and frameworks for their industry.

Conclusion

Erasmus said, “There is no joy in possession without sharing.”  Thanks to collaborative consumption, millions of strangers are now experiencing both the joy – and the financial benefits – of sharing their resources.  However, the legal challenges will need to be carefully navigated in order for the sharing economy to move from being merely disruptive to becoming a firmly established business model.

In our May 30, 2012 post on the Socially Aware blog—“Should We All Be Getting the Twitter “Jitters”? Be Careful What You Say Online (Particularly in the United Kingdom)”—we considered a variety of UK laws being used to regulate the content of tweets and other online messages. Since that post, there has been a series of legal developments affecting the regulation of social media in the UK, in particular:

  • the Court of Appeal’s decision in Tamiz v. Google on the liability of website operators for defamatory comments posted by third parties;
  • the enactment of the Defamation Act 2013; and
  • the publication by the Crown Prosecution Service of interim guidelines on the prosecution of communications sent via social media.

The following is an overview of each of these important developments.

1. Tamiz v. Google

In February 2013, the Court of Appeal considered the potential liability of website operators in relation to defamatory comments posted by third parties.

Google Inc. (“Google”) operates the Blogger.com blogging platform (“Blogger”). In April 2011, the “London Muslim” blog used Blogger to publish an article about the claimant, Mr Tamiz. After a number of users anonymously posted comments below the article, Tamiz wrote to Google complaining that the comments were defamatory. Google did not itself remove the comments; it did, however, pass on the complaint to the blogger, who then removed the article and the related comments.

Meanwhile, Tamiz applied to the court for permission to serve libel proceedings on Google. Google contested the application, arguing that it was not a “publisher” of the allegedly defamatory statements, and in any event Google sought to rely on the available defences for a website operator under Section 1 of the Defamation Act 1996 and Regulation 19 of the E-Commerce Regulations 2002.

IN FOCUS: What is the Section 1 Defence?

Section 1 of the Defamation Act 1996 provides that a person has a defence to an action for defamation if such person: (i) is not the author, editor or publisher of the statement complained of; (ii) takes reasonable care in relation to its publication; and (iii) does not know, and has no reason to believe, that such person’s actions caused, or contributed to, the publication of a defamatory statement. For these purposes, “author” means the originator of the statement, “editor” means a person having editorial or equivalent responsibility for the content of the statement or the decision to publish it, and “publisher” means a person whose business is issuing material to the public, or a section of the public, and who issues material containing the statement in the course of that business.

Under Section 1, a person will not be considered an author, editor or publisher if such person is involved only, amongst other things:

  • in processing, making copies of, distributing or selling any electronic medium in or on which the statement is recorded;
  • as an operator or provider of a system or service by means of which a statement is made available in electronic form; or
  • as the operator of or provider of access to a communications system by means of which the statement is transmitted, or made available, by a person over whom he or she has no effective control.

Regulation 19 of the E-Commerce Regulations 2002 provides another defence for website operators, one that can be easier to establish than the Section 1 defence. Regulation 19 protects online service providers by providing that an entity which hosts information provided by a recipient of the online service will not have any liability arising from its storage of the information as long as it has no actual knowledge of any unlawful activity or information, and if, on obtaining actual knowledge of the unlawful information or activity, such entity acts expeditiously to remove or disable access to the material.

At first instance, the court found in favour of Google on the basis that Tamiz’s notification of Google concerning the offending material did not turn Google into a publisher of that material. Google’s role was purely passive and analogous to the owner of a wall which had been covered overnight with defamatory graffiti; although the owner could acquire scaffolding and whitewash the graffiti, that did not mean that the owner should be considered a publisher in the meantime. The court also stated that in any event, if Google had been a publisher of the comments, it could have relied on the Section 1 defence because it was not a commercial publisher and it had no effective control over people using Blogger. (Although there had been a delay between Tamiz’s letter to Google and Google’s notification to the blogger, the judge found that Google had still responded within a reasonable period of time.) The judge also stated that Google would have had a defence under Regulation 19, for purposes of which Google was the information society service provider and the blogger was the recipient. The judge emphasized the importance of the term “unlawful” in Regulation 19; in order for the material to be unlawful, the operator would need to have known something of the strengths and weaknesses of the available defences. Tamiz appealed.

The Court of Appeal agreed that Google was not a publisher before it was notified by Tamiz of the offending materials, because it could not be said that Google either knew or ought reasonably to have known of the defamatory comments. However, the Court of Appeal departed from the earlier decision on the question of post-notification liability. Rather than a wall, the Court of Appeal likened Blogger to a large notice board, where Google had the ability to remove or block any material posted on the board that breached its rules. The court held that, by failing to have the material removed until five weeks after notification, Google was arguably a publisher post-notification: by continuing to host the blog in question, Google might be held to have contributed to the publication of the defamatory statement. Despite this, the Court of Appeal ultimately rejected Tamiz’s appeal on the basis that any harm to Tamiz’s reputation was trivial—and as the appeal failed, the court did not consider the availability of the Regulation 19 defence.

The Tamiz v. Google decision potentially widens the circumstances in which website operators can be liable for defamatory content posted by others. The key lesson for social media platform operators under UK law is this: remove allegedly defamatory material as swiftly as possible following notification, in order to avoid any argument that you are a publisher of that material.

2. Defamation Act 2013

After a difficult passage through Parliament, the long-awaited Defamation Act 2013 (the “Act”) received Royal Assent on April 25, 2013. The majority of its provisions will come into effect via statutory instrument later in 2013. The Act is intended to “overhaul the libel laws in England and Wales and bring them into the 21st century, creating a more balanced and fair law.” (The Act does not apply to Northern Ireland, where it was blocked by the Northern Ireland Assembly; further, only those sections which relate to scientific and academic privilege apply to Scotland, which has its own libel laws.)

Serious Harm

Section 1 of the Act makes clear that, in order to be defamatory, a statement must cause or be likely to cause “serious harm” to a claimant’s reputation. Where a business is the claimant, it must show that the statement has caused or is likely to cause “serious financial loss” to the business in order for the “serious harm” requirement to be met. (This clarification was brought in as a last-minute amendment as a result of concerns that companies could use the fear of defamation claims to silence their critics.)

General Defences

Sections 2, 3 and 4 of the Act replace the previous common law defences of justification, fair comment and the Reynolds defence with new statutory defences of truth, honest opinion and publication on a matter of public interest. The new provisions broadly reflect the previous common law position, with the exception that the defence of honest opinion is now not required to be on a matter of public interest.

Section 5 Defence

For website operators, one of the key provisions of the Act is the new Section 5 defence. Although the Section 1 and Regulation 19 defences referred to above remain and are not abolished by the Act, Section 5 of the Act introduces a new additional defence specifically for website operators. Under Section 5, a website operator will have a defence to a defamation claim if it can show that it was not the entity that “posted the statement.” The defence will be defeated if the claimant can show the following:

  • it was not possible to identify the person who posted the statement (for these purposes, “identify” means that a claimant must have sufficient information to bring proceedings against the suspected defendant);
  • the claimant provided a notice of complaint in relation to the statement; and
  • the operator failed to respond to the notice of complaint in accordance with the applicable regulations.

The defence will also be defeated if the claimant can show that the website operator acted with malice in relation to the posting of the statement concerned.

Importantly, given previous case law which had indicated that moderation of third-party content could result in an operator attracting liability as an editor or publisher, the Act makes clear that the Section 5 defence is not defeated solely by reason of the fact that the operator of the website moderates the statements posted on it by others.

Section 10 Defence

Section 10 of the Act states that a court will not have jurisdiction to hear any action for defamation brought against a person who was not the author, editor or publisher of the applicable material, unless the court is satisfied that it is not reasonably practicable for an action to be brought against the author, editor or publisher.

Privilege

In response to lobbying from the scientific and academic communities, Section 6 of the Act provides protection for scientists and academics publishing in peer-reviewed journals. Section 7 clarifies when the defences of absolute and qualified privilege will be available.

Single Publication

Previously, each new publication of the same defamatory material would give rise to a separate cause of action. This has been of particular concern where defamatory statements have been published online. Section 8 of the Act provides a “single publication” rule that makes clear that the limitation period for bringing a claim will run for one year from the date of first publication.

Overseas Publishers

Section 9 of the Act has been introduced to address the contentious issue of “libel tourism.” It applies to any defendant who is not domiciled in the UK, an EU member state, or a state which is a party to the Lugano Convention (i.e., Iceland, Norway, Denmark and Switzerland). In such circumstances, the courts will not have jurisdiction to hear such a claim unless satisfied that England and Wales is clearly the most appropriate place in which to bring an action.

Removal of Statements

Section 13 of the Act provides that, where a court has given judgment in favour of a claimant in an action for defamation, the court may require (i) the operator of a website on which the statement is posted to remove the statement or (ii) any person who was not the author, editor or publisher of the defamatory statement to stop distributing, selling or exhibiting material containing the statement.

Although we will need to await publication of the proposed “notice and takedown” regulations envisaged by the Act and monitor how the Act is implemented in practice by the courts, the Act appears to introduce more certainty and protection for website operators in terms of liability for third-party content—particularly in light of Tamiz v. Google—and as such has been broadly welcomed.

3. Interim Guidelines on Prosecution of Social Media Communications

As we reported in May 2012, various UK laws are currently being used to regulate the content of tweets and other online messages, although there is no consistency as to which laws will be used to regulate which messages. The relevant laws include section 127 of the Communications Act 2003, section 1 of the Malicious Communications Act 1988, the Contempt of Court Act 1981 and the Serious Crime Act 2007.

In December 2012, in response to a spate of high profile cases prosecuted under these laws, the Crown Prosecution Service (CPS) published interim guidelines in relation to the prosecution of cases in England and Wales that involve communications sent via social media. A public consultation was launched alongside such guidelines; at the end of the consultation, the interim guidelines will be reviewed in light of the responses received, and final guidelines will be published.

The guidelines identify four categories of communications that may constitute criminal offences:

  1. credible threats of violence or damage to property;
  2. communications targeting specific individuals;
  3. breach of court orders; and
  4. communications which are grossly offensive, indecent, obscene or false.

In terms of category 4, the CPS acknowledged the huge number of communications made daily using social media and identified the need to avoid unnecessary prosecutions that would have a chilling effect on free speech. A balance had to be struck between an individual’s right to freedom of expression under Article 10 of the European Convention on Human Rights and the protection of individuals. For these reasons, the CPS made clear that a high threshold must be met before criminal proceedings are brought and that, in many cases, a prosecution is unlikely to be in the public interest.

Category 4 communications fall under section 1 of the Malicious Communications Act 1988 and section 127 of the Communications Act 2003. These provisions refer to communications which are grossly offensive, indecent, obscene, menacing or false. The interim guidelines clarify that for a prosecution to be brought under such laws, a communication must be more than:

  • offensive, shocking or disturbing;
  • satirical, iconoclastic or rude; or
  • the expression of unpopular or unfashionable opinion, or banter or humour (even if distasteful to some or painful to those subjected to it).

Furthermore, a prosecution must be in the public interest, and the guidance emphasizes that it may not be in the public interest to prosecute where a suspect has taken swift action to remove the communication or has expressed genuine remorse, or where other relevant parties (such as service providers) have taken similar swift action to remove the communication in question or otherwise block access to it. The guidelines also stress the need to take into account the instantaneous nature of social media and the fact that the audience of such social media cannot be predicted, e.g., an individual may post something privately which is then repeated and re-published to a much wider audience than originally intended.

The interim guidelines have been broadly welcomed as reflecting a common sense approach, although some organizations concerned with freedom of expression, such as Justice and the Open Rights Group, have suggested in their consultation responses that the interim guidelines do not go far enough and have called for clarification of the underlying laws themselves. In terms of next steps, March 13, 2013 marked the deadline for consultation responses, and the CPS is expected to publish the results of the consultation later this year. Any updated guidelines will then follow.

Conclusion

The UK’s laws are slowly being updated to reflect the digital age, and these latest developments should help social media platform operators and other organizations to better understand how they can stay on the right side of the law. However, as always, organizations will need to keep a close watch on how the courts interpret the new laws to ensure that they continue to operate safely online. And taking a step back, it may be that these new developments will motivate the public to more carefully consider their social media etiquette and how they balance their right of freedom of expression with their social obligations of courtesy and respect for others. As one commentator has noted, “It’s not just the law that needs to catch up with social media, but manners too; and manners can’t be legislated for.”

Europe is currently undergoing a significant reform of its privacy regime. Under the current European Union (EU) Privacy Directive, individuals already have broad rights curtailing companies’ ability to process their personal data. The proposed EU Privacy Regulation seeks to broaden these rights even further. In particular, the proposed “right to be forgotten” may ultimately impose substantial new burdens on companies, especially social media and Internet businesses.

European privacy laws restrict the information that companies can process regarding individuals, and grant individuals several rights with respect to their personal data (e.g., access and correction rights). The current EU Privacy Directive came into force in 1995 and has continued to apply ever since, with various updates in the intervening years. European lawmakers, however, are currently discussing a proposed EU Privacy Regulation that would further strengthen the protection of individuals’ personal data by, among other things, introducing new rights. Among the new rights being proposed is the “right to be forgotten.” Essentially, under this proposed new right, individuals would be able to request—under certain circumstances—that companies erase all information in their systems and databases regarding such individuals. Companies receiving such requests would be obligated to comply.

The right to request removal from a company’s records is not new. Under the current EU Privacy Directive, an individual can request that a company remove his or her data from its system under certain circumstances, for example, because there is no legal basis for the company having such data in the first place or because the individual no longer has a relationship with that company (e.g., if a customer switches mobile phone carriers). However, this current right of removal is not absolute and can take a backseat to other interests, such as a company’s duty to maintain books and records of its business.

The new right to be forgotten would strengthen and expand the current right of removal. In particular, the new right would require a company to not only erase the applicable information and cease any further dissemination of the information but also take all reasonable steps necessary to inform third parties to whom the company has made the data available and to request that such third parties also remove the data from their systems. In other words, the new right would require a complete cleanup of the data originating from the company. A phone company receiving the request would therefore have to not only remove the data from its systems, but also inform, for example, its collections agencies, advertising and marketing agencies and outsourcing providers (such as installation services companies) that the request was made and that they should also remove the applicable data from their systems (as currently drafted, the company would only have to pass along the request, and would not be required to verify compliance with such request by other companies).

The right to be forgotten has been conceived in particular to address social media companies and other online businesses. Regarding such providers, European legislators find it of paramount importance that individuals be able to control what information about them is online (even when they have put the information online themselves), especially with respect to minors under the age of 18. While the rationale for this approach may be understandable, as the right is currently drafted, a social media site that receives a request to be forgotten could be obligated to inform third parties about the request, including other users of the social media site, other social media sites to which the data has been linked (e.g., via Twitter feeds or integration), search engines and any other website that the social media site knows has received the data. Given this expansive scope, the right could create burdensome and costly compliance obligations for social media sites and other online services once the proposed EU Privacy Regulation is in force.

The proposed reform is currently being discussed in the European Parliament and is not expected to be finalized until 2014 at the earliest, after which there will be another two years before it would take effect. The proposals in the Regulation may still change pending ongoing debate, although it is expected that many of the new rights and requirements, including the right to be forgotten, will be maintained in some form.

In the June 2011 issue of Socially Aware, we reported on a Brussels Court of Appeal ruling in favor of Copiepresse, the Belgian association for the protection of French-language press copyright, in a case against Google.  To recap, on May 5, 2011, the Brussels Court of Appeal upheld an earlier ruling that Google had infringed copyright when it displayed links to and extracts of online newspaper articles that were usually only available to paying subscribers of the online newspapers at issue.

In reaction to the May 5th ruling, Google removed all Belgian French-language daily newspapers from its search index and cache on July 15, 2011.  As a result, the websites for the Belgian newspapers Le Soir, La Libre Belgique, Sudpresse and l’Echo were unavailable in Google’s search results on both Google News and Google’s main search page.  The Belgian national daily Le Soir took issue with Google’s actions and, in an article dated July 16, 2011, complained that Google had made “Belgium newspapers disappear.”  Within hours of learning about Google’s actions, Copiepresse entered into negotiations with Google, and reached an agreement resulting in the news sites and certain related content being restored to Google search results by July 18, 2011.  The agreement reportedly allows Google to link to the online news sites in Google’s search results, but not to reproduce extracts of articles in Google’s news service.

Google’s actions were based on its literal interpretation of the Court of Appeal’s ruling, and in the company’s defense, a Google spokesperson stated that Google was merely eliminating all risks of incurring fines of EUR 25,000 (approximately USD 35,600) per day for non-compliance with the ruling.  Google sought the waiver of potential penalties with respect to restoring links to the news sites on its search service.  Nevertheless, Google’s conduct has been characterized by observers as an effort to “punish” Belgium’s French-language press for objecting to Google’s business practices, and sent a stark warning to online news publishers, many of whom depend on Google-generated traffic for customers and ad revenues.

All this comes at a time of heightened antitrust scrutiny for Google in the EU.  For example, German and Italian press associations have already brought antitrust-related complaints against Google.  Portions of the German complaint were referred to the European Commission’s competition service, which opened formal proceedings against the search giant in November 2010 for alleged abuse of a dominant position.  The EU investigation focuses on Google’s alleged lack of transparency in rankings, biased search results and unfair terms and conditions.  Such cases typically take years to conclude.

In any event, this matter is far from over.  Google has not yet filed an appeal against the Brussels Court of Appeal’s ruling, and has until December 2011 to do so.