- Facebook reported strong results for ad revenue in the second quarter of 2014. Mobile advertising was particularly strong, up 30 percent from last year. Mobile ads now account for 62 percent of Facebook’s advertising revenue.
- Russian President Vladimir Putin signed a law requiring Internet companies to store all personal data of Russian users at data centers in Russia. This move would make foreign social media sites subject to Russian laws on government access to information and could chill criticism and opposition to the Russian government on such sites.
- Some freelancers are turning to social media to try to get paid for their work, often by making postings on vendors’ Facebook pages asking for overdue payments. This online shaming tactic is often successful but in some cases may lead to legal jeopardy for the freelancers.
- Arkansas legislators are considering changing a 2013 law after Facebook informed them that the law may have inadvertently made it a crime for a boss and an employee to become Facebook friends.
- As a result of a new partnership between Twitter and the Weather Company, marketers will now be able to purchase ads on Twitter based on the specific weather in a locale, so that, for example, a shampoo manufacturer could target Promoted Tweets to frizzy-haired users in high-humidity environments.
- According to the Los Angeles Times, Facebook prematurely released, then withdrew, a new mobile app called Slingshot that is intended to compete with Snapchat and permit users to send each other photo and video messages.
When most Americans think of drones, they think of unmanned, often weaponized aircraft that are used by governments in areas of conflict for intelligence or combat purposes. However, the proverbial sky is the limit on the potential commercial use of drones. For example, in a December 2013 60 Minutes interview, Jeff Bezos, the founder of Amazon.com, described his company’s efforts to develop GPS-programmed, autonomous drones (or in his words, “octocopters”) to serve as “delivery vehicles” to provide half-hour delivery of your future Amazon order. Although there will be hurdles to the widespread commercial adoption of drones as the Federal Aviation Administration works out the regulatory issues surrounding the licensing and use of drones in our airspace, our not-too-distant future could involve a world in which drones are literally buzzing above our heads.
Drones are, among other things, unmanned, light, easy to deploy and relatively cheap. As a result, companies could use drones for numerous purposes, including scientific research and exploration, monitoring livestock or gas pipelines, remote troubleshooting of technology, finding lost shipments or even as a substitute for the Super Bowl blimp. Because of advances in camera, video and audio technology (and the decreasing cost of that technology), however, drones could also be used to collect and communicate massive amounts of information about individuals and their everyday lives. Imagine a company taking its drones out for a spin on a Saturday morning in your town to conduct market research, observing how the average person mows the lawn, when the average person goes to grab coffee or how many bags of groceries the average person leaves with from the supermarket. Or, imagine a company flying a drone around its factory or retail location to monitor when its employees go on break or what end-caps its customers gravitate to or avoid. As is true with many new technologies, drones raise complex and often troubling privacy issues (remember your first cell phone…it didn’t have a camera or location services, right?).
This year, as the world celebrates the 25th anniversary of the World Wide Web, the Web’s founder, Tim Berners-Lee, has called for a fundamental reappraisal of copyright law. By coincidence, this year we also anticipate a rash of UK and European legislative developments and court decisions centring on copyright and its application to the Web.
In our “Copyright: Europe Explores its Boundaries” series of posts—aimed at copyright owners, technology developers and digital philosophers alike—we will examine how UK and European copyright is coping with the Web and the novel social and business practices that it enables.
A 2013 CareerBuilder survey of hiring managers and human resource professionals reports that more than two in five companies use social networking sites to research job candidates. This interest in social networking does not end when the candidate is hired: to the contrary, companies are seeking to leverage the personal social media networks of their existing employees, as well as to inspect personal social media in workplace investigations.
As employer social media practices continue to evolve, individuals and privacy advocacy groups have grown increasingly concerned about employers intruding upon applicants’ or employees’ privacy by viewing restricted access social media accounts. A dozen states already have passed special laws restricting employer access to personal social media accounts of applicants and employees (“state social media laws”), and similar legislation is pending in at least 28 states. Federal legislation is also under discussion.
These state social media laws restrict an employer’s ability to access personal social media accounts of applicants or employees, to ask an employee to “friend” a supervisor or other employer representative and to inspect employees’ personal social media. They also have broader implications for common practices such as applicant screening and workplace investigations, as discussed below.
Two bills designed to facilitate the removal of minors’ personal information from social networking sites are currently under consideration in the California State Assembly, after being approved in the upper house of the state’s legislature, the Senate, in early 2013. The first of the two bills, S.B. 501, would require a “social networking Internet Web site” to remove, within 96 hours of receiving a registered user’s request, any of that user’s personal identifying information that is accessible online. The site would also be required to remove the personal identifying information of a user who is under the age of 18 upon request of the user’s parent or guardian. The second bill, S.B. 568, would require an “Internet Web site” to remove, upon request of a user under age 18, any content that user posted on the site.
Web site operators, whether they consider themselves to be in the social networking space or not, should remain alert to any forthcoming guidance from state agencies on the language contained in each of these bills. For instance, S.B. 501, as currently drafted, defines a “social networking” site as one that “allows an individual to construct a public or partly public profile within a bounded system, articulate a list of other users with whom the individual shares a connection, and view and traverse his or her list of connections . . . .” On its face, this definition would include not only the likes of Facebook and Twitter, but a host of other sites that primarily offer services such as, for example, ecommerce, gaming or blogging, and additionally provide to their users the ability to maintain profiles and interact with one another.
Furthermore, those who use social networking sites should be aware that S.B. 568 is not the Internet equivalent of an “undo” function for ill-advised content uploads. The bill expressly provides that site operators need only remove a minor’s original posting, and not content that “remains visible because a third party has copied the posting or reposted the content.” Therefore, anything uploaded to sites that facilitate rapid dissemination through “sharing” or “re-tweeting” is likely there to stay.
If S.B. 568 is passed by the Assembly, site operators will have until January 1, 2015, to develop the infrastructure necessary to ensure compliance. However, there is no such grace period currently written into S.B. 501, so companies may benefit from reviewing prior instances of social networking sites being required to rapidly implement new privacy policies in response to enforcement actions and changing laws. In one noteworthy episode in late 2011, Facebook was audited by Ireland’s Office of the Data Protection Commissioner (DPC) in response to complaints over the site’s retention of data that users believed they had deleted. Guided by DPC recommendations, Facebook rolled out over the next year a series of 45 privacy-related policy and operational changes, including changes involving whether and how long user data would be retained.
Lastly, companies should understand these two bills in the context of an expanding body of online privacy laws being enacted at both the state and federal levels, and in key foreign jurisdictions. One question likely to be addressed in coming years is whether laws such as S.B. 501 and 568, as well as similar legislation passed in other states—for example, Maine’s Act to Prevent Predatory Marketing Practices against Minors—are preempted by the federal Children’s Online Privacy Protection Act (COPPA), which contains broad language barring state-level imposition of liability “in connection with an activity” discussed in COPPA and that is inconsistent with COPPA’s mandates. Even if these state laws are found to be preempted, however, social networking companies should nonetheless prepare themselves to adapt to an evolving regulatory landscape in the area of privacy protection, as negotiations proceed in the European Union over a new General Data Protection Regulation that would likewise require the removal of users’ data upon request—and levy fines of up to two percent of global revenue for failure to comply.
In February 2013, we reported on legislative momentum in the Japanese Diet to bring Japan’s sixty-year-old election laws into the brave new world of Web 2.0. On April 19, 2013, that reform effort came to fruition, when a bill permitting the use of the Internet during election campaign periods passed both Houses of the legislature—just in time for the upcoming Upper House poll in July.
The debate revolved around Article 142 of the Public Offices Election Law (POEL), which imposes strict regulations on campaign activities during the two- to three-week “official campaign period” leading up to each national, prefectural and municipal election (also known informally as the “blackout” period). Specifically, Article 142 prohibits the dissemination of “documents and drawings” for electioneering purposes during the blackout period (with limited exceptions), a restriction that until now has been consistently interpreted to prohibit Internet-based electioneering activities altogether. Indeed, Article 142 has been understood to prohibit even the general public from participating in online election-related activities, activities synonymous with many popular grassroots campaign efforts in the United States and elsewhere.
These somewhat antiquated restrictions are now largely part of the past. The amended Article 142 permits candidates for political office, political parties and members of the general public (both Japanese and non-Japanese) to utilize a range of online tools for electioneering activities during the official campaign period, ushering in a new era of net senkyo (the buzzword for Internet-enabled campaigning). The potential benefits for candidates and political parties include inexpensive, twenty-four hour access to constituents, and the freedom to depart from the narrow range of permissible activities that define the current mode of electioneering in Japan: train station stump speeches, pamphlet distribution and showering passersby with megaphoned sound bites from officially sanctioned campaign vans. Another purpose of the legislation was, reportedly, to energize Japan’s infamously “apathetic” youth vote.
Essentially, the amended law divides the universe of online tools into websites and similar services (including blogs and social networking services (SNS)), on the one hand, and electronic mail, on the other. At least with respect to websites and similar services, the old restrictions have largely been dissolved: candidates, political parties and members of the general public are now permitted to update their websites, blogs and social network profiles with election-related activities during the official campaign period, and to engage in direct advocacy and solicitation of votes over the Web.
The specific inclusion of SNS among the types of services for which restrictions have been largely relaxed is crucial, given that SNS carry the greatest potential for political innovation under the net senkyo regime. SNS, of course, have been a major force in American politics for a number of years, and Japanese politicians have themselves flocked to services such as Facebook, YouTube and Twitter for conducting non-campaign-related activities outside the blackout period. However, the prospect of engaging in real-time and (theoretically) two-way communications with candidates for political office during the crucial period when voters are most attuned to the issues could represent a breakthrough for Japanese democracy, as Professor Matthew Wilson forcefully argued in 2011.
The relaxation of the POEL’s restrictions on online electioneering has already impacted Internet technologies based in Japan. Naver, developers of the Japanese homegrown messaging app “Line”—which has surpassed 150 million users worldwide and 45 million users in Japan alone—announced in May 2013 that ten political parties opened official Line accounts in the wake of the POEL amendment. The political parties reportedly hope to use Line to facilitate direct communications with supporters and solicit comments and feedback in real time, in addition to broadcasting news and information to followers using a more traditional one-to-many model.
On the other hand, the Diet has maintained much tighter regulation of the general public’s use of electronic mail (as opposed to websites, SNS and similar services). Although the POEL now permits parties and candidates to use electronic mail for electioneering purposes, lawmakers decided to preserve existing restrictions on voters’ use of campaign-related electronic mail, or to at least postpone resolution of the issue to a later date, in an apparent response to fears of “negative campaigning,” defamation, spoofing, identity theft and spam. As a result, the general public is still prohibited from sending electronic mails for election-related purposes during the blackout period—including from mobile phone-associated electronic mail accounts, which are widely used in Japan.
This creates an interesting tension: although a member of the general public may be free to use Facebook to express support for his or her favorite candidate, sending an electronic mail message containing the same content would continue to be off limits under the POEL. As many have observed, this leads to counterintuitive results, whereby someone who forwards to his or her friends a candidate’s official campaign electronic mail blast may potentially be liable for a fine (up to JPY500,000) or imprisonment (up to two years) and face disenfranchisement, while someone who simply copies and pastes the same information into a Facebook message would probably not run afoul of the POEL.
There is some skepticism amidst the excitement around net senkyo. According to a survey conducted jointly by the Sankei Shimbun and Fuji News Network, 56.8% of respondents said that they would not use online campaign information to inform their voting choices, while only 39.3% said that they would. (On the other hand, among voters in their twenties, the number of respondents expressing affirmative interest jumped to 62.7%. This bodes well for the effort to remobilize Japan’s youth vote, which reportedly was one of the original drivers of POEL reform.)
Further, anxiety over the twin threats of narisumashi, or identity theft, and defamation has not abated, and both Internet service providers and law enforcement authorities are already preparing for potential hiccups in the upcoming election cycle. Given the prevalence of Internet-enabled negative campaigning in other countries (for example, in Korea during 2012), it may be reasonable to worry about the downsides of the net senkyo revolution. However, as Professor Wilson has pointed out, the threat of fraud and other bad acts is omnipresent even outside any “official campaign period,” and both traditional law, such as the law of defamation, and technology itself—e.g., direct verification of accounts on Facebook and Twitter—can help mitigate these risks. SNS and similar technologies may even empower politicians to respond to false assertions more quickly and effectively.
Even though the amended Article 142 has become law in Japan, there is no way to predict the extent of its impact on Japanese political culture. But the surging popularity of SNS platforms and other mobile and online communications platforms makes it clear that net senkyo will impact the way Japanese citizens interact with political actors and political information in a lasting way.
Article courtesy of Morrison & Foerster’s Mobile Payments Practice
Lawmakers in Washington, D.C., continue to show interest in understanding and developing regulatory proposals relating to mobile apps. The interest appears to be driven, at least in part, by policymakers’ concerns about consumer privacy when using mobile phones and other smart hand-held devices. The issue of consumer privacy, as well as the security of financial information and the use of mobile apps, has also been raised in the context of Congressional hearings held to understand the new ways in which consumers are paying, and taking payments, via smartphone.
The recent introduction of a bill focusing on mobile apps and privacy issues is another indicator of ongoing legislative interest in mobile phone technology and ways in which smartphones are used. On May 9, 2013, Representative Hank Johnson (D-GA) introduced H.R. 1913, the “Application Privacy, Protection, and Security Act of 2013” (“APPS Act”). H.R. 1913 was referred to the House Committee on Energy and Commerce for consideration. As of June 4, 2013, the bill had five co-sponsors.
Representative Johnson’s introduction of the APPS Act follows the release, in January 2013, of a discussion draft of the bill that was developed through an Internet-based legislative project launched by the congressman’s office in July 2012. The following provides a brief overview of the APPS Act, as introduced.
Under the APPS Act, app developers would be required to provide users with a notice, before collecting their personal data, describing the terms and conditions governing the collection, use, storage and sharing of personal data. Developers would also be required to obtain the consent of the users to these terms and conditions.
The bill would require this notice to users to include the following:
- The categories of personal data that the app will collect;
- The purposes for which the personal data will be used;
- The categories of third parties with which the personal data will be shared; and
- A “data retention policy” that governs the length of time for which the personal data will be stored and a description of the user’s rights under the bill to notify the app developer and request that the developer refrain from collecting additional personal data through the app.
The APPS Act would direct the Federal Trade Commission (FTC) to issue regulations specifying the format, manner and timing of the notice. In promulgating the regulations, the FTC would consider how to ensure the “most effective and efficient” communication to the user regarding the treatment of personal data.
The APPS Act would also require app developers to take reasonable and appropriate measures to prevent unauthorized access to personal data collected by apps. This provision demonstrates that concerns about consumer privacy continue to be a driving force for policymakers in crafting legislative proposals.
FTC Enforcement and Safe Harbor
The APPS Act would provide for FTC enforcement, pursuant to the FTC’s unfair or deceptive acts or practices authority under the FTC Act, but would not foreclose private rights of action, or actions by state attorneys general or other state officials. Pursuant to a safe harbor provision, app developers would satisfy the APPS Act’s requirements, and requirements of implementing regulations, by adopting and following a code of conduct for consumer data privacy developed in the multi-stakeholder process convened by the U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA). The NTIA process is an outgrowth of the White House white paper, “Consumer Data Privacy in a Networked World,” which advocated the coupling of voluntary privacy codes of conduct with federal legislation establishing consumer “Bill of Rights” principles.
The full text of H.R. 1913 is accessible on the Web site of the Government Printing Office at: http://www.gpo.gov/fdsys/pkg/BILLS-113hr1913ih/pdf/BILLS-113hr1913ih.pdf.
History is littered with examples of the law being slow to catch up with the use of technology. Social media is no exception. As our Socially Aware blog attests, countries around the world are having to think fast to apply legal norms to rapidly evolving communications technologies and practices.
Law enforcement authorities in the United Kingdom have not found the absence of a codified “social media law” to be a problem. They have applied a “horses for courses” approach, and brought prosecutions or allowed claims under a range of different laws that were designed for other purposes. Of course, this presents problems to users, developers and providers of social media platforms, who can by no means be certain which legal standards apply.
The use of Twitter and other forms of social media is ever increasing and the attraction is obvious—social media gives people a platform to share views and ideas. Online communities can bring like-minded people together to discuss their passions and interests; and, with an increasing number of celebrities harnessing social media for both personal and commercial purposes, Twitter often provides a peek into the lives of the rich and famous.
As an increasing number of Twitter-related cases have hit the front pages and the UK courts, it is becoming clear that, in the United Kingdom at least, the authorities are working hard to re-purpose existing laws to catch unwary and unlawful online posters.
It’s typically hard to argue that someone who maliciously trolls a Facebook page set up in the memory of a dead teenager or sends racist tweets should not be prosecuted for the hurt they cause. But in other cases, it may not be so clear-cut—how does the law decide what is and what is not unlawful? For example, would a tweet criticizing a religious belief be caught? What about a tweet that criticizes someone’s weight or looks? Where is the line drawn between our freedom of expression and the rights of others? Aren’t people merely restating online what was previously (and still is) being discussed down the pub?
A range of UK laws is currently being used to regulate the content of tweets and other online messages. At the moment, there is no particular consistency as to which laws will be used to regulate which messages. It appears to depend on what evidence is available. As a spokesman of the Crown Prosecution Service remarked, “Cases are prosecuted under different laws. We review the evidence given to us and decide what is the most appropriate legislation to charge under.”
Communications Act 2003
In 2011, there were 2,000 prosecutions in the United Kingdom under section 127 of the Communications Act 2003. A recent string of high-profile cases has brought the Communications Act under the spotlight.
Under section 127(1)(a), a person is guilty of an offense if he sends “a message or other matter that is grossly offensive or of an indecent, obscene or menacing character” by means of a public electronic communications network. The offense is punishable by up to six months’ imprisonment or a fine, or both.
So… what is “grossly offensive” or “indecent, obscene or menacing”?
In DPP v Collins, proceedings were brought under section 127(1)(a) in relation to a number of offensive and racist phone calls made by Mr. Collins to the offices of his local Member of Parliament. The House of Lords held that whether a message was grossly offensive was to be determined as a question of fact, applying the standards of an open and just multiracial society and taking into account the context of the words and all relevant circumstances. The yardstick was the application of reasonably enlightened, but not perfectionist, contemporary standards to the particular message set in its particular context. The test was whether a message was couched in terms that were liable to cause gross offense to those to whom it related. The defendant had to have intended his words to be grossly offensive to those to whom they related, or to have been aware that they might be taken to be so. The court made clear that an individual is entitled to express his views and to do so strongly; the question, however, was whether he had used language that went beyond the pale of what was tolerable in society. The court considered that at least some of the language used by the defendant could only have been chosen because it was highly abusive, insulting and pejorative. The messages sent by the defendant were grossly offensive and would be found by a reasonable person to be so.
Proceedings are also being brought under section 127(1)(a) for racist messages. In March 2012, Joshua Cryer, a student who sent racially abusive messages on Twitter to the ex-footballer, Stan Collymore, was successfully prosecuted under section 127(1)(a) and sentenced to two years’ community service and ordered to pay £150 costs. (However, interestingly, Liam Stacey, who was sentenced to 56 days’ imprisonment for 26 racially offensive tweets in relation to Bolton Wanderers footballer Fabrice Muamba, was charged with racially aggravated disorderly behavior with intent to cause harassment, alarm or distress under section 31 of the Crime and Disorder Act 1998, rather than under the Communications Act).
Similarly, religious abuse is also being caught under the Act. In April 2012, Amy Graham, a former police cadet, was charged under the Communications Act for abusive anti-Muslim messages posted on Twitter. She awaits sentencing.
These cases may appear relatively clear-cut, but there have been some other high-profile cases where the grounds for prosecution appear more questionable.
In April 2012, John Kerlen was found guilty of sending tweets that the court determined were both grossly offensive and menacing, for posting a picture of a Bexley councilor’s house and asking: “Which c**t lives in a house like this. Answers on a postcard to #bexleycouncil”; followed by a second tweet saying: “It’s silly posting a picture of a house on Twitter without an address, that will come later. Please feel free to post actual s**t.” He avoided a jail sentence and was instead sentenced to 80 hours of unpaid labor over 12 months, ordered to pay £620 in prosecution costs, and subjected to a five-year restraining order. Were these messages really menacing or grossly offensive? If he was going to be prosecuted, was the Communications Act the appropriate law, or should he have been prosecuted for incitement to cause criminal damage (if he was genuinely inciting others to post feces) or for harassment?
Even more controversial is the case that has become widely known as the “Twitter joke trial.” Paul Chambers was prosecuted under section 127(1)(a) for sending the following tweet: “Crap! Robin Hood airport is closed. You’ve got a week and a bit to get your s**t together otherwise I’m blowing the airport sky high!!” He appealed against his conviction to the Crown Court. In dismissing the appeal, the judge said his tweet was “menacing in its content and obviously so. It could not be more clear. Any ordinary person reading this would see it in that way and be alarmed.” This was despite the fact that Robin Hood Airport had classified the threat as non-credible on the basis that “there is no evidence at this stage to suggest that this is anything other than a foolish comment posted as a joke for only his close friends to see.” The case attracted a huge following among Twitter users, including high-profile users such as Stephen Fry and Al Murray. Following a February 2012 appeal to the High Court, it was announced on May 28 that the High Court judges who heard the case were unable to reach agreement, and that the appeal would therefore need to be reheard by a three-judge panel. Such a “split decision” is extremely unusual. No date has yet been set for the new hearing.
Malicious Communications Act 1988
Cases are also being brought under section 1 of the Malicious Communications Act 1988. Under this Act, it is an offense to send an electronic communication that conveys a message that is grossly offensive to another person, where the message is sent with the purpose of causing distress or anxiety to that person.
In February 2012, Sunderland fan Peter Copeland received a four-month suspended sentence after posting racist comments on Twitter aimed at Newcastle United fans. More recently, a 13th person was arrested by police investigating the alleged naming of a rape victim on social media sites after Sheffield United striker Ched Evans was jailed for raping a 19-year-old woman. The individuals involved have been arrested for offenses under various laws, including the Malicious Communications Act.
So, what’s next for malicious communications? Perhaps sexist remarks.
Earlier this month, Louise Mensch, a Member of Parliament, highlighted a variety of sexist comments that had been sent to her Twitter account. In response to this, Stuart Hyde, who is the Chief Constable of Cumbria Police and the national e-crime prevention lead for the Association of Chief Police Officers, remarked that the comments made to Mensch were “horrendous” and “sexist bigotry at its worst.” He referred to the offenses available to the authorities: “We are taking people to court. People do need to understand that while this is a social media it’s also a media with responsibilities and if you are going to act illegally using social media expect to face the full consequences of the law. Accepting that this is fairly new, even for policing … we do need to take action where necessary.” Whether any of these comments will lead to charges remains to be seen.
In another example of online abuse, Alexa Chung, the TV presenter, recently received nasty comments criticizing her weight in response to some Instagram photos she had posted on Twitter. She removed the photos in response, but is it possible that these kinds of messages could be considered grossly offensive and therefore unlawful?
We will have to wait and see what other cases are brought under the Communications Act and Malicious Communications Act and what balance is ultimately struck between freedom of expression and protecting individuals from receiving malicious messages. However, it is not just criminal laws relating to communications that could apply to online behavior. Recent events have also led to broader legislation such as the Contempt of Court Act and the Serious Crime Act being considered in connection with messages posted on Twitter and other social media services.
Contempt of Court Act 1981
If someone posts information online that is banned from publication by the UK courts, they could be found in contempt of court under the Contempt of Court Act 1981 and liable for an unlimited fine or a two-year prison sentence. However, as we saw in 2011, the viability of injunctions in the age of social media is questionable. When the footballer, Ryan Giggs, requested that Twitter hand over details about Twitter users who had revealed his identity in breach of the terms of a “super-injunction,” hundreds of Twitter users simply responded by naming him again. No users have, to date, been prosecuted for their breach of the injunction.
In another high profile case, in February 2012, the footballer, Joey Barton, was examined for contempt of court when he tweeted some comments regarding the trial of footballer, John Terry. Under the Contempt of Court Act 1981, once someone has been arrested or charged, there should be no public comments about them which could risk seriously prejudicing the trial. In that case, it was found that Barton’s comments would not compromise the trial and therefore he was not prosecuted for his comments.
Serious Crime Act 2007
Last summer’s riots in England led to Jordan Blackshaw and Perry Sutcliffe-Keenan being found guilty under sections 44 and 46 of the Serious Crime Act and jailed for having encouraged others to riot. Blackshaw had created a Facebook event entitled “Smash d[o]wn in Northwich Town” and Sutcliffe-Keenan had invited people to “riot” in Warrington. Both men were imprisoned for four years.
Defamation Act 1996
Of course, posting controversial messages online is not just a criminal issue. Messages can also attract civil claims for defamation, under the Defamation Act 1996.
In March 2012, in the first UK ruling of its kind, former New Zealand cricket captain Chris Cairns won a defamation claim against Lalit Modi, former Indian Premier League (IPL) chairman, for defamatory tweets. Mr. Modi had tweeted that Mr. Cairns had been removed from the list of players eligible and available to play in the IPL “due to his past record of match fixing.” Mr. Cairns was awarded damages of £90,000 (approximately £3,750 per word tweeted).
As in other countries, a whole host of UK laws that were designed in an age before social media—even, in some cases, long before the Internet as we know it—are now being used to regulate digital speech. Digital speech, by its very nature, leaves permanent, easily searchable records, making the job of the police and the prosecution much easier.
Accordingly, these types of cases are only going to increase, and it will be interesting to see where UK courts decide to draw the line between freedom of expression and the law. One would hope that a sense of proportionality and common sense will be used so that freedom of expression offers protection for ill-judged comments said in the heat of the moment or “close to the knuckle” jokes, while ensuring that the victims of abusive and threatening trolls are rightly protected. In the meantime, users need to be very careful when tweeting and posting messages online, particularly in terms of the language they use. Tone can be extremely difficult to convey in 140 characters or fewer.
One has to feel sorry for the UK holiday makers who were barred in January 2012 from entering the United States for tweeting that they were going to “destroy America” (despite making clear to the U.S. airport officials who detained them that “destroy” was simply British slang for “party”). No doubt they will think twice before clicking that Tweet button in the future.