Socially Aware Blog

The Law and Business of Social Media

Three Steps to Help Ensure the Enforceability of Your Website’s Terms of Use

Posted in E-Commerce, Terms of Use

Operators of social media platforms and other websites typically manage their risks by imposing terms of use or terms of service for the sites. As we previously wrote, websites must implement such terms properly to ensure that they are enforceable. Specifically, users must be required to manifest acceptance of the terms in a manner that results in an enforceable contract. But what specifically constitutes such acceptance, and what steps should website operators take to memorialize and maintain the resulting contract? This article attempts to answer these practical questions.

Use Boxes or Buttons to Require Affirmative Acceptance

Website operators should avoid the cardinal sin in online contract formation: burying terms of use in a link at the bottom of a website and attempting to bind users to those terms based merely on their use of the website. Outside of some specific (and, for our purposes, not particularly relevant) circumstances, such approaches, often confusingly referred to as “browsewrap” agreements, will not result in a valid contract because there is no objective manifestation of assent. (Note, though, that even so-called browsewrap terms may be helpful in some circumstances, as we described in this post.)

Moreover, even website terms presented through a “conspicuous” link may not be enforceable if users are not required to affirmatively accept them. For example, in Nguyen v. Barnes & Noble, Inc., Barnes & Noble did include a relatively clear link to its website terms on its checkout page, but nothing required users to affirmatively indicate that they accepted the terms. The Ninth Circuit held, therefore, that Barnes & Noble could not enforce the arbitration provision contained in the terms. While the specific outcome in Barnes & Noble arguably is part of a Ninth Circuit trend of declining to enforce arbitration clauses on the grounds that no contract had been formed, nothing in the opinion limits the Ninth Circuit’s holding to arbitration provisions. The case is an important cautionary tale for all website operators.

To avoid the Barnes & Noble outcome, website operators should implement two key features when users first attempt to complete an interaction with the site, such as making a purchase, registering an account, or posting content: (1) present website terms conspicuously, and (2) require users to click a checkbox or an “I accept” button accompanying the terms. The gold-standard implementation is to display the full text of the website terms above or below that checkbox or button. If they fit on a single page, that is helpful, but an easy-to-use scroll box can work as well. Website operators taking the scroll box approach may consider requiring users to actually scroll through the terms before accepting them.
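
By way of illustration, here is a minimal browser-side sketch of that gating logic, assuming a scroll box implementation; the element IDs and page structure are our own hypothetical choices, not drawn from any case discussed here:

```typescript
// A minimal sketch only; element IDs and thresholds are hypothetical.
const termsBox = document.getElementById("terms-scroll-box") as HTMLDivElement;
const acceptBox = document.getElementById("accept-checkbox") as HTMLInputElement;
const submitBtn = document.getElementById("sign-up-button") as HTMLButtonElement;

let scrolledToEnd = false;

// Optionally require users to actually scroll through the terms before accepting.
termsBox.addEventListener("scroll", () => {
  if (termsBox.scrollTop + termsBox.clientHeight >= termsBox.scrollHeight - 1) {
    scrolledToEnd = true;
    updateSubmitState();
  }
});

acceptBox.addEventListener("change", updateSubmitState);

function updateSubmitState(): void {
  // Keep the button disabled until the user has scrolled through the terms
  // and affirmatively checked the acceptance box.
  submitBtn.disabled = !(scrolledToEnd && acceptBox.checked);
}
```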

Many website operators, however, choose not to present the terms themselves on the page where a user is required to indicate acceptance. Instead, they present a link to the terms alongside a checkbox or button. Courts have ratified this type of implementation as long as it is abundantly clear that the link contains the website terms and that checking a box or clicking a button indicates acceptance of those terms. This was essentially the implementation at issue in a 2012 case from the Southern District of New York, Fteja v. Facebook, Inc. Specifically, signing up for Facebook required users to click a button labeled “Sign Up,” and immediately below that button was the text, “By clicking Sign Up, you are indicating that you have read and agree to the Terms of Service.” The phrase “Terms of Service” was underlined and operated as a link to the terms. The court reasoned that whether the plaintiff read the terms of service was irrelevant because, for the plaintiff and others “to whom the internet is an indispensable part of daily life,” clicking on such a link “is the twenty-first century equivalent” of turning over a cruise ticket to read the terms printed on the back. As sure as vacationers know they can read the small print on their cruise tickets to find the terms they accept by embarking on the cruise, the plaintiff knew where he could read the terms of use he accepted by using Facebook. The parties formed an enforceable contract once the plaintiff clicked the “Sign Up” button.

This reasoning, however, does not necessarily mean that an implementation like the one at issue in Fteja will always result in an enforceable contract. Because it relied on the plaintiff’s admitted proficiency in using computers and the Internet, the court likened the “Terms of Service” link to the backside of a cruise ticket. This leaves room to argue for a different outcome when a website operator should expect that novice computer users will be among its visitors. The simple way to avoid that (perhaps far-fetched) argument is to expressly identify the hyperlink as a means to read the contract terms. That approach succeeded in Snap-On Business Solutions v. O’Neil & Assocs., where the website expressly instructed users, “[i]mmediately following this text is a green box with an arrow that users may click to view the entire EULA.”

These cases illustrate how important it is to expressly connect users’ affirmative actions to the terms of use. In particular, the checkbox or button and accompanying text should clearly indicate that the user’s click signifies acceptance of the website terms. The terms should be presented in a clear, readable typeface and be printable, and the “call to action” text should be unambiguous—not susceptible to interpretation as anything other than acceptance of the website terms.

Here are some examples, followed below by a brief markup sketch:

  • “By checking this box ☐, I agree to the ‘Terms of Use’ presented above on this page.”
  • “By clicking ‘I Accept’ immediately below, I agree to the ‘Terms of Service’ presented in the scroll box above.”
  • “Check this box ☐ to indicate that you accept the Terms of Use (click this link to read the Terms of Use).” (In this example, the website terms would be presented through a link, as in the Fteja case. The added instruction, “click this link to read the Terms of Use,” avoids any potential argument that a Fteja-type implementation only works where users can be assumed not to be novice computer users.)
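
And here is the markup sketch mentioned above, showing how the third example might be rendered so that the hyperlink is expressly identified as the place to read the terms; the URL, element IDs and helper function are illustrative assumptions only:

```typescript
// Hypothetical render helper; the wording tracks the third example above,
// and the URL and element IDs are illustrative only.
function renderConsentRow(termsUrl: string): string {
  return `
    <label>
      <input type="checkbox" id="accept-checkbox" required>
      Check this box to indicate that you accept the Terms of Use
      (<a href="${termsUrl}" target="_blank">click this link to read the Terms of Use</a>).
    </label>`;
}

document.getElementById("consent-container")!.innerHTML =
  renderConsentRow("https://www.example.com/terms");
```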

Ensure You Can Prove Affirmative Acceptance

Even website operators that properly implement website terms often neglect another important task: making sure they can prove that a particular user accepted the terms. One common approach—to present declarations from employees—is illustrated in Moretti v. Hertz Corp., a 2014 case from the Northern District of California. The employees in that case affirmed via declarations that (1) a user could not have used the website without accepting the website terms, and (2) the terms included the relevant provision when the use took place.

The approach in Moretti, however, has a potential weakness: it depends on declarants’ credibility and their personal memory of when the terms of service included certain provisions. Website operators can address that vulnerability by emailing a confirmation to users after they accept the website terms and then archiving copies of those messages. To limit the volume of email users receive, this confirmation could be included with other communications, such as messages confirming an order or registration. This approach has two benefits. First, the confirmation email provides further notice to the user of the website terms. Second, instead of (or in addition to) invoking employees’ memory of historical facts to establish which terms were in effect at the relevant time, employees can simply authenticate copies of the messages based on their knowledge of the messaging system.
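
To make the record-keeping idea concrete, here is a rough sketch of capturing and confirming acceptance, assuming hypothetical storage and email services; nothing here reflects the actual systems at issue in Moretti:

```typescript
// Hypothetical types and services; every field and method name here is
// an illustrative assumption, not a description of any real system.
interface AcceptanceRecord {
  userId: string;
  termsVersion: string; // e.g., a version tag or a hash of the terms text
  acceptedAt: Date;
}

interface Store { save(r: AcceptanceRecord): Promise<void>; }
interface Mailer { send(to: string, subject: string, body: string): Promise<void>; }

async function recordAcceptance(
  record: AcceptanceRecord,
  userEmail: string,
  store: Store,
  mailer: Mailer,
): Promise<void> {
  // Persist the acceptance event so it can later be authenticated as a
  // routine business record, rather than relying on employees' memories.
  await store.save(record);

  // Fold the confirmation into an email the user would receive anyway
  // (e.g., an order or registration confirmation), and archive a copy.
  await mailer.send(
    userEmail,
    "Your registration is confirmed",
    `On ${record.acceptedAt.toISOString()} you accepted our Terms of Use ` +
      `(version ${record.termsVersion}).`,
  );
}
```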

Provide Notice of Any Changes

Some of the most difficult implementation issues arise when a website operator wishes to modify its terms. Website terms often purport to allow the operator to change the terms whenever it wishes, but unilateral modifications may not be enforceable if they’re not implemented properly because—like any other contract amendment—modification of website terms requires the agreement of both parties. Ideally, website operators should require users to expressly accept any changes or updates through a mechanism like the one used to obtain their acceptance of the website terms in the first place.

Many website operators, however, are understandably reluctant to add friction to the user experience by repeating such legal formalities every time they modify their terms. In those cases, operators ideally should at least provide users with clear advance notice of modifications. The notice should specify when the changes will go into effect and state that continued use after that date will constitute acceptance of the changes. For example, in Rodriguez v. Instagram, Instagram announced a month in advance that it planned to modify its terms, and the plaintiff continued to use the site after the effective date of the change. On those facts, the trial court found that the plaintiff agreed to the modified terms by continuing to use the service. While Instagram and other cases make clear that unilateral changes require, at the very least, advance notice, other courts may be less willing to enforce unilateral modifications without express acceptance by the user, especially where the factual issue of notice is contested. Obtaining express acceptance remains the safest approach.
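
A hedged sketch of how a site might wire up this logic, using hypothetical version and effective-date fields:

```typescript
// Hypothetical fields; actual storage and UI wiring will vary by site.
interface User {
  acceptedTermsVersion: string;
}

interface TermsUpdate {
  version: string;
  effectiveDate: Date; // announced well in advance of taking effect
}

// On each visit, decide whether to show an advance-notice banner or to
// require fresh express acceptance of the modified terms (the safest path).
function nextStep(
  user: User,
  update: TermsUpdate,
  now: Date,
): "none" | "notice" | "reaccept" {
  if (user.acceptedTermsVersion === update.version) return "none";
  if (now < update.effectiveDate) return "notice"; // advance-notice period
  return "reaccept";
}
```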

Following the above guidelines will increase the likelihood that courts will view website terms—and the important risk mitigation provisions they contain, such as disclaimers, limitations of liability and dispute resolution provisions—as enforceable contracts.

How UK Brands That Use Vlogger Endorsements & Social Media for Marketing Can Stay on the Right Side of the Law

Posted in Compliance, Marketing, Online Promotions, Online Reviews

Vloggers have become the reality stars of our times. For an increasing number of social media users, what was once a hobby is now a lucrative career. You may be surprised to learn that Felix Kjellberg (aka “PewDiePie”), a 25-year-old Swedish comedian and the world’s most popular YouTube star, is reported to have earned $8.5 million in 2014.

The UK has its own vlogger superstars in the form of Zoella and Alfie Deyes. Together, this power couple of social media has amassed 12 million YouTube subscribers, 6.8 million Instagram followers and almost 6 million Twitter followers. Zoe Sugg (aka “Zoella”), 25, started vlogging in 2009 and has since become a brand in fashion and beauty marketing, publishing a novel and creating a line of products. Alfie, 21, started his Pointless vlog when he was 15 and has since published a series of books. It was even announced earlier this year that tourists will soon be able to see waxworks of Zoella and Alfie at London’s Madame Tussauds. But Zoella and Alfie are not alone; there is now a whole generation of vloggers rivalling film and sports stars in the popularity ranks. Indeed, we now even have a host of social media talent agencies formed to help propel vloggers to superstardom.

Vloggers are particularly popular with young people who enjoy the more intimate connection they can have with these approachable idols. Therefore, brands who want to target a young demographic are increasingly keen to work with vloggers. This collaboration typically involves brands paying vloggers to feature in “advertorial vlogs,” i.e., videos created in the usual style of the vlogger, but with the content controlled by the brand.

Of course, from a legal perspective there is nothing inherently wrong with a commercial relationship between a brand and a vlogger. However, particularly where you have the influence of celebrity plus an impressionable audience, vloggers and brands need to be very careful that they don’t fall foul of the consumer protection rules designed to guard against unfair advertising practices. In August 2015, the UK advertising regulator issued new guidance to help vloggers and brands be responsible and stay on the right side of the law. In this blog post, we will identify the key issues raised by the guidance. We will also provide an overview of some of the other key legal issues that brands need to be aware of when using social media for marketing and advertising in the UK.


The Consumer Protection from Unfair Trading Regulations 2008 (“CPRs”) prohibit certain unfair commercial practices. These include using editorial content in the media to promote a product where a trader has paid for the promotion without making that clear (advertorial).

The Committee of Advertising Practice Code (the “CAP Code”) acts as the rule book for non-broadcast advertisements in the UK and requires that advertising must be legal, decent, honest and truthful. The CAP Code was extended to cover social media in 2011. The CAP Code is enforced by the Advertising Standards Authority (“ASA”), the regulator responsible for advertising content in the UK. The ASA has the power to remove or have amended any ads that breach the CAP Code.

Rule 2.1 of the CAP Code states that marketing communications must be obviously identifiable as such. Rule 2.4 states that marketers and publishers must make clear that advertorials are marketing communications, e.g., by labelling them “advertisement feature.” These rules apply to marketing communications on vlogs in the same way as they would to marketing communications that appear on blogs or other online sites. But as the CAP Executive noted last year, a number of marketers have “fallen foul of the ASA by blurring the line, intentionally or not, between independent editorial content written about a product and advertising copy.”

In November 2014, the ASA’s ruling against Mondelez provided a clear example of a brand failing to comply with the CAP Code. Mondelez had engaged five celebrity vloggers to promote its Oreo cookies by participating in a race to lick cream off a cookie as quickly as possible. The channels featuring the vlogs typically contained non-promotional content, and the vlogs failed to clearly indicate the commercial relationship between Mondelez and the vloggers. The reference to “Thanks to Oreo for making this video possible” might indicate that Oreo had been involved in the process, but it did not make clear that the advertiser had paid for and had editorial control over the videos. As a result, the advertorials were banned.

In another high-profile case, in May 2015, a YouTube video providing makeup tutorials featuring the popular vlogger Ruth Crilly, who has 300,000 subscribers on YouTube, was banned by the ASA for failing to clearly identify itself as marketing material. The video appeared on the “Beauty Recommended” YouTube channel, which is operated by Procter & Gamble, with the intention of marketing its Max Factor range of products. The ASA stated that the channel page provided “no indication” that it was a Max Factor marketing tool, and emphasized that “it wasn’t clear until a viewer had selected and opened the video that text, embedded in the video, referred to Procter & Gamble….We consider that viewers should have been aware of the commercial nature of the content prior to engagement.”


In August 2015, the CAP Executive published guidance to help vloggers and brands better understand their obligations under the advertising rules. While the guidance is not binding, it’s a helpful statement of the rules as they apply to vlogs.

Advertorial. Where a brand collaborates with a vlogger on a video that is produced by the brand and published on the brand’s website or social media page, this is very likely to be a marketing communication—but it wouldn’t be an advertorial. However, where a vlog is made in the usual style of the vlogger, but the content of the vlog is controlled by the brand and the vlogger has been paid (not necessarily with money) for the vlog, this would be an advertorial. Because the extent of the brand involvement may not be obvious to the viewer, this needs to be made explicit upfront so that viewers are aware that the video is an ad before engaging. Labels such as “ad,” “ad feature,” “advertorial,” or similar are likely to be acceptable, whereas labels such as “sponsored by,” “supported by” and “thanks to X for making this possible” should be avoided, as these would not make it sufficiently clear that the brand had control over the content of the vlog. Viewers should be aware that they are selecting an ad to view before they watch it so that they can make an informed choice. Finding out that something is an ad after having selected it, at the end of a video or halfway through, is not sufficient.

Commercial breaks/product placement. In terms of commercial breaks or product placement within a vlog, it needs to be clear when the ad or product placement starts. This could be via onscreen text, a sign, logo or the vlogger explaining that they have been paid to talk about a particular item by the brand.

Vlogger-promotion. If the sole content of a vlog is a promotion of the vlogger’s own merchandise, this would not be considered an advertorial. Rather, it would be a marketing communication. The video title should make clear that the video is promoting the vlogger’s products, but it’s unlikely that the vlog itself will need labelling as an ad if it’s clear from the context that it’s a marketing communication.

Sponsorship. Where a brand has sponsored a vlog, but the brand has no control over the vlog, this would not be considered an ad and would not be caught by the CAP Code. However, to ensure compliance with the CPRs, the vlogger should give a nod to the sponsor in order to disclose the nature of the commercial relationship.

Free Items. Vloggers may be sent free items by a brand. Where there is no condition attached to the item by the brand and the vlogger can choose whether or not to cover the item in a vlog, this would not be an ad caught by the CAP Code. In addition, where the brand provides the vlogger with free products on the condition that they are reviewed independent of any brand input, then, as the brand retains no control over the vlog, the video would not have to be labelled as an advertorial. However, in such circumstances, the vlogger should disclose to consumers that the vlogger has an incentive to talk about the product, along with the nature of the incentive, to ensure compliance with the CPRs.

Other Social Media Marketing

Vlogging isn’t the only aspect of social media marketing that creates compliance challenges, of course. There are other issues that brands need to be aware of when advertising and marketing using social media in the UK. We have outlined some of these below. For issues specific to the UK financial services sector, please see our previous blog post: UK’s Financial Services Regulator: No Hashtags in Financial Promotions.

Native Advertising (written advertorial). A native ad is advertising that resembles editorial content. Native ads are a popular form of content marketing, but they again raise the concern that consumers may not realize the content is advertising, in breach of the CPRs and the CAP Code. Guidance issued in February 2015 by the IAB (the UK trade association for digital advertising) advised advertisers to provide consumers with prominently visible visual cues to enable them to understand, immediately, that they are engaging with marketing content that has been compiled by a third party in a native ad format and is not editorially independent. The guidance suggests clear brand logos and the use of different design formatting for native ads. It also advises the publisher or provider of the native ad format to use a reasonably visible label that makes clear that a commercial arrangement is in place.

Employee Endorsements. Companies are keen to encourage their employees to use social media and become advocates for the company. However, companies must be careful; if an employee chooses to discuss his or her employer’s brand favorably on social media, then this is likely to be construed as an advert under the CAP Code, even where the employee is acting independently and not at the request of his or her employer. An employee endorsement that is not transparent also runs the risk of breaching the CPRs. Therefore, employees must make clear that they are affiliated with their employer when making any company endorsements on social media. Organizations should also provide employees with clear social media policies and training to avoid inadvertent advertising.

Ads via Twitter and Celebrity Endorsements. As mentioned above, the CPRs and CAP Code require users to be aware that they are viewing an advert. In terms of Twitter, this means that promotional tweets should be accompanied by the hashtag #spon or #ad. This is particularly the case where the advert may not be immediately apparent as a promotional tweet, e.g., where it is in the form of a celebrity endorsement. As with promotions using vloggers, companies are increasingly keen to use celebrities in connection with promotions in order to increase their brand awareness within that celebrity’s group of followers.

In March 2012, an advertising campaign by Mars involved reality star Katie Price tweeting about the Eurozone crisis, and soccer player Rio Ferdinand engaging his followers in a debate about knitting. The campaign involved four teaser tweets by each celebrity to focus attention on their Twitter profile (but with no marketing content), culminating with a final tweet that was an image of the celebrity with a Snickers chocolate bar and the line “you’re not you when you’re hungry @snickersUK #hungry #spon.” While the final tweet was clearly labelled as an advert, the ASA ruled that the first four tweets only became marketing communications at the point the fifth and final tweet was sent (as the first four tweets contained no marketing references). As a result, the ASA ruled that the campaign did not breach advertising standards as the fifth tweet (and as such, the entire campaign) was clearly identifiable as an advert.

However, Nike was less successful in June 2012. Soccer players Wayne Rooney and Jack Wilshere tweeted “My resolution – to start the year as a champion, and to finish it as a champion… #makeitcount gonike.me/makeitcount.” While the ASA agreed that the tweets were obviously marketing communications, the reference to the Nike brand was not sufficiently prominent. The tweets also lacked #spon or #ad to signify advertising. As it was not sufficiently clear to all readers that the tweets were part of a marketing campaign, the advertisement was banned.

User-Generated Content. Companies also need to be wary when using user-generated content to promote their brands. For example, companies may be deemed to be advertising if they: (i) provide a link to a user blog that includes positive comments, (ii) re-tweet positive tweets from users, or (iii) allow users to post comments on the company website. To ensure that such content is responsible, accurate and not misleading, harmful or offensive, companies should monitor user-generated content to ensure that the content is appropriate for the likely audience and preserve documentary evidence to substantiate any claims.

Advergames. Advergames are online video games that are created in order to promote a brand, product or organization by immersing a marketing message within the game. In May 2012, the ASA published guidance that made clear that advergames will be considered advertising and are subject to the CAP Code. For further discussion on advergames, please see our previous blog post: What Are the Rules of the Advergame in the UK?


The key message for organizations that want to use social media in their marketing campaigns is to treat consumers fairly and to be upfront and transparent. But good practice isn’t just about legal compliance; it will also help maintain consumers’ respect for and trust in your brand. If your social media campaign hits the headlines, you want it to be for all of the right reasons.


Status Updates: Court nixes VPPA claim; lawyer suspended over blog posts; Facebook ‘unfriending’ cited in bullying decision

Posted in Cyberbullying, Ethics, Litigation, Privacy

Tale of the tape. The Video Privacy Protection Act (VPPA), which requires video service providers to destroy personally identifiable information after a specified time, doesn’t provide a private right of action for plaintiffs whose information was retained beyond that period. So held the U.S. Court of Appeals for the Ninth Circuit in Rodriguez v. Sony, a case in which the plaintiff, Daniel Rodriguez, claimed that two Sony companies violated the act by retaining, beyond the act’s statutory limits, information relating to movies he had rented and purchased. Citing prior decisions by the Sixth Circuit and the Seventh Circuit, the court in Rodriguez held that the VPPA provides a private right of action only for prohibited disclosure of personal information, not for prohibited retention of personal information. Rodriguez also claimed that Sony had violated the disclosure provisions of the VPPA because the company “shared, sold, and/or transferred” his personal information to Sony Network after Sony Network “took over the [Playstation Network].” But the Ninth Circuit upheld the dismissal of this claim as well, holding that it fell within the VPPA’s exemption for disclosures “incident to the ordinary course of business.” The Rodriguez v. Sony opinion marks the second time in two months that the Ninth Circuit has rejected a plaintiff’s attempt to recover under the VPPA. The earlier decision affirmed a district court’s conclusion that Netflix did not violate the act by permitting certain disclosures about subscribers’ viewing history to subscribers’ family, friends and guests.

Discipline and punish. The Illinois Supreme Court has suspended for three years a Chicago attorney who wrote blog posts that, according to a report by the Illinois attorney disciplinary board that originally reviewed the matter in 2014, impugned “the integrity of certain judges, guardians ad litem and the lawyers involved in a case in the Probate Court of Cook County.” The lawyer/blogger, JoAnne Marie Denison, wrote the posts that landed her in hot water following her representation of a 90-year-old woman in guardianship proceedings. In 2014, the board concluded that Denison—whose posts referenced a “feeding frenzy” of lawyers, a “classic case of corruption” and a court “being spoonfed BS law by atty miscreants”—had violated several rules, including one that prohibits lawyers from making “a statement that the lawyer knows to be false or with reckless disregard as to its truth or falsity concerning the qualifications or integrity of a judge, adjudicatory officer or public legal officer.” The board nevertheless didn’t conclude that Denison’s actions warranted disbarment, writing that she “genuinely, though unreasonably, believed something was wrong with the proceedings in the … case,” and did not seem to be motivated by self-interest. Arguing that the decision violated her First Amendment rights, Denison then appealed to a review board, which upheld her suspension. The Illinois Supreme Court finally cemented Denison’s suspension on September 21, 2015.

Why can’t we be friends? An Australian tribunal charged with employee dispute resolution cited a Tasmanian sales administrator’s decision to unfriend her colleague on Facebook in its finding that the sales administrator bullied her colleague, a real estate agent, in the workplace. The deputy president of the tribunal, Australia’s Fair Work Commission, said the act of unfriending was exemplary of a “lack of emotional maturity.” Legal experts interviewed by the Australian media emphasized that the sales administrator’s alienation of her colleague on Facebook was just one of many incidents of hostile behavior and that unfriending a colleague on Facebook does not, on its own, amount to workplace bullying. But they also said the decision was illustrative of the need for companies to have clear social media policies.

FTC Continues Enforcing Ad Disclosure Obligations in New Media and Issues a Warning to Advertisers

Posted in Compliance, FTC, Online Promotions, Online Reviews

In December 2014, we noted that the Federal Trade Commission’s (FTC) settlement with advertising firm Deutsch LA, Inc. was a clear signal to companies that advertise through social media that they need to comply with the disclosure requirements of Section 5 of the FTC Act. On September 2, 2015, the FTC announced a settlement along the same lines with Machinima, Inc., a company promoting the Xbox One system. This new action indicates that the FTC is serious about enforcing compliance in this space, so companies need to make sure that their advertising and marketing partners understand their obligations under Section 5.

A Quick Refresher on Online Advertising Disclosure Requirements

As we explained in our previous alert, the FTC’s Endorsement Guides describe how advertisers using endorsements can avoid liability under Section 5 for unfair or deceptive acts or practices. Simply put, a customer endorsement must be from an actual, bona fide user of the product or service and, if there is any material connection between the endorser and the advertiser that consumers would not reasonably expect but that would affect the weight given to the endorsement—such as payment or an employment relationship—then that connection must be clearly and conspicuously disclosed.

According to the complaint in In re Machinima, Machinima paid video bloggers (“influencers”) to promote Microsoft’s Xbox One system by producing and uploading to YouTube videos of themselves playing Xbox One games. Machinima did not require any disclosure of the compensation the influencers received, and many videos lacked any such disclosure. The FTC alleged that the payments would not be reasonably expected by YouTube viewers, such that the failure to disclose them was deceptive in violation of Section 5. In light of the Deutsch LA case, which dealt with endorsements on Twitter that did not include proper disclosures, In re Machinima seems uncontroversial. But what makes the case interesting is how close Microsoft came to being swept up in it.

Microsoft Escapes Liability, Narrowly

The FTC also issued a closing letter reflecting that it had investigated Microsoft, and Microsoft’s advertising agency Starcom, in relation to influencers’ videos. (Starcom managed the relationship with Machinima.) Even though the FTC did not ultimately take action against Microsoft (or Starcom), the closing letter is significant because it makes clear the FTC’s position that a company whose products are promoted bears responsibility for the actions of its ad agencies—as well as the actions of those engaged by its ad agencies.

According to the closing letter, Microsoft avoided an enforcement action because it had a “robust” compliance program in place that included specific guidance relating to the FTC’s Endorsement Guides and because Microsoft made training relating to the Endorsement Guides available to employees, vendors and personnel at Starcom. Furthermore, Microsoft and Starcom adopted additional safeguards regarding sponsored endorsements and took swift action to require Machinima to insert disclosures into the offending videos.

Given advertisers’ increased reliance on social media campaigns, the Machinima case provides both a clear warning and clear guidance to companies on how to minimize the risk of a Section 5 enforcement action. Paid endorsements must be disclosed, regardless of the medium in which they appear, and advertisers should also seriously consider putting in place specific policies and procedures that address the FTC’s Endorsement Guides—and that ensure their ad agencies and other involved parties comply with them.

Federal District Court Strikes Down Law That Bans Ballot Selfies

Posted in First Amendment, Litigation

The U.S. District Court for the District of New Hampshire recently struck down on First Amendment grounds a 2014 amendment to New Hampshire Revised Statute 659:35 that made it illegal for New Hampshire voters to post pictures of their completed ballots to social media. While several states have laws that disallow ballot sharing, RSA 659:35 was the first “to explicitly ban voters from sharing their marked ballots on social media.”

The case, Rideout v. Gardner, was filed by the New Hampshire Civil Liberties Union on behalf of three voters who were being investigated by the New Hampshire Attorney General’s Office for violating the law banning ballot selfies during the September 2014 Republican primary elections.

The court first determined that RSA 659:35 was a content-based restriction on speech because it necessarily required regulators to “examine the content of the speech to determine whether it includes impermissible subject matter”—i.e., photographs of completed ballots.

The court then held that the statute could not meet the strict scrutiny standard that applies to content-based restrictions on speech, “which requires the Government to prove that the restriction furthers a compelling interest and is narrowly tailored to achieve that interest.”

Paraphrasing a 2011 Supreme Court case, the Rideout court noted that, “[f]or an interest to be sufficiently compelling, the state must demonstrate that it addresses an actual problem.” The state had argued that the law was needed to prevent vote buying and voter intimidation, but the court was not convinced. In fact, the plaintiffs had produced evidence that vote buying had not been the subject of a single prosecution or complaint in New Hampshire since 1976.

The court also noted that the state had “failed to identify a single instance anywhere in the United States in which a credible claim has been made that digital or photographic images of completed ballots have been used to facilitate vote buying or voter coercion.”

Finally, the court held that the law was not sufficiently narrowly tailored, because the “few who might be drawn into efforts to buy or coerce their votes are highly unlikely to broadcast their intentions via social media.” Thus, investigations for violation of RSA 659:35 “will naturally tend to focus on the low-hanging fruit of innocent voters who simply want the world to know how they have voted for entirely legitimate reasons” and will likely “punish only the innocent while leaving actual participants in vote buying and voter coercion schemes unscathed.”

Moreover, the state had an obvious, less-restrictive alternative: “[I]t can simply make it unlawful to use an image of a completed ballot in connection with vote buying and voter coercion schemes.”

Your Votes Can Help Us Share Our Expertise at SXSW Interactive 2016!

Posted in Event

Our managing editors John Delaney and Aaron Rubin will be attending SXSW Interactive on March 11th through 16th, 2016. In connection with the event, John and Aaron have proposed two presentations based on topics that have been covered on this blog: Key Moments in Social Media Law and The Grand Unifying Theory of Today’s Tech Trends.

Socially Aware readers can help to ensure that these two topics end up on the SXSW Interactive agenda by voting for the presentations. To vote, simply click on the links provided above and create an account.

Voting is free and open to everyone—not just prospective SXSW Interactive attendees. But hurry! The polls close at 11:59 PM CDT on Friday, September 4th.

Also, if you plan to attend SXSW Interactive next year, please let us know—we’d love to get together with you in Austin.

The Top Social Media Platforms’ Efforts To Control Cyber-Harassment

Posted in Cyberbullying, First Amendment, Terms of Use

Social networking platforms have long faced the difficult task of balancing the desire to promote freedom of expression with the need to prevent abuse and harassment on their sites. One of social media’s greatest challenges is to make platforms safe enough so users are not constantly bombarded with offensive content and threats (a recent Pew Research Center study reported that 40% of Internet users have experienced harassment), yet open enough to foster discussion of complex, and sometimes controversial, topics.

This past year, certain companies have made some noteworthy changes. Perhaps most notably, Twitter, long known for its relatively permissive stance regarding content regulation, introduced automatic filtering and adopted stricter policies on threatening language. Also, Reddit, long known as the “wild wild west” of the Internet, released a controversial new anti‑harassment policy and took unprecedented proactive steps to regulate content by shutting down some of the site’s more controversial forums.

According to some, such changes came as a result of several recent, highly publicized instances of targeted threat campaigns on such platforms, such as “Gamergate,” a campaign against female gaming journalists organized and perpetrated over Twitter, Reddit and other social media platforms. Below we summarize how some of the major social networking platforms are addressing these difficult issues.

Facebook
Facebook’s anti-harassment policy and community standards have remained relatively stable over time. However, in March 2015, Facebook released a redesign of its Community Standards page in order to better explain its policies and make it easier to navigate. This was largely a cosmetic change.

According to Monika Bickert, Facebook’s head of global policy management, “We’re just trying to explain what we do more clearly.”

The rules of conduct are now grouped into the following four categories:

  1. “Helping to keep you safe” details the prohibition of bullying and harassment, direct threats, criminal activity, etc.
  2. “Encouraging respectful behavior” discusses the prohibition of nudity, hate speech and graphic content.
  3. “Keeping your account and personal information secure” lays out Facebook’s policy on fraud and spam.
  4. “Protecting your intellectual property” encourages users to only post content to which they own the rights.

Instagram
After a series of highly publicized censorship battles, Instagram updated its community standards page in April 2015 to clarify its policies. These more-detailed standards for appropriate images posted to the site are aimed at curbing nudity, pornography and harassment.

According to Nicky Jackson Colaco, director of public policy, “In the old guidelines, we would say ‘don’t be mean.’ Now we’re actively saying you can’t harass people. The language is just stronger.”

The old guidelines comprised a relatively simple list of do’s and don’ts—for example, the policy regarding abuse and harassment fell under Don’t #5: “Don’t be rude.” The new guidelines, by contrast, are much more fleshed out. They clearly state, “By using Instagram, you agree to these guidelines and our Terms of Use. We’re committed to these guidelines and we hope you are too. Overstepping these boundaries may result in a disabled account.”

According to Jackson Colaco, there was no one incident that triggered Instagram’s decision. Rather, the changes were catalyzed by continuous user complaints and confusion regarding the lack of clarity in content regulation. In policing content, Instagram has always relied on users to flag inappropriate content rather than actively patrolling the site for offensive material.

The language of the new guidelines now details several explicit rules, including the following:

  1. Nudity. Images of nudity and of an explicitly sexual nature are prohibited. However, Instagram makes an exception for “photos of post‑mastectomy scarring and women actively breastfeeding.”
  2. Illegal activity. Offering sexual services, buying or selling drugs (as well as promoting recreational use) is prohibited. There is a zero-tolerance policy for sexual images of minors and revenge porn (including threats of posting revenge porn).
  3. Harassment. “We remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages…We carefully review reports of threats and consider many things when determining whether a threat is credible.”

Twitter
Twitter has made two major rounds of changes to its content regulation policies in the past year. These changes are especially salient given the fact that Twitter has previously been fairly permissive regarding content regulation.

In December 2014, Twitter announced a set of new tools to help users deal with harassment and unwanted messages. These tools allow users to more easily flag abuse and describe their reasons for blocking or reporting a Twitter account in more specific terms. While in the past Twitter had allowed users to report spam, the new tools allow users to report harassment, impersonations, self‑harm, suicide and, perhaps most interestingly, harassment on behalf of others.

Within “harassment,” Twitter allows the user to report multiple categories: “being disrespectful or offensive,” “harassing me” or “threatening violence or physical harm.” The new tools have also been designed to be more mobile-friendly.

Twitter also released a new blocked accounts page during this round of changes. This feature allows users to more easily manage the list of Twitter accounts they have blocked (rather than relying on third-party apps, as many did before). The company also changed how the blocking system operates. Before, blocked users could still tweet and respond to the blocker; they simply could not follow the blocker. Now, blocked accounts will not be able to view the profile of the blocker at all.

In April 2015, Twitter further cracked down on abuse and unveiled a new filter designed to automatically prevent users from seeing harassing and violent messages. For the first time, all users’ notifications will be filtered for abusive content. This change came shortly after an internal memo from CEO Dick Costolo leaked, in which he remarked, “We suck at dealing with abuse and trolls on the platform, and we’ve sucked at it for years.”

The new filter will be automatically turned on for all users and cannot be turned off. According to Shreyas Doshi, head of product management, “This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of the Tweet to other content that our safety team has in the past independently determined to be abusive.”

Beyond the filter, Twitter also made two changes to its harassment policies. First, the rules against threatening language have been strengthened. While “direct, specific threats of violence against others” were always banned, that prohibition is now much broader and includes “threats of violence against others or promot[ing] violence against others.”

Second, users who breach the policies will now face heavier sanctions. Previously, the only options were to either ban an account completely or take no action (resulting in much of the threatening language not being sanctioned at all). Now, Twitter will begin to impose temporary suspensions for users who violate the rules but whose violation does not warrant a full ban.

Moreover, since Costolo’s statements, Twitter has tripled the size of its team handling abuse reports and added rules prohibiting revenge porn.

Reddit
In March 2015, Reddit prohibited the posting of several types of content, including anything copyrighted or confidential, violent personalized images and unauthorized photos or videos of nude or sexually excited subjects.

Two months later, Reddit unveiled a controversial new anti-harassment policy that represented a significant shift from Reddit’s long‑time reputation as an online free-for-all. The company announced that it was updating its policies to explicitly ban harassment against users. Some found this move surprising, given Reddit’s laissez-faire reputation and the wide range of subject matter and tone it had previously allowed to proliferate on its site (for example, Reddit only expressly banned sexually explicit content involving minors three years ago after much negative PR).

In a blog post titled “promote ideas, protect people,” Reddit announced it would be prohibiting “attacks and harassment of individuals” through the platform. According to Reddit’s former CEO Ellen Pao, “We’ve heard a lot of complaints and found that even our existing users were unhappy with the content on the site.”

In March 2015, Reddit also moved to ban the posting of nude photos without the subjects’ consent (i.e., revenge porn). In discussing the changes in content regulation, Alexis Ohanian, executive chairman, said, “Revenge porn didn’t exist in 2005. Smartphones didn’t really exist in 2005…we’re taking the standards we had 10 years ago and bringing them up to speed for 2015.” Interestingly, rather than actively policing the site, Reddit will rely on members to report offensive material to moderators.

Reddit’s new policy defines harassment as “systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that Reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them.”

As a result of the new policies, Reddit permanently removed five subreddits (forums) from the site: two dedicated to fat-shaming, one to racism, one to transphobia and one to harassing members of a progressive website. Apart from the expected criticisms of censorship, some commentators have condemned Reddit for the seemingly random selection of these specific subreddits. Even though these subreddits have been removed, many other offensive subreddits remain, including a violently anti-black subreddit and one dedicated to suggestive pictures of minors.

Google
In June 2015, Google took a major step in the battle against revenge porn, a form of online harassment that involves publishing private, sexually explicit photos of someone without that person’s consent. Adding to the damage, such photos may appear in Google search results for the person’s name. Google has now announced that it will remove such images from search results when the subject of the photo requests it.

Amit Singhal, senior vice president of Google Search, stated, “This is a narrow and limited policy, similar to how we treat removal requests for other highly sensitive personal information, such as bank account numbers and signatures, that may surface in our search results.” Some have questioned, though, why it took so long for Google to treat private sexual information similarly to other private information.

As social media grows up and becomes firmly ensconced in the mainstream, it is not surprising to see the major players striving to make their platforms safer and more comfortable for the majority of users. It will be interesting, though, to watch as the industry continues to wrestle with the challenge of instituting these new standards without overly restricting the free flow of content and ideas that made social media so appealing in the first place.

Status Updates: Appeals court upholds anti-cyberbullying law; better marketing through neural networks; restaurant owner turns the tables on Yelp critic

Posted in Cyberbullying, Defamation, First Amendment, Marketing, Section 230 Safe Harbor, Status Updates

Cruel intentions. Laws seeking to regulate speech on the Internet must be narrowly drafted to avoid running afoul of the First Amendment, and limiting such a law’s applicability to intentional attempts to cause damage usually improves the law’s odds of meeting that requirement. Illustrating the importance of intent in free speech cases, an anti-revenge-porn law in Arizona was recently scrapped, in part because it applied to people who posted nude photos to the Internet irrespective of the poster’s intent. Now, a North Carolina Court of Appeals has held that an anti-cyberbullying law is constitutional because it, among other things, only prohibits posts to online networks that are made with “the intent to intimidate or torment a minor.” The court issued the holding in a lawsuit brought by a 19-year-old who was placed on 48 months’ probation and ordered to stay off social media websites for a year for having contributed to abusive social media posts that targeted one of his classmates. The teen’s suit alleged that the law he was convicted of violating, N.C. Gen. Stat. §14-458.1, is overbroad and unconstitutional. Upholding his conviction, the North Carolina Court of Appeals held, “It was not the content of Defendant’s Facebook comments that led to his conviction of cyberbullying. Rather, his specific intent to use those comments and the Internet as instrumentalities to intimidate or torment (a student) resulted in a jury finding him guilty under the Cyberbullying Statute.”

Positive I.D. The tech world recently took a giant step forward in the quest to create computers that accurately mimic human sensory and thought processes, thanks to Fei-Fei Li and Andrej Karpathy of the Stanford Artificial Intelligence Laboratory. The pair developed a program that identifies not just the subjects of a photo, but the action taking place in the image. Called NeuralTalk, the software captioned a picture of a man in a black shirt playing guitar, for example, as “man in black shirt is playing guitar,” according to The Verge. The program isn’t perfect, the publication reports, but it’s often correct and is sometimes “unnervingly accurate.” Potential applications for artificial “neural networks” like Li’s obviously include giving users the ability to search, using natural language, through image repositories both public and private (think “photo of Bobby getting his diploma at Yale”). But the technology could also be used in potentially life-saving ways, such as in cars that can warn drivers of potential hazards like potholes. And, of course, such neural networks would be incredibly valuable to marketers, allowing them to identify potential consumers of, say, sports equipment by searching through photos posted to social media for people using products in that category. As we discussed in a recent blog post, the explosive growth of the Internet of Things, wearables, big data analytics and other hot new technologies is being fueled at least in part by marketing uses—are artificial neural networks the next big thing to be embraced by marketers?

A dish best served cold. Restaurants and other service providers are often without effective legal recourse against Yelp and other “user review” websites when they’re faced with negative—even defamatory—online reviews because Section 230 of the Communications Decency Act (CDA), 47 U.S.C. § 230, insulates website operators from liability for content created by users (though there are, of course, exceptions). That didn’t stop the owner of KC’s Rib Shack in Manchester, New Hampshire, from exacting revenge, however, when an attendee of a 20-person birthday celebration at his restaurant wrote a scathing review on Yelp and Facebook admonishing the owner for approaching the party’s table “and very RUDELY [telling the diners] to keep quiet [since] others were trying to eat.” The review included “#boycott” and some expletives. In response, the restaurant’s owner, Kevin Cornish, replied to the self-identified disgruntled diner’s rant with his own review—of her singing. Cornish reminded the review writer that his establishment is “a family restaurant, not a bar,” and wrote, “I realize you felt as though everybody in the entire restaurant was rejoicing in the painful rendition of Bohemian Rhapsody you and your self-entitled friends were performing, yet that was not the case.” He encouraged her to continue her “social media crusade,” adding the hashtag #IDon’tNeedInconsiderateCustomers. Cornish’s retort has so far garnered close to 4,000 Facebook likes and has been shared on Facebook more than 400 times.

“Notes” Update Shows Facebook’s Continued Efforts to Increase Already Impressive User Engagement

Posted in Marketing

As the number of social media platforms continues to grow, users’ online activity is becoming increasingly divided, requiring social media companies to prove to potential advertisers that they not only have a lot of registered users, but that those users are engaged and spending a lot of time on their platforms.

Having accumulated nearly 230 billion minutes of user-time, Facebook is several lengths ahead of the competition in the user engagement race; its users have spent 18x more time on the platform than users of the next-biggest social network, Instagram (which, of course, is owned by Facebook). Despite its clear lead, Facebook seems to be keeping user engagement at the top of its priority list, introducing features that reduce its users’ need to access resources outside the Facebook ecosystem.

Take, for example, Facebook’s introduction of “native video.” Native videos are videos that are posted directly to Facebook rather than first being uploaded to another site such as YouTube and then shared on Facebook as links. Native videos on Facebook have been shown to significantly outperform videos shared on Facebook from other sites in terms of engagement.

A Facebook feature known as auto-play further increases user engagement by ensuring that Facebook native videos—and only Facebook native videos—automatically play as users scroll down their newsfeeds. After one quarter with auto-play in place, Facebook experienced a 58% increase in engagement.

Now, by testing an update of its “Notes” feature, Facebook may be indicating a desire to keep its users from venturing off the platform to use third-party blogging platforms and personal websites, too.

Before 2011, when Facebook statuses were limited to 500 characters, the Notes feature allowed Facebook users to create longer posts that, like their photo albums and favorite book choices, would always be attached to their profiles. Since Facebook has significantly loosened up its character limits, the purpose of Notes has been unclear.

But Facebook recently updated Notes to allow users to create posts with a more sophisticated look and an accompanying picture. The updated Notes was described by a Facebook spokesperson as the company’s attempt “to make it easier for people to create and read longer-form stories on Facebook.” Some social media industry observers have suggested that this update is intended to provide users with an alternative to Medium, a blogging platform favored by those in the technology and media industries.

“But that might be too early an assessment,” writes Motherboard’s Clinton Nguyen, “as [the new Notes feature is] a work in progress, the revamp is only available for a handful of users.”

Nguyen is right; it’s too early to tell whether social media enthusiasts will want to create and read lengthy personal essays on Facebook. One thing is for sure, however: Facebook is not letting up on its efforts to remain the user-engagement king.

Social Media E-Discovery: Are Your Facebook Posts Discoverable in Civil Litigation?

Posted in Discovery, E-Discovery, Litigation

Judge Richard J. Walsh began his opinion in Largent v. Reed with the following question: “What if the people in your life want to use your Facebook posts against you in a civil lawsuit?” With the explosive growth of social media, judges have had to confront this question more and more frequently. The answer to this question is something you’ll hear quite often from lawyers: “It depends.”

Courts generally have held that there can be no reasonable expectation of privacy in your profile when Facebook’s homepage informs you that “Facebook helps you connect and share with the people in your life.” Even when you decide to limit who can see your photos or read your status updates, that information still may be discoverable if you’ve posted a picture or updated a status that is relevant to a lawsuit in which you’re involved. The issue, then, is whether the party seeking access to your social media profile has a legitimate basis for doing so.

If you’ve updated your Facebook status to brag about your awesome new workout routine after claiming serious and permanent physical injuries sustained in a car accident—yes, that information is relevant to a lawsuit arising from that accident and will be discoverable. The plaintiff in Largent v. Reed learned that lesson the hard way when she did just that and the court ordered her to turn over her Facebook log-in information to the defense counsel. On the other hand, your Facebook profile will not be discoverable simply because your adversary decides he or she wants to go on a fishing expedition through the last eight years of your digital life.

Courts in many jurisdictions have applied the same standard to decide whether a litigant’s Facebook posts will be discoverable: The party seeking your posts must show that the requested information may reasonably lead to the discovery of admissible evidence.

For example, the plaintiff in Zimmerman v. Weis Markets, Inc. claimed that he suffered permanent injuries while operating a forklift—and then went on to post that his interests included “ridin” and “bike stunts” on the public portion of his Facebook page. The court determined that his public posts placed the legitimacy of his damages claims in controversy and that his privacy interests did not outweigh the discovery requests.

In contrast, the plaintiff in Tompkins v. Detroit Metropolitan Airport, a slip-and-fall case, claimed back injuries in connection with an accident at the Detroit Metropolitan Airport. The defendant checked the plaintiff’s publicly available Facebook photos (i.e., photos not subject to any of Facebook’s available privacy settings or restrictions) and stumbled upon photos of the plaintiff holding a small dog and pushing a shopping cart. The court determined that these photos were in no way inconsistent with the plaintiff’s injury claims, stating that if “the Plaintiff’s public Facebook page contained pictures of her playing golf or riding horseback, Defendant might have a stronger argument for delving into the nonpublic section of her account.”

The Tompkins court recognized that the plaintiff’s information was not discoverable because parties do not “have a generalized right to rummage at will through information” a person has posted. Indeed, the defendants sought the production of the plaintiff’s entire Facebook account. Their overbroad and overreaching discovery request was—and is—common among parties seeking access to their opponents’ Facebook data.

In response to these overbroad requests, courts routinely deny motions to compel the production of a person’s entire Facebook profile because such requests are nothing more than fishing expeditions seeking what might be relevant information. As the court in Potts v. Dollar Tree Stores, Inc. stated, the defendant seeking Facebook data must at least “make a threshold showing that publicly available information on [Facebook] undermines the Plaintiff’s claims.”

The Tompkins and Potts decisions mark important developments in Facebook e-discovery cases. They establish that a person’s entire Facebook profile is not discoverable merely because a portion of that profile is public. In turn, Facebook’s privacy settings can provide at least some protection against discovery requests—assuming that the user has taken care not to publicly display photos that blatantly contradict his or her legal claims.

When it is shown that a party’s Facebook history should be discoverable, however, the party must make sure not to tamper with that history. Deactivating your Facebook account to hide evidence can invite the ire of the court. Deleting your account outright can even result in sanctions. The takeaway is that courts treat social media data no differently than any other type of electronically stored information; what you share with friends online may also be something you share with your adversary—and even the court.