
Socially Aware Blog

The Law and Business of Social Media

Social Links: Snapchat ad revenue grows; the UK’s revenge porn problem; laws that enable control of digital assets after death

Posted in Advertising, Cyberbullying, Free Speech, Marketing, Privacy

Snapchat is on track to rake in an enormous amount of ad revenue by 2017.

Also, there’s mounting evidence that the company is working toward developing a Google Glass-like product.

We have written previously about the scourge of revenge porn; it turns out the UK has a serious revenge porn problem, too.

A new law in Illinois requires social media sites to give their users the opportunity to name a beneficiary who can access their accounts if they die. Only a few other U.S. states have laws that similarly protect social media users’ digital assets.

Baltimore police use Geofeedia to monitor citizens’ social media posts, raising concerns among civil libertarians.

Now you can see when someone reads the direct message you sent on Twitter (unless, of course, the recipient disables read receipts).

According to a new study, positive comments from your friends on Facebook can bring you as much happiness as having children. Those results don’t necessarily contradict earlier studies, which found that social media users became depressed when they consumed a lot of content passively.

Are hashtags actually hurting your Twitter marketing campaigns?

Pinterest’s president predicts that media publishers eventually won’t care whether their content gets consumed on their own companies’ websites or within partner apps.

A new chatbot called Yala examines users’ time zones, social media histories and other factors to determine the most effective times to post to social media.

Will brands eventually have virtual spaces where consumers can test drive products or try on clothes?

App Developer Not Liable Under TCPA For User-Initiated Texts

Posted in Litigation, Mobile

A recent decision out of the Northern District of California brings good news for developers of mobile apps that incorporate text-messaging functions. Those functions may create the risk of claims under the Telephone Consumer Protection Act (TCPA), which generally prohibits the delivery of a text message without the recipient’s express consent. But in Cour v. Life360, Inc., U.S. District Judge Thelton E. Henderson granted defendant Life360’s motion to dismiss a putative TCPA class action after determining that Life360 could not be held liable under the TCPA for a text initiated by a user of Life360’s messaging and geolocation application.

Background

The plaintiff alleged that he received a single, unsolicited text message from Life360, which operates a mobile application that allows users to text and see the location of fellow users on their contact lists. According to the plaintiff, after users download the application and set up an account, the application requests access to their contact lists so they can invite their friends and family to join. Users choose the contacts they wish to invite and then press an “Invite” button on the screen to send the invitations via text message. Users are not told how or when those invitations will be sent.

The plaintiff filed claims under the TCPA and California’s Unfair Competition Law (UCL) on behalf of himself and a nationwide class of persons who received at least one text message from or on behalf of Life360. Life360 moved to dismiss both claims.

One Text Sufficient to Confer Standing Under Spokeo

Life360 first argued that the plaintiff lacked Article III standing because he failed to allege a concrete injury, as required under the U.S. Supreme Court’s decision in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016). But the Court rejected that argument, holding that even though the plaintiff received only one text, the invasion of privacy it caused was sufficiently concrete to confer standing.

Life360 Not Liable Under the TCPA or UCL

The key disagreement between the parties was whether Life360 or its user was responsible for “initiating” the invitational text message sent to the plaintiff. Relying on guidance from the Federal Communications Commission’s July 2015 declaratory ruling, the Court ruled that the user—and not Life360—initiated the text to plaintiff, and thus Life360 could not be held liable.

The Court reasoned that Life360’s users have to affirmatively choose which of their contacts will receive an invitation and then press the “Invite” button to actually send the invitations. Even though Life360 does not inform its users how or when those invitations will be transmitted, given the TCPA’s purpose of preventing invasions of privacy, “the person who chooses to send an unwanted invitation is responsible for invading the recipient’s privacy even if that person does not know how the invitation will be sent.” Consequently, Life360 could not be held liable for the text message under either the TCPA or the UCL.

Takeaway

As this case demonstrates, to mitigate the risk of TCPA liability, developers of messaging software or applications should ensure that any text messages sent through their platforms are initiated by the users themselves through their affirmative conduct.
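
To illustrate, here is a minimal TypeScript sketch of an invite feature structured so that a text is sent only through a user’s affirmative conduct. The Contact type and SmsGateway interface are hypothetical constructs of our own, not Life360’s actual code.

```typescript
// Hypothetical sketch: invitation texts are triggered only by the user's
// affirmative conduct -- an explicit selection plus a tap on "Invite."
type Contact = { name: string; phone: string };

// Abstraction over whatever SMS delivery mechanism the app uses (assumed).
interface SmsGateway {
  send(to: string, body: string): Promise<void>;
}

class InviteService {
  constructor(private sms: SmsGateway) {}

  // Intended to be called only from the "Invite" button's tap handler,
  // and only with contacts the user affirmatively checked off.
  async sendInvites(selected: Contact[], message: string): Promise<void> {
    if (selected.length === 0) {
      return; // no affirmative selection means no texts are sent
    }
    for (const contact of selected) {
      await this.sms.send(contact.phone, message);
    }
  }
}
```

The design point tracks the court’s reasoning: the user, not the app, chooses the recipients and triggers the send.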

*          *          *

For more on the Telephone Consumer Protection Act’s application to text messages, see FCC Rules That Opt-Out Confirmation Text Messages Do Not Violate the TCPA; G2G, Yo Quiero TB: Taco Bell Found Not Liable for Franchisee Text Message Campaign; Face Off: Consumer Sues Hockey Team Over Text Messages. For more on the TCPA in general, see FCC Clarifies Its Interpretations of the Telephone Consumer Protection Act, Provoking Strong Objections From the Business Community.


Social Links: Instagram’s & Pinterest’s new features; the per-post premium paid to top influencers; a successful social media investor shares his strategy

Posted in Advertising, First Amendment, Free Speech, Marketing, Mobile

Instagram now allows users to zoom in on photos in their feeds, and at least 11 brands are already capitalizing on the new feature.

Pinterest acquired Instapaper, a tool that allows you to cache webpages for reading at a later time.

A social-media celebrity with 500,000 followers and a lot of people interacting with his or her content could bring in how much for a single post?!

Snapchat’s first investor shares his secret for identifying the next big app.

SEC steps up scrutiny of investment advisers’ use of social media.

Social media is now younger audiences’ primary source of news, and it has understandably affected photojournalism.

Should social media companies establish guidelines for when they will—and will not—heed police officers’ requests to suspend suspects’ accounts?

Meet the officer behind the viral Facebook page of a small New England city’s police department.

Wondering whether you should hit “reply all” when someone has mistakenly included you on an email chain? The New York Times has one word for you.

Court Upholds Enforceability of “Clickwrap” Employee Agreement

Posted in Electronic Contracts


As we have previously discussed, if you want your electronic contracts to be enforceable, it is a best practice to require the counterparty to affirmatively accept the contract by checking a box or clicking a button. A recent New Jersey district court decision, ADP, LLC v. Lynch, reinforces this point. Such issues most often arise in the context of website terms of use, but ADP v. Lynch involved a non-competition provision and forum selection clause contained in documentation presented to employees electronically in connection with stock option grants.

The employer, ADP, sued two former employees for taking jobs at a competitor in violation of certain restrictive covenants contained in the stock option grant documentation. The employees sought to dismiss the action on the basis of lack of jurisdiction, and ADP responded by pointing to a forum selection clause in the grant documentation. The employees argued, however, that they had not received adequate notice of the restrictive covenants and that the forum selection clause was unenforceable.

The grant documentation containing the restrictive covenants and the forum selection clause had been presented to the employees in electronic form and, based on the allegations in ADP’s complaint, the employees were required to acknowledge the documentation in order to receive the stock option grants. Specifically, ADP had presented the documentation in such a way that each employee was physically unable to click the required “Accept Grant” button unless he or she had affirmatively checked a prior box indicating that he or she had read the associated documents containing the restrictive covenants and forum selection clause.

The court also noted that the manager of ADP’s stock plan services “provided a step-by-step rundown” of the process that employees were required to follow to accept stock option grants, and that, “in order to accept those awards, an employee would have to affirmatively acknowledge that he or she reviewed the Restrictive Covenants before proceeding.” This illustrates another point we have noted previously: if you want your electronic contracts to be enforceable, you should not only implement them in a way that requires affirmative acceptance, but also be prepared to produce evidence that the user at issue actually accepted.
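
For developers, a bare-bones version of this pattern might look like the following TypeScript sketch; the element IDs, endpoint and payload shape are illustrative assumptions, not ADP’s actual system.

```typescript
// Minimal "clickwrap" sketch: the accept button stays disabled until the
// user checks the box, and each acceptance is recorded as evidence.
const checkbox = document.getElementById('read-terms') as HTMLInputElement;
const acceptButton = document.getElementById('accept-grant') as HTMLButtonElement;
const currentUserId = 'employee-12345'; // assumed to come from the session

acceptButton.disabled = true; // cannot accept before checking the box

checkbox.addEventListener('change', () => {
  acceptButton.disabled = !checkbox.checked;
});

acceptButton.addEventListener('click', () => {
  // Persist who accepted, which version of the documents, and when,
  // so the acceptance can be proven later if challenged.
  void fetch('/api/acceptances', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      userId: currentUserId,
      documentVersion: 'grant-terms-v1', // hypothetical version label
      acceptedAt: new Date().toISOString(),
    }),
  });
});
```

Recording the who, what and when of each acceptance is what allows a company, like ADP here, to later produce evidence that the user actually accepted.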

In light of the above, the court analyzed the grant documentation containing the restrictive covenants and forum selection clause as an enforceable “clickwrap” contract similar to the website terms of use at issue in another case we have written about previously, Fteja v. Facebook, Inc.:

 “At this stage in the litigation, the Court finds that the forum selection clauses are encompassed by enforceable clickwrap agreements. The complaints unequivocally allege that an employee could not accept any stock grants until acknowledging that he or she reviewed all grant documents, including the Restrictive Covenants that contained the forum selection clauses. […] In order to accept those awards, an employee would have to affirmatively acknowledge that he or she reviewed the Restrictive Covenants before proceeding. […] Therefore, this case involves the type of clickwrap agreement that other courts have found to be enforceable.”

The court also found unpersuasive the employees’ argument that mutual assent was lacking because the acknowledgment box did not expressly state “I agree to the terms of the grant documents,” but instead merely required the employees to acknowledge that they had read those documents. According to the court, this was a “distinction without difference” because, in accepting the option grant, the defendants were required to represent as part of the grant agreements that they had read the restrictive covenant agreements.

Accordingly, as ADP sufficiently alleged that it had required the employees to affirmatively accept the restrictive covenants and forum selection clause as part of the electronic contracting process, the court denied the employees’ motion to dismiss.

While this case does not necessarily break new ground in terms of the enforceability of electronic contracts, it does illustrate that the same principle applies whether you are seeking to impose terms and conditions on users of your website or enforce restrictive covenants and a forum selection clause in an employment agreement: make sure the counterparty is required to take some clear and affirmative action to expressly accept the contract.

*          *          *

For more on what it takes for an online agreement to be enforceable, see Implementing and Enforcing Online Terms of Use; Three Steps to Help Ensure the Enforceability of Your Website’s Terms of Use; Clickwrap, Browsewrap and Mixed Media Contracts: A Few Words Can Go a Long Way; and Terms and Conditions Buried in Easily Ignored Scroll Box Don’t Cut It, the Seventh Circuit Holds.

Social Links: Google penalizes sites with pop-up ads; proposed Federal legislation to criminalize revenge porn; ad industry group questions Kardashians’ social media posts

Posted in Advertising, Employment Law, Endorsement Guides, Free Speech, FTC, Labor Law, Litigation, Marketing, Mobile, Privacy

Google is cracking down on mobile pop-up ads by knocking down the search-result position of websites that use them.

The National Labor Relations Board decided that a social media policy Chipotle had in place for its employees violated federal labor law.

A group of lawmakers plans to introduce legislation that would criminalize revenge porn—explicit images posted to the web without the consent of the subject—at the federal level.

The Truth in Advertising organization sent the Kardashians a letter threatening to report them for violating the FTC’s endorsement guides. This isn’t the first time the legality of the famous family’s social media posts has been called into question. If only Kim would read our influencer marketing blog posts.

According to one study, 68 percent of publishers use editorial staff to create native ads.

Twitter launched a button that a company can place on its website to allow users to send a direct message to the company’s Twitter inbox.

The Center for Democracy & Technology criticized the Department of Homeland Security’s proposal to ask visa-waiver-program applicants to disclose their social media account information.

UK lawmakers issued a report calling on the big social media companies to do more to purge their platforms of hate speech and material that incites violence.

Social media is playing a bigger role in jury selection, Arkansas prosecutors and criminal defense lawyers say.

A day in the life of The Economist’s head of social media.

Seven things smart entrepreneurs do on Instagram.

Four ways to get busy people to read the email you send them.

Want to know how Facebook views your political leanings? Here’s the way to find out.

Controversial California Court Decision Significantly Narrows a Crucial Liability Safe Harbor for Website Operators

Posted in Defamation, Online Reviews, Section 230 Safe Harbor


A recent California court decision involving Section 230 of the Communications Decency Act (CDA) is creating considerable concern among social media companies and other website operators.

As we’ve discussed in past blog posts, CDA Section 230 has played an essential role in the growth of the Internet by shielding website operators from defamation and other claims arising from content posted to their websites by others.

Under Section 230, a website operator is not “treated as the publisher or speaker of any information provided” by a user of that website; as a result, online businesses such as Facebook, Twitter and YouTube have been able to thrive despite hosting user-generated content on their platforms that may be false, deceptive or malicious and that, absent Section 230, might subject these and other Internet companies to crippling lawsuits.

Recently, however, the California Court of Appeal affirmed a lower court opinion that could significantly narrow the contours of Section 230 protection. After a law firm sued a former client for posting defamatory reviews on Yelp.com, the court not only ordered the former client to remove the reviews, but also ordered Yelp (which was not a party to the dispute) to remove them.

The case, Hassell v. Bird, began in 2013 when attorney Dawn Hassell sued former client Ava Bird regarding three negative reviews that Hassell claimed Bird had published on Yelp.com under different usernames. Hassell alleged that Bird had defamed her, and, after Bird failed to appear, the California trial court issued an order granting Hassell’s requested damages and injunctive relief.

In particular, the court ordered Bird to remove the offending posts, but Hassell further requested that the court require Yelp to remove the posts because Bird had not appeared in the case herself. The court agreed, entering a default judgment and ordering Yelp to remove the offending posts. (The trial court also ordered that any subsequent comments associated with Bird’s alleged usernames be removed, which the Court of Appeal struck down as an impermissible prior restraint.) Yelp challenged the order on a variety of grounds, including under Section 230.

The Court of Appeal held that the Section 230 safe harbor did not apply, and that Yelp could be forced to comply with the order. The court reasoned that the order requiring Yelp to remove the reviews did not impose any liability on Yelp; Yelp was not itself sued for defamation and had no damages exposure, so Yelp did not face liability as a speaker or publisher of third-party speech. Rather, citing California law that authorized a court to prevent the repetition of “statements that have been adjudged to be defamatory,” the court characterized the injunction as “simply” controlling “the perpetuation of judicially declared defamatory statements.” The court acknowledged that Yelp could face liability for failing to comply with the injunction, but that would be liability under the court’s contempt power, not liability as a speaker or publisher.

The Hassell case represents a significant setback for social media companies, bloggers and other website operators who rely on the Section 230 safe harbor to shield themselves from the misconduct of their users. While courts have previously held that a website operator may be liable for “contribut[ing] materially to the alleged illegality of the conduct”—such as StubHub.com allegedly suggesting and encouraging illegally high ticket resale prices—here, in contrast, there is no claim that Yelp contributed to or aided in the creation or publication of the defamatory reviews, besides merely providing the platform on which such reviews were hosted.

Of particular concern for online businesses is that Hassell appears to create an end-run around Section 230 for plaintiffs who seek to have allegedly defamatory or false user-generated content removed from a website—sue the suspected posting party and, if that party fails to appear, obtain a default judgment; with a default judgment in hand, seek a court order requiring the hosting website to remove the objectionable post, as the plaintiff was able to do in the Hassell case.

Commentators have observed that Hassell is one of a growing number of recent decisions seeking to curtail the scope of Section 230. After two decades of expansive applications of Section 230, are we now on the verge of a judicial backlash against the law that has helped to fuel the remarkable success of the U.S. Internet industry?


*          *          *

For other Socially Aware blog posts regarding the CDA Section 230 safe harbor, please see the following: How to Protect Your Company’s Social Media Currency; Google AdWords Decision Highlights Contours of the CDA Section 230 Safe Harbor; and A Dirty Job: TheDirty.com Cases Show the Limits of CDA Section 230.


Now Available: The August Issue of Our Socially Aware Newsletter

Posted in Cyberbullying, E-Commerce, Infographic, Marketing, Mobile, Privacy, Protected Speech, Right To Be Forgotten, Terms of Use

The latest issue of our Socially Aware newsletter is now available here.

In this issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we discuss the impact online trolls are having on social media marketing; we revisit whether hashtags should be afforded trademark protection; we explain how an unusual New Jersey law is disrupting the e-commerce industry and creating traps for the unwary; we explore the legal and business implications of the Pokémon Go craze; we examine a recent federal court decision likely to affect application of the Video Privacy Protection Act to mobile apps; we discuss a class action suit against an app developer that highlights the legal risks of transitioning app customers from one business model to another; and we describe how Europe’s Right to Be Forgotten has spread to Asia.

All this—plus infographics illustrating the enormous popularity of Pokémon Go and the unfortunate prevalence of online trolling.

Read our newsletter.

Social Links: Twitter offers anti-harassment tools; Pinterest takes on video ads; P&G changes its social strategy

Posted in Advertising, Cyberbullying, Disappearing Content, First Amendment, Litigation, Marketing

Twitter took steps to remedy its harassment problem.

In addition, over the last six months, Twitter suspended 235,000 accounts that promoted terrorism.

The Washington Post is using language-generation technology to automatically produce stories on the Olympics and the election.

Video ads are going to start popping up on Pinterest.

Does it make sense for big brands to invest in expensive, highly targeted social media advertising? Procter & Gamble doesn’t think so.

These brands are using Facebook in particularly effective ways during the Olympic games.

Since we first reported on the phenomenon nearly two years ago, Facebook has become an increasingly common vehicle for serving divorce papers.

Across the country, states are grappling with the conflict between existing laws that prohibit disclosing ballot information or images and the growing phenomenon of “ballot selfies”—photos posted to social media of people at the polls casting their ballots or of the ballots themselves.

Creating dozens of Facebook pages for a single brand can help marketers increase social media engagement and please the Facebook algorithm gods, according to Contently.

Here’s how Snapchat makes money from disappearing videos.

A Harvard Business Review article advises marketers to start listening to (as opposed to managing) conversations about their brands on social media.

For intel on what it can do to keep teens’ attention, Instagram goes straight to the source.

Social Links: Facebook’s anti-ad-blocking software; LinkedIn’s “scraper” lawsuit; FTC’s upcoming crackdown on social influencers

Posted in Advertising, Compliance, Cyberbullying, Data Security, Endorsement Guides, Free Speech, Litigation, Marketing, Mobile, Online Endorsements

Facebook introduced technology that disables ad blockers used by people who visit the platform via desktop computers, but Adblock Plus has already foiled the platform’s efforts, at least for now.

A look at Twitter’s 10-year failure to stop harassment.

Are mobile apps killing the web?

LinkedIn sues to shut down “scrapers.”

The FTC is planning to police social media influencers’ paid endorsements more strictly; hashtags like #ad may not be sufficient to avoid FTC scrutiny. Officials in the UK are cracking down on paid posts, too.

Dan Rather, Facebook anchorman.

The U.S. Olympic Committee sent letters to non-sponsoring companies warning them against posting about the games on their corporate social media accounts.

How IHOP keeps winning the love & affection of its 3.5 million Facebook fans.

A Canadian woman whose home was designated a Pokémon Go “stop” is suing the app’s creators for trespass and nuisance. We saw that coming.

There’s a website dedicated to helping Snapchat users fool their followers into thinking they’re out on the town.

Facebook has been wooing premium content owners, but TV companies are reportedly resisting.

PETA got a primatologist to submit an amicus curiae brief supporting its suit alleging that a monkey that took a selfie is entitled to the copyright in the image.

Commercializing User-Generated Content: Five Risk Reduction Strategies

Posted in Copyright, DMCA, IP, Terms of Use

We’re in the midst of a seismic shift in how companies interact with user-generated content (UGC).

For years, companies were happy simply to host UGC on their websites, blogs and social media pages and reap the resulting boost to their traffic numbers. And U.S. law—in the form of Section 512(c) of the Digital Millennium Copyright Act (DMCA)—accommodated this passive use of UGC by creating a safe harbor from copyright damages for websites, blogs and social media platform operators that hosted UGC posted without the authorization of the owners of the copyrights in such UGC, so long as such operators complied with the requirements of the safe harbor.

Increasingly, companies are no longer satisfied with passively hosting UGC. Rather, they now want to find creative ways to commercialize such content—by incorporating it into ads (including print, TV and other offline ads), creating new works based on such content and even selling such content. Yet, in moving beyond mere hosting to proactive exploitation of UGC, companies risk losing the benefit of the DMCA Section 512(c) safe harbor, which could result in potentially significant copyright liability exposure.

For example, if a company finds that users are posting potentially valuable UGC to the company’s Facebook page, or on Twitter in connection with one of the company’s hashtags, that company may want to make such UGC available on its own website. The DMCA Section 512(c) safe harbor, however, is unlikely to protect the company in copying such UGC from the Facebook or Twitter platform to its own website.

The reality is that any company seeking to monetize or otherwise exploit UGC needs to proceed with extreme caution. This is true for several reasons:

  • UGC can implicate a wide range of rights . . . As with any content, UGC is almost certainly subject to copyright protection, although certain Tweets and other short, text-only posts could potentially be exempt from copyright protection if they qualify as “short phrases” under the Copyright Act. If any individuals are identifiable in UGC, then rights of publicity and rights of privacy may also be relevant. In addition, UGC may contain visible third-party trademarks or comments that defame or invade the privacy of third parties.
  • . . . and a wide range of rightsholders. Notably, many of the rights necessary to exploit UGC are likely to be held by individuals and corporations other than the posting user. For example, unless a photo is a “selfie,” the photographer and the subject of the photo will be different individuals, with each holding different rights—copyright, for the photographer, and the rights of publicity and privacy, for the subject—that could be relevant to the exploitation of the photo. Moreover, any trademarks, logos and other images contained in a photo could potentially implicate third-party rightsholders, including third-party corporations. Videos also raise the possibility of unauthorized clips or embedded music.
  • If the UGC is hosted by a third-party social network, it may have Terms of Service that help—or hurt—efforts to exploit the UGC. Most social media networks collect broad rights to UGC from their users, although they differ substantially when it comes to passing those rights along to third parties interested in exploiting the content. For example, if a company uses Twitter’s Application Programming Interface (API) to identify and access Tweets that it would like to republish, then Twitter grants to that company a license to “copy a reasonable amount of and display” the Tweets on the company’s own services, subject to certain limitations. (For example, Twitter currently prohibits any display of Tweets that could imply an endorsement of a product or service, absent separate permission from the user.) Instagram also has an API that provides access to UGC, but, in contrast to Twitter, Instagram’s API terms do not appear to grant any license to the UGC and affirmatively require companies to “comply with any requirements or restrictions” imposed by Instagram users on their UGC. (A short sketch of API-based access appears just below this list.)
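
By way of example, a company pulling a Tweet for potential republication might retrieve it through Twitter’s v1.1 REST API along the lines of the following TypeScript sketch. The token handling and error policy are illustrative assumptions, and any use of the returned content remains subject to Twitter’s terms and the limitations noted above.

```typescript
// Hedged sketch of retrieving a single Tweet via Twitter's v1.1 REST API
// before republishing it; assumes an app-only bearer token is available
// in the environment.
const BEARER_TOKEN = process.env.TWITTER_BEARER_TOKEN ?? '';

async function fetchTweet(tweetId: string): Promise<unknown> {
  const response = await fetch(
    `https://api.twitter.com/1.1/statuses/show.json?id=${encodeURIComponent(tweetId)}`,
    { headers: { Authorization: `Bearer ${BEARER_TOKEN}` } },
  );
  if (!response.ok) {
    throw new Error(`Twitter API request failed: ${response.status}`);
  }
  return response.json(); // Tweet object, including its text and user metadata
}
```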

With these risks in mind, we note several emerging best practices for a company to consider if it has decided to exploit UGC in ways that may fall outside the scope of DMCA Section 512(c) and other online safe harbors. Although legal risk can never be eliminated in dealing with UGC, these strategies may help to reduce such risk:

  • Carefully review the Social Media Platform Terms. If the item of UGC at issue has been posted to a social media platform, determine whether the Terms of Service for such platform grants any rights to use such posted UGC off of the platform or imposes any restrictions on such content. Note, however, that any license to UGC granted by a social media platform almost certainly will not include any representations, warranties or indemnities, and so it may not offer any protection against third-party claims arising from the UGC at issue.
  • Seek Permission. If the social media platform’s governing terms don’t provide you with all of the rights needed to exploit the UGC item at issue (or even if they do), seek permission directly from the user who posted the item. Sophisticated brands will often approach a user via the commenting or private messaging features of the applicable social media platform, and will present him or her with a link to a short, user-friendly license agreement. Often, the user will be delighted by the brand’s interest in using his or her content. Of course, be aware that the party posting the content may not be the party that can authorize use of that content, as Agence France Presse learned the hard way in using photos taken from Twitter.
  • Make Available Terms and Conditions for “Promotional” Hashtags. If a company promotes a particular hashtag to its customers, and would like to use content that is posted in conjunction with the hashtag, the company could consider making available a short set of terms alongside its promotion of that hashtag. For example, in any communications promoting the existence of the hashtag and associated marketing campaign, the company could inform customers that their use of the hashtag will constitute permission for the company to use any content posted together with the hashtag. Such an approach could face significant enforceability issues—after all, it is essentially a form of “browsewrap” agreement—but it could provide the company with a potential defense in the event of a subsequent dispute.
  • Adopt a Curation Process. Adopt an internal curation process to identify items of UGC that are especially high risk, which could include videos, photos of celebrities, photos of children, professional-quality content, content containing copyright notices or watermarks, and content that is potentially defamatory, fraudulent or otherwise illegal. Ensure that the curators are trained and equipped with checklists and other materials approved by the company’s legal department or outside counsel. Ideally, any high-risk content should be subject to the company’s most stringent approach to obtaining permission and clearing rights—or perhaps avoided altogether. (A simple triage sketch follows this list.)
  • Adjust the Approach for High-Risk Uses. Consider the way in which the UGC at issue is expected to be used, and whether the company’s risk tolerance should be adjusted accordingly. For example, if an item of UGC will be used in a high-profile advertisement, the company may want to undertake independent diligence on any questionable aspects of the UGC, even after obtaining the posting user’s permission—or perhaps avoid any questionable UGC altogether.
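
As a rough illustration of the curation step described above, a triage helper might flag high-risk items along the following lines. The UgcItem shape is a hypothetical assumption of our own, and flagged items would still need review by trained curators and counsel.

```typescript
// Illustrative triage helper for a UGC curation workflow; the risk criteria
// mirror those listed above, and the data shape is a hypothetical assumption.
type UgcItem = {
  kind: 'photo' | 'video' | 'text';
  depictsMinors: boolean;
  depictsCelebrities: boolean;
  looksProfessional: boolean;
  hasWatermarkOrCopyrightNotice: boolean;
  flaggedAsPotentiallyIllegalContent: boolean; // defamatory, fraudulent, etc.
};

function isHighRisk(item: UgcItem): boolean {
  return (
    item.kind === 'video' ||
    item.depictsMinors ||
    item.depictsCelebrities ||
    item.looksProfessional ||
    item.hasWatermarkOrCopyrightNotice ||
    item.flaggedAsPotentiallyIllegalContent
  );
}
```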

In a social media age that values authenticity, more and more companies—even big, risk-averse Fortune 100 companies—are interested in finding ways to leverage UGC relevant to their businesses, products or services. Yet the shift from merely hosting UGC to actively exploiting it raises very real legal hurdles. The tips above are not a substitute for working closely with experienced social media counsel, but they collectively provide a framework for addressing the legal risks that arise from a company’s efforts to commercialize UGC.

*          *          *

For more on the issues related to user-generated content, see New Court Decision Highlights Potential Headache for Companies Hosting User-Generated Content; Court Holds That DMCA Safe Harbor Does Not Extend to Infringement Prior to Designation of Agent; and Thinking About Using Pictures Pulled From Twitter? Think Again, New York Court Warns.