
Socially Aware Blog

The Law and Business of Social Media

Controversial California Court Decision Significantly Narrows a Crucial Liability Safe Harbor for Website Operators

Posted in Defamation, Online Reviews, Section 230 Safe Harbor

"Unlike" on a screen. More>>

A recent California court decision involving Section 230 of the Communications Decency Act (CDA) is creating considerable concern among social media companies and other website operators.

As we’ve discussed in past blog posts, CDA Section 230 has played an essential role in the growth of the Internet by shielding website operators from defamation and other claims arising from content posted to their websites by others.

Under Section 230, a website operator is not “treated as the publisher or speaker of any information provided” by a user of that website; as a result, online businesses such as Facebook, Twitter and YouTube have been able to thrive despite hosting user-generated content on their platforms that may be false, deceptive or malicious and that, absent Section 230, might subject these and other Internet companies to crippling lawsuits.

Recently, however, the California Court of Appeal affirmed a lower court opinion that could significantly narrow the contours of Section 230 protection. After a law firm sued a former client for posting defamatory reviews on Yelp.com, the court not only ordered the former client to remove the reviews, but also ordered Yelp (which was not a party to the dispute) to remove them.

The case, Hassell v. Bird, began in 2013 when attorney Dawn Hassell sued former client Ava Bird regarding three negative reviews that Hassell claimed Bird had published on Yelp.com under different usernames. Hassell alleged that Bird had defamed her, and, after Bird failed to appear, the California trial court issued an order granting Hassell’s requested damages and injunctive relief.

In particular, the court ordered Bird to remove the offending posts, but Hassell further requested that the court require Yelp to remove the posts because Bird had not appeared in the case herself. The court agreed, entering a default judgment and ordering Yelp to remove the offending posts. (The trial court also ordered that any subsequent comments associated with Bird’s alleged usernames be removed, which the Court of Appeal struck down as an impermissible prior restraint.) Yelp challenged the order on a variety of grounds, including under Section 230.

The Court of Appeal held that the Section 230 safe harbor did not apply, and that Yelp could be forced to comply with the order. The court reasoned that the order requiring Yelp to remove the reviews did not impose any liability on Yelp; Yelp was not itself sued for defamation and had no damages exposure, so Yelp did not face liability as a speaker or publisher of third-party speech. Rather, citing California law that authorized a court to prevent the repetition of “statements that have been adjudged to be defamatory,” the court characterized the injunction as “simply” controlling “the perpetuation of judicially declared defamatory statements.” The court acknowledged that Yelp could face liability for failing to comply with the injunction, but that would be liability under the court’s contempt power, not liability as a speaker or publisher.

The Hassell case represents a significant setback for social media companies, bloggers and other website operators who rely on the Section 230 safe harbor to shield themselves from the misconduct of their users. While courts have previously held that a website operator may be liable for “contribut[ing] materially to the alleged illegality of the conduct”—such as StubHub.com allegedly suggesting and encouraging illegally high ticket resale prices—here, in contrast, there is no claim that Yelp contributed to or aided in the creation or publication of the defamatory reviews, beyond merely providing the platform on which the reviews were hosted.

Of particular concern for online businesses is that Hassell appears to create an end-run around Section 230 for plaintiffs who seek to have allegedly defamatory or false user-generated content removed from a website: sue the suspected poster and, if that party fails to appear, obtain a default judgment; then, with the default judgment in hand, seek a court order requiring the hosting website to remove the objectionable post, just as the plaintiff did in the Hassell case.

Commentators have observed that Hassell is one of a growing number of recent decisions seeking to curtail the scope of Section 230. After two decades of expansive applications of Section 230, are we now on the verge of a judicial backlash against the law that has helped to fuel the remarkable success of the U.S. Internet industry?

 

*          *          *

For other Socially Aware blog posts regarding the CDA Section 230 safe harbor, please see the following: How to Protect Your Company’s Social Media Currency; Google AdWords Decision Highlights Contours of the CDA Section 230 Safe Harbor; and A Dirty Job: TheDirty.com Cases Show the Limits of CDA Section 230.

 

Now Available: The August Issue of Our Socially Aware Newsletter

Posted in Cyberbullying, E-Commerce, Infographic, Marketing, Mobile, Privacy, Protected Speech, Right To Be Forgotten, Terms of Use

The latest issue of our Socially Aware newsletter is now available here.

In this issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we discuss the impact online trolls are having on social media marketing; we revisit whether hashtags should be afforded trademark protection; we explain how an unusual New Jersey law is disrupting the e-commerce industry and creating traps for the unwary; we explore legal and business implications of the Pokémon Go craze; we examine a recent federal court decision likely to affect application of the Video Privacy Protection Act to mobile apps; we discuss a class action suit against an app developer that highlights the legal risks of transitioning app customers from one business model to another; and we describe how Europe’s Right to Be Forgotten has spread to Asia.

All this—plus infographics illustrating the enormous popularity of Pokémon Go and the unfortunate prevalence of online trolling.

Read our newsletter.

Social Links: Twitter offers anti-harassment tools; Pinterest takes on video ads; P&G changes its social strategy

Posted in Advertising, Cyberbullying, Disappearing Content, First Amendment, Litigation, Marketing

Twitter took steps to remedy its harassment problem.

In addition, over the last six months, Twitter suspended 235,000 accounts that promoted terrorism.

The Washington Post is using language-generation technology to automatically produce stories on the Olympics and the election.

Video ads are going to start popping up on Pinterest.

Does it make sense for big brands to invest in expensive, highly targeted social media advertising? Procter & Gamble doesn’t think so.

These brands are using Facebook in particularly effective ways during the Olympic games.

Since we first reported on the phenomenon nearly two years ago, Facebook has become an increasingly common vehicle for serving divorce papers.

Across the country, states are grappling with the conflict between existing laws that prohibit disclosing ballot information or images and the growing phenomenon of “ballot selfies”—photos posted to social media of people at the polls casting their ballots or of the ballots themselves.

Creating dozens of Facebook pages for a single brand can help marketers increase social media engagement and please the Facebook algorithm gods, according to Contently.

Here’s how Snapchat makes money from disappearing videos.

A Harvard Business Review article advises marketers to start listening to (as opposed to managing) conversations about their brands on social media.

For intel on what it can do to keep teens’ attention, Instagram goes straight to the source.

Social Links: Facebook’s anti-ad-blocking software; LinkedIn’s “scraper” lawsuit; FTC’s upcoming crackdown on social influencers

Posted in Advertising, Compliance, Cyberbullying, Data Security, Endorsement Guides, Free Speech, Litigation, Marketing, Mobile, Online Endorsements

Facebook introduced technology that disables ad blockers used by people who visit the platform via desktop computers, but Adblock Plus has already foiled the platform’s efforts, at least for now.

A look at Twitter’s 10-year failure to stop harassment.

Are mobile apps killing the web?

LinkedIn sues to shut down “scrapers.”

The FTC is planning to police social media influencers’ paid endorsements more strictly; hashtags like #ad may not be sufficient to avoid FTC scrutiny. Officials in the UK are cracking down on paid posts, too.

Dan Rather, Facebook anchorman.

The U.S. Olympic Committee sent letters to non-sponsoring companies warning them against posting about the games on their corporate social media accounts.

How IHOP keeps winning the love & affection of its 3.5 million Facebook fans.

A Canadian woman whose home was designated a Pokémon Go “stop” is suing the app’s creators for trespass and nuisance. We saw that coming.

There’s a website dedicated to helping Snapchat users fool their followers into thinking they’re out on the town.

Facebook has been wooing premium content owners, but TV companies are reportedly resisting.

PETA got a primatologist to submit an amicus curiae brief supporting its suit alleging that a monkey who took a selfie is entitled to the copyright in the image.

Commercializing User-Generated Content: Five Risk Reduction Strategies

Posted in Copyright, DMCA, IP, Terms of Use

We’re in the midst of a seismic shift in how companies interact with user-generated content (UGC).

For years, companies were happy simply to host UGC on their websites, blogs and social media pages and reap the resulting boost to their traffic numbers. And U.S. law—in the form of Section 512(c) of the Digital Millennium Copyright Act (DMCA)—accommodated this passive use of UGC by creating a safe harbor from copyright damages for websites, blogs and social media platform operators that hosted UGC posted without the authorization of the owners of the copyrights in such UGC, so long as such operators complied with the requirements of the safe harbor.

Increasingly, companies are no longer satisfied with passively hosting UGC. Rather, they now want to find creative ways to commercialize such content—by incorporating it into ads (including print, TV and other offline ads), creating new works based on such content and even selling such content. Yet, in moving beyond mere hosting to proactive exploitation of UGC, companies risk losing the benefit of the DMCA Section 512(c) safe harbor, which could result in potentially significant copyright liability exposure.

For example, if a company finds that users are posting potentially valuable UGC to the company’s Facebook page, or on Twitter in connection with one of the company’s hashtags, that company may want to make such UGC available on its own website. The DMCA Section 512(c) safe harbor, however, is unlikely to protect the company when it copies such UGC from the Facebook or Twitter platform to its own website.

The reality is that any company seeking to monetize or otherwise exploit UGC needs to proceed with extreme caution. This is true for several reasons:

  • UGC can implicate a wide range of rights . . . As with any content, UGC is almost certainly subject to copyright protection, although certain Tweets and other short, text-only posts could potentially be exempt from copyright protection if they qualify as “short phrases” under the Copyright Act. If any individuals are identifiable in UGC, then rights of publicity and rights of privacy may also be relevant. In addition, UGC may contain visible third-party trademarks or comments that defame or invade the privacy of third parties.
  • . . . and a wide range of rightsholders. Notably, many of the rights necessary to exploit UGC are likely to be held by individuals and corporations other than the posting user. For example, unless a photo is a “selfie,” the photographer and the subject of the photo will be different individuals, with each holding different rights—copyright, for the photographer, and the rights of publicity and privacy, for the subject—that could be relevant to the exploitation of the photo. Moreover, any trademarks, logos and other images contained in a photo could potentially implicate third-party rightsholders, including third-party corporations. Videos also raise the possibility of unauthorized clips or embedded music.
  • If the UGC is hosted by a third-party social network, it may have Terms of Service that help—or hurt—efforts to exploit the UGC. Most social media networks collect broad rights to UGC from their users, although they differ substantially when it comes to passing those rights along to third parties interested in exploiting the content. For example, if a company uses Twitter’s Application Programming Interface (API) to identify and access Tweets that it would like to republish, then Twitter grants to that company a license to “copy a reasonable amount of and display” the Tweets on the company’s own services, subject to certain limitations. (For example, Twitter currently prohibits any display of Tweets that could imply an endorsement of a product or service, absent separate permission from the user.) Instagram also has an API that provides access to UGC, but, in contrast to Twitter, Instagram’s API terms do not appear to grant any license to the UGC and affirmatively require companies to “comply with any requirements or restrictions” imposed by Instagram users on their UGC. (For a rough sketch of the hashtag-identification step described above, see the code example following this list.)
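
Below is a minimal, illustrative sketch (not an official Twitter example) of the hashtag-identification step described in the Twitter bullet above. It assumes a bearer token obtained through Twitter’s application-only authentication flow and uses the v1.1 REST search endpoint; any off-platform display of the returned Tweets would remain subject to Twitter’s display requirements and the posting users’ rights.

```python
# Illustrative sketch only: find recent Tweets that use a promotional
# hashtag via Twitter's v1.1 REST search endpoint. The bearer token is
# a placeholder, not a real credential.
import requests

BEARER_TOKEN = "YOUR-APPLICATION-ONLY-BEARER-TOKEN"  # placeholder

def search_hashtag(hashtag, count=10):
    """Return recent Tweets containing the given hashtag."""
    resp = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        params={"q": "#" + hashtag, "count": count},
        headers={"Authorization": "Bearer " + BEARER_TOKEN},
    )
    resp.raise_for_status()
    return resp.json()["statuses"]

for tweet in search_hashtag("OurBrandCampaign"):  # hypothetical hashtag
    # Identifying a Tweet is only step one; republishing its text or
    # media off-platform raises the rights issues discussed above.
    print(tweet["user"]["screen_name"], tweet["text"])
```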

With these risks in mind, we note several emerging best practices for a company to consider if it has decided to exploit UGC in ways that may fall outside the scope of DMCA Section 512(c) and other online safe harbors. Although legal risk can never be eliminated in dealing with UGC, these strategies may help to reduce such risk:

  • Carefully review the Social Media Platform Terms. If the item of UGC at issue has been posted to a social media platform, determine whether the Terms of Service for that platform grant any rights to use the posted UGC off of the platform or impose any restrictions on such content. Note, however, that any license to UGC granted by a social media platform almost certainly will not include any representations, warranties or indemnities, and so it may not offer any protection against third-party claims arising from the UGC at issue.
  • Seek Permission. If the social media platform’s governing terms don’t provide you with all of the rights needed to exploit the UGC item at issue (or even if they do), seek permission directly from the user who posted the item. Sophisticated brands will often approach a user via the commenting or private messaging features of the applicable social media platform, and will present him or her with a link to a short, user-friendly license agreement. Often, the user will be delighted by the brand’s interest in using his or her content. Of course, be aware that the party posting the content may not be the party that can authorize use of that content, as Agence France Presse learned the hard way in using photos taken from Twitter.
  • Make Available Terms and Conditions for “Promotional” Hashtags. If a company promotes a particular hashtag to its customers, and would like to use content that is posted in conjunction with the hashtag, the company could consider making available a short set of terms alongside its promotion of that hashtag. For example, in any communications promoting the existence of the hashtag and associated marketing campaign, the company could inform customers that their use of the hashtag will constitute permission for the company to use any content posted together with the hashtag. Such an approach could face significant enforceability issues—after all, it is essentially a form of “browsewrap” agreement—but it could provide the company with a potential defense in the event of a subsequent dispute.
  • Adopt a Curation Process. Adopt an internal curation process to identify items of UGC that are especially high risk: for example, videos, photos of celebrities, photos of children, professional-quality content, any material bearing copyright notices or watermarks, and any material that is potentially defamatory, fraudulent or otherwise illegal. Ensure that the curators are trained and equipped with checklists and other materials approved by the company’s legal department or outside counsel. Ideally, any high-risk content should be subject to the company’s most stringent approach to obtaining permission and clearing rights—or perhaps avoided altogether.
  • Adjust the Approach for High-Risk Uses. Consider the way in which the UGC at issue is expected to be used, and whether the company’s risk tolerance should be adjusted accordingly. For example, if an item of UGC will be used in a high-profile advertisement, the company may want to undertake independent diligence on any questionable aspects of the UGC, even after obtaining the posting user’s permission—or perhaps avoid any questionable UGC altogether.

In a social media age that values authenticity, more and more companies—even big, risk-averse Fortune 100 companies—are interested in finding ways to leverage UGC relevant to their business, products or services. Yet the shift from merely hosting UGC to actively exploiting it raises very real legal hurdles for companies. The tips above are not a substitute for working closely with experienced social media counsel, but they collectively provide a framework for addressing legal risks in connection with a company’s efforts to commercialize UGC.

*          *        *

For more on the issues related to user-generated content, see New Court Decision Highlights Potential Headache for Companies Hosting User-Generated Content; Court Holds That DMCA Safe Harbor Does Not Extend to Infringement Prior to Designation of Agent; and Thinking About Using Pictures Pulled From Twitter? Think Again, New York Court Warns.

Ninth Circuit Case Demonstrates That the Social Media Platform, Not the User, Is in Control

Posted in Litigation, Privacy

We have written before about website operators’ use of the federal Computer Fraud and Abuse Act (CFAA) to combat data scraping. We have also noted a number of recent cases in which courts held that social media platforms, rather than the users of those platforms, have the right to control content on and access to the relevant websites. A recent Ninth Circuit decision, Facebook v. Power Ventures, brings these two trends together.

Power Ventures, the defendant, operated a website that aggregated users’ content, such as friends lists, from various social media platforms. In an attempt to increase its user base, Power Ventures initiated an advertising campaign that encouraged users to invite their Facebook friends to Power Ventures’ site.

Specifically, an icon on the Power Ventures site gave users the option to “Share with friends through my photos,” “Share with friends through events,” or “Share with friends through status,” and displayed a “Yes I do” button that users could click. If the user clicked the “Yes I do” button, Power Ventures would create an event, photo, or status on the user’s Facebook profile. In some cases, clicking the button also caused an email to be sent to the user’s friends “from” Facebook stating that the user had invited them to a Facebook event.

Upon becoming aware of this activity, Facebook sent Power Ventures a cease and desist letter informing Power Ventures that it had violated Facebook’s terms of use and demanding that Power Ventures stop soliciting Facebook users’ information. Facebook also blocked Power Ventures’ IP address from accessing Facebook. When Power Ventures changed its IP address and continued to access the site, Facebook sued, alleging among other things that Power Ventures had violated the CFAA. As we discussed at greater length in our previous article, the CFAA imposes liability on anyone who “intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains . . . information from any protected computer.”

In analyzing Facebook’s CFAA claim, the court reasoned that Power Ventures initially did not access Facebook’s computers without authorization because “Power users arguably gave Power permission to use Facebook’s computers to disseminate messages” and, accordingly, “Power reasonably could have thought that consent from Facebook users to share the promotion was permission for Power to access Facebook’s computers.” That all changed, however, when Facebook sent Power Ventures the cease and desist letter expressly rescinding whatever authorization Power Ventures may have otherwise had. According to the court, “[t]he consent that Power had received from Facebook users was not sufficient to grant continuing authorization to access Facebook’s computers after Facebook’s express revocation of permission.”

The court employed a colorful analogy to support its reasoning:

Suppose that a person wants to borrow a friend’s jewelry that is held in a safe deposit box at a bank. The friend gives permission for the person to access the safe deposit box and lends him a key. Upon receiving the key, though, the person decides to visit the bank while carrying a shotgun. The bank ejects the person from its premises and bans his reentry. The gun-toting jewelry borrower could not then reenter the bank, claiming that access to the safe deposit box gave him authority to stride about the bank’s property while armed. In other words, to access the safe deposit box, the person needs permission both from his friend (who controls access to the safe) and from the bank (which controls access to its premises). Similarly, for Power to continue its campaign using Facebook’s computers, it needed authorization both from individual Facebook users (who controlled their data and personal pages) and from Facebook (which stored this data on its physical servers).

Accordingly, the court held that, following receipt of Facebook’s cease and desist letter, Power Ventures intentionally accessed Facebook’s computers knowing that it was not authorized to do so, making Power Ventures liable under the CFAA.

On one level, Facebook v. Power Ventures can be seen as a battle between two competing social media platforms over valuable user data. Certainly it is easy to understand why Facebook would object to Power Ventures poaching Facebook’s data. But the case can also be seen as an example of social media operators exerting the right to control their platforms, and the content and data that users post to those platforms, even against the users’ own wishes.

In this sense, one can place Facebook v. Power Ventures in the line of recent cases holding that, at the end of the day, it is the social media platform operator and not the user that controls the platform. And that is an important fact for individuals and companies to keep in mind when they are investing time and money to establish and maintain a social media presence on a platform controlled by someone else.

*          *        *

For more regarding online data scraping and the Computer Fraud and Abuse Act, see our earlier blog post Data for the Taking: Using the Computer Fraud and Abuse Act to Combat Web Scraping.

Social Links: Twitter’s tough quarter; Yelp warns users about litigious dentist; Pinterest battles Snapchat

Posted in Advertising, Digital Content, Disappearing Content, Marketing, Mobile

Instagram now allows celebrities to block trolls.

While Facebook reached new highs last quarter, Twitter continued to stumble. Will adding more live video content, or allowing users to overlay Snapchat-like stickers and custom emojis on photos, help Twitter regain its footing?

Tips for fixing your company’s social media marketing strategy.

A pop singer told fans to send him their Twitter passwords so he could post personal messages to their feeds. Marketing genius or potential Computer Fraud and Abuse Act violation?

Tiffany & Co. launched a Snapchat filter to attract millennials.

Yelp posted a warning on the Yelp.com page of a Manhattan dentist who filed defamation suits against five patients over four years for giving him negative reviews.

Sponsored content is becoming king in a Facebook world.

The New York Times built an in-house analytics dashboard to make it easy for its reporters to access reader engagement data.

Pinterest appears to be losing to Snapchat in the battle for digital ad dollars.

Profile pranks or endorsement bombing on LinkedIn is an actual thing.

First Circuit Issues Potentially Significant Ruling on Federal Video Privacy Statute’s Application to Mobile Apps

Posted in Litigation, Mobile, Privacy

The First Circuit Court of Appeals’ recent decision in Yershov v. Gannett Satellite Information Network, Inc. may carry important implications for mobile app providers seeking to navigate federal privacy laws—in particular, the Video Privacy Protection Act of 1988 (“VPPA”). Although Yershov is not the first case to consider how the VPPA applies to mobile apps, the opinion contains two key holdings regarding (1) the scope of protectable personally identifiable information and (2) the treatment of free app downloaders under the statute.

The VPPA

The VPPA was passed in 1988, after the video rental history of then-Supreme Court nominee Judge Robert Bork was disclosed in a newspaper article during debate over his nomination. Quoting the VPPA, the Yershov opinion explains that the statute is intended to preserve personal privacy in connection with the rental, purchase or delivery of video and audio materials and creates a “civil remedy against a ‘videotape service provider’ for ‘knowingly disclos[ing], to any person, [personally identifiable information] concerning any consumer of such provider.’” Of relevance in Yershov, the statute defines “consumer” as “any renter, purchaser, or subscriber of goods or services from a videotape service provider.” The VPPA defines personally identifiable information to “include[] information which identifies a person as having requested or obtained specific video materials or services from a videotape service provider.”

Case Background

According to the allegations in Yershov’s operative complaint, which were taken as true for purposes of the First Circuit’s opinion, Yershov downloaded the free USA Today mobile app (“the app”) on his Android mobile device in late 2013. The app is offered by Gannett via the Google Play Store and allows the user to access various USA Today media and content, including videos, on the user’s mobile device.

Yershov claims that he watched numerous video clips on the app. Each time, Yershov’s operative complaint states, Gannett and its third-party marketing and analytics vendor collected three pieces of data: (1) the title of the video Yershov viewed; (2) the GPS coordinates of the device Yershov used; and (3) Yershov’s unique Android ID. According to Yershov, the vendor used this information to create “digital dossiers” for Yershov and similarly situated users, which Gannett in turn used to provide targeted advertising. Yershov says he never consented to the collection of this data. He filed a putative class action lawsuit as a result, claiming that Gannett’s actions violated the VPPA.
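
For readers unfamiliar with mobile analytics, the following purely hypothetical sketch shows what a per-view event combining the three alleged data points might look like. Every field name and value is invented for illustration; none of this is drawn from Gannett’s or its vendor’s actual implementation.

```python
# Purely hypothetical sketch of a per-view analytics event combining
# the three data points alleged in Yershov's complaint: the video's
# title, the device's GPS coordinates and its persistent Android ID.
import json
import time

def build_view_event(android_id, video_title, lat, lon):
    """Assemble a single video-view event for transmission to a vendor."""
    return {
        "android_id": android_id,          # persistent device identifier
        "video_title": video_title,        # which video was watched
        "gps": {"lat": lat, "lon": lon},   # where the device was
        "timestamp": int(time.time()),     # when the view occurred
    }

event = build_view_event(
    "38400000-8cf0-11bd-b23e-10b96e40000d",  # fabricated Android ID
    "Example news clip",                      # fabricated title
    42.3601, -71.0589,                        # fabricated coordinates
)
print(json.dumps(event, indent=2))
```

A stream of such events, keyed to a persistent identifier, is the sort of “digital dossier” the complaint describes.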

Gannett successfully moved to dismiss Yershov’s VPPA claim. The district court held that the information Gannett collected and disclosed to its vendor constituted personally identifiable information, but nevertheless concluded that Yershov was not a “consumer” with a right of action under the VPPA because he failed to allege that he was a renter, purchaser or subscriber of Gannett’s video content. Yershov appealed.

First Circuit Revives Yershov’s Claim

The First Circuit reversed the district court’s dismissal order and remanded for further proceedings.

First, the panel agreed with the district court that the information conveyed to the vendor by Gannett constituted personally identifiable information under the VPPA. According to the panel, the VPPA’s “abstract formulation” of personally identifiable information does not require the information at issue to “explicitly name[] a person” to come within the ambit of the statute. Rather, it is sufficient if the information “effectively reveal[s] the name,” or identity “of the video viewer” without too much uncertainty or “yet-to-be-done, or unforeseeable detective work.”

Because Yershov alleged that Gannett’s vendor could connect the GPS coordinates and Android ID with a given person’s “name, address, phone number, and more,” the panel concluded that he sufficiently alleged a “firm and readily foreseeable” link between the data collected and the user’s identity.

Second, the panel addressed the “closer question” of whether Yershov is a “subscriber” and, therefore, a consumer under the VPPA. Lacking a clear statutory definition, the panel evaluated various dictionary definitions of “subscribe,” which “include as an element a payment of some type and/or presume more than a one-shot transaction.”

The panel expressly rejected the notion that the term “subscriber” incorporated a monetary payment requirement. Requiring monetary payment as an element, the panel reasoned, would render “subscriber” superfluous since the statute also lists “purchaser” and “renter” under its definition of “consumer” and those terms necessarily imply the payment of some monetary amount. According to the panel, “Congress would have had no need to include a third category of persons [i.e., subscribers] protected under the Act if it had intended that only persons who pay money for videos be protected.”

The panel also found it significant that, in 2012, Congress considered the impact of the VPPA on video content in the age of the Internet and left the definition of “consumer” untouched—an indication, according to the panel, that “Congress understood its originally-provided definition to provide at least as much protection in the digital age as it provided in 1988.”

In the end, the panel concluded that qualifying as a “subscriber” requires some kind of relationship between the individual and the video provider that gives the individual some form of special access to the video content. As the panel stated:

[B]y installing the App on his phone, thereby establishing seamless access to an electronic version of USA Today, Yershov established a relationship with Gannett that is materially different from what would have been the case had USA Today simply remained one of millions of sites on the web that Yershov might have accessed through a web browser.

Looking Ahead

The Yershov decision is not without its critics. Its holdings conflict with the opinions of other courts that have considered similar issues.

For example, federal district courts including the Northern District of California and the District of New Jersey have concluded that a unique, numerical device identifier is not personally identifiable information under the statute. Yershov does not address these contrary decisions.

Further, in the 2015 case Ellis v. Cartoon Network, Inc., the Eleventh Circuit adopted a narrower reading of “subscriber,” requiring more of a “commitment” than that which arises from downloading a free app. Yershov distinguished the process associated with downloading and installing the app at issue in Ellis, but the Yershov and Ellis courts’ diverging conclusions could indicate a more fundamental disagreement about what it means to download and use free software.

Yershov shows the continuing split among courts interpreting the scope of the VPPA. As a result, B2C website operators and mobile app developers that deal with video and audio materials will want to continue to monitor VPPA case law developments, and to seek to identify and address associated legal risks.

*          *        *

For other Socially Aware blog posts on the Video Privacy Protection Act, please see Court Nixes VPPA Claim and If You Host Videos on Your Website, Read This Blog Post Regarding the Video Privacy Protection Act.

 

Social Links: Twitter’s troll problem; Snapchat fat-shamer risks prosecution; a federal anti-revenge-porn law?

Posted in Cyberbullying, Disappearing Content, First Amendment, Free Speech, Litigation, Livestreaming, Mobile, Privacy, Protected Speech

Facebook Messenger joins the elite “one billion monthly users” club just four years after its release as a standalone app.

A Canadian judge ordered a couple convicted of child neglect to post to all their social media accounts his decision describing their crime.

Leslie Jones of Ghostbusters highlights Twitter’s trolling problem. One tech columnist says the platform needs to rethink its application programming interface strategy to enable users and communities to insulate themselves from abuse.

Don’t drive and Facebook Live.

Google erased Dennis Cooper’s 14-year-old blog without warning or explanation. We recently examined the outcome of lawsuits challenging a platform’s right to remove user content (spoiler alert: the platforms usually win).

Twitter now lets anyone apply to get verified.

Researchers say there’s a correlation between an increase in the psychological stress that teens suffer and the amount of time they’re spending on social media.

A Playboy model who “fat-shamed” a woman by photographing her and posting the image to Snapchat risks prosecution.

Forensic psychologists explain why people post evidence of their crimes to social media.

We may soon have a federal law making revenge porn illegal. Our blog post from 2014 took a look at some of the legal issues raised by revenge porn.

There’s now a dating app that sets people up on Pokémon Go dates. Want to know more about the most popular mobile game of all time? Read our Pokémon Go Business and Legal Primer.

Augmenting Reality: A Pokémon Go Business and Legal Primer

Posted in Litigation, Mobile, Privacy

We have become inured to the sight of people staring at their phones rather than engaging with one another or enjoying their real-life surroundings. But, over the past two weeks, enslavement to mobile devices rose to new levels, with smartphones and tablets actually propelling users’ movements in the real world as opposed to merely distracting them from it.

Unless you’ve been off the grid this month, you know that the force mobilizing these seemingly possessed pedestrians (and drivers!) has been Pokémon Go, an app that has been downloaded more than 15 million times. Despite having launched only on July 6, 2016, Pokémon Go currently boasts more daily users than Twitter, making it the most popular mobile game of all time.

Despite all this, if you happen to be, umm, over a certain age (i.e., you’re not a Millennial or younger), you may be a bit mystified as to what this Pokémon Go thing is all about. Accordingly, we put together this primer on Pokémon Go, including some observations regarding potential legal issues raised by the app.

How the Game Works

Pokémon Go is an augmented reality game that uses the device’s ability to track time and location and shows the user a map of his or her real-life surroundings. As the player moves around, the game superimposes animated Pokémon characters onto the screen over a view of the player’s real-life surroundings as seen through his or her mobile device’s camera. The more characters the player catches, the higher his or her ranking rises.
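
For the technically curious, here is a rough sketch of the kind of geolocation arithmetic underlying such a game: the client compares the device’s reported GPS position against a server-stored point of interest using a great-circle distance formula. The coordinates, interaction radius and names below are invented for illustration and are not taken from Niantic’s code.

```python
# Illustrative sketch: decide whether a player is close enough to a
# stored point of interest to interact with it, using the haversine
# great-circle distance formula. All values are hypothetical.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# A point of interest is simply a stored coordinate pair; the client
# checks whether the player is within some interaction radius of it.
POINT_OF_INTEREST = (40.7484, -73.9857)  # hypothetical coordinates
player = (40.7486, -73.9855)             # device-reported position

if haversine_m(*player, *POINT_OF_INTEREST) <= 40:  # 40 m, hypothetical
    print("In range: the player can interact with this location")
```

As the “Trespass” discussion below notes, such a point of interest exists only as a coordinate pair on a server, which is part of what makes the property-law questions so novel.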

The game is free to download from online app stores, but, as players progress, they need Pokémon coins to enable certain functions. While the game allows players to earn coins over time, the fastest way to acquire them is by purchasing them. Such in-app purchases are real money makers, expected to account for more than $50 billion in industry-wide revenue this year.

Who’s Behind It

The funds that players are plunking into Pokémon Go are likely to add up to real money for the companies behind the app, a joint project of The Pokémon Company, which is 32%-owned by Nintendo, and Niantic Inc., a spinout from Alphabet Inc.

Since the app’s recent release, shares in Nintendo—a company that has struggled in recent years as a result of its reluctance to embrace mobile games—have risen 56%. The game has also significantly strengthened the financial position of Unity Technologies, the company that owns the game engine software that provides basic functionality for Pokémon Go (and for approximately 31% of the 1,000 top-grossing mobile games).

Perks and Pitfalls (some, unfortunately, literal)

Pokémon Go is being hailed as a boon for small businesses; to drive foot traffic, merchants are paying a $10 daily fee for in-game items called “lures,” which attract users to their locations. It’s also being lauded for incentivizing some players to exercise and for relieving users’ depression and social anxiety. Of course, the app is also creating problems and drawing its share of controversy.

Safety concerns have arisen as players who won’t look away from their mobile devices have run-ins with their real-life physical surroundings, cutting and bruising themselves, getting into driving accidents, and even falling off cliffs. These incidents have prompted a police department in Texas to post to social media a list of safety reminders for Pokémon Go users.

The list advises players to “tell people where you’re going if it is somewhere you’ve never been”—wise advice in light of police reports describing Missouri armed robbers’ use of the game’s geolocation feature “to anticipate the location and level of seclusion of unwitting victims.”

And, while some columnists have deemed the game educational because many of its so-called PokéStops (places where players can get free in-game items) are famous landmarks and historical markers that allow “players to learn about their community and its history,” some of the sites designated as PokéStops, such as the Holocaust Museum, have objected, maintaining that playing the game on their property is inappropriate. These sites are frustrated by the fact that they have no control over their PokéStop designation.

Legal Issues

Despite the app being so new, it is already raising legal concerns. Some of the key concerns include the following:

Privacy

The Pokémon Go app has been dogged by privacy concerns. When first launched, the app requested permission to access all of the data associated with the player’s Google account (including emails, calendar entries, photos and stored documents). The app’s first update, available since at least July 12, 2016, remedied that problem; the app now asks people downloading it for permission to access only their Google IDs and email addresses.

These narrower permissions—which users who downloaded the original version of Pokémon Go can adopt only by installing the update, signing out and signing back in—haven’t stopped U.S. Senator Al Franken from penning a letter to Niantic CEO John Hanke asking Niantic to answer a series of questions to “ensure that Americans’—especially children’s—very sensitive information is protected.”

Product Liability

And what about the aforementioned injuries that people have sustained while playing Pokémon Go? Can Pokémon Go’s developers be held liable for such injuries? At least one car accident victim is suing another popular social media app, Snapchat, for the traumatic brain injuries he suffered when he was struck by a car driven by a Georgia woman allegedly trying to use the Snapchat speed filter—a feature that tracks how fast the app’s user is moving and awards points to users who submit photos of their speed.

Trespass

There’s also the question of property rights. In some cases, owners of the physical real estate sites that have been designated as PokéStops have complained about the traffic and other nuisances caused by the players. As a result, Niantic is accepting requests for removal of PokéStops from property owners, but removal isn’t guaranteed.

Of course, app users who enter upon another’s land without permission may be subject to trespassing claims. But could the companies behind the game also be liable for trespass? As The Guardian points out, “A Pokéstop cannot be ‘on private property.’ A PokéStop does not exist: it is a latitude and longitude stored on Niantic’s servers, interpreted by the Pokémon Go client which then represents it as a circle hovering over a stylized Google Map of the area surrounding the player.”

It is possible to recover in trespass for an intangible invasion of property, but whether a real estate owner’s exclusive rights to his or her property extend into cyberspace remains to be seen.

Steps Taken to Mitigate Legal Risks

Pokémon Go’s owners have taken steps to limit their potential legal liability. For example, a warning screen on the app advises users to pay attention to their real-world surroundings. And Pokémon Go’s detailed, robust Terms of Service attempt to limit the potential liability of the companies behind the app. In addition to a $1,000 liability cap and a mandatory arbitration provision, the Terms of Service contain an entire “Safe Play” section, which states in part that, as a player, you “agree that your use of the App and play of the game is at your own risk, and it is your responsibility to maintain such health, liability, hazard, personal injury, medical, life, and other insurance policies as you deem reasonably necessary for any injuries that you may incur while using the Services. You also agree not to use the App to violate any applicable law, rule, or regulation (including but not limited to the laws of trespass).”

Pokémon Go’s Terms of Service, however, don’t do anything to limit the liability of the game’s players. As noted above, users could be liable for trespass and for any harm that others suffer as a result of players’ use of the app (especially careless use, such as playing while behind the wheel of a car).

The Upshot

Any innovative technology that becomes a worldwide phenomenon overnight is bound to raise legal concerns. But, as we’ve noted here at Socially Aware, such concerns often turn out to be overblown. The real significance of Pokémon Go is ultimately a business, rather than a legal, story: thanks to the app, millions of consumers around the world have now embraced augmented reality technology. Lawsuits will inevitably follow in the wake of Pokémon Go’s success but, more importantly, so will millions of dollars of investment in new augmented reality applications. As a result, in what could be a very short amount of time, the integration of augmented reality into nearly every facet of our everyday life will become, well, a reality.

[Authors’ Note: We would like to thank Luke D. (age 13), Ben R. (age 12), Alfredo M. (age 10) and Dylan J. (age 9) for the invaluable research that they contributed to this blog post.]

*          *        *

For more of the Socially Aware editors’ observations on tech innovations, please see the following: Will Ad Blockers Kill Online Publishing?; Building a Successful Social Media App: Four Lessons Learned From Snapchat; and Narrow Vision: Did Anti-Glass Hysteria Contribute to the Demise of Google Glass?