The California Attorney General continued its series of public forums regarding the California Consumer Privacy Act (CCPA), with forums last week in Riverside (January 24, 2019) and
Los Angeles (January 25, 2019). As in the previous forums, there were a significant number of attendees, but few elected to speak publicly regarding their views on the Act. You can read our reports on the public forums held earlier this month in San Francisco and San Diego.

Lisa Kim, Deputy Attorney General for the AG’s Privacy Unit, provided opening remarks at both forums and identified the areas of the AG’s rulemaking on which speakers should focus their comments, specifically those areas of the Act that call for specific AG rules.  Ms. Kim encouraged interested parties to provide written comments and proposed regulatory language during this pre-rulemaking phase. Consistent with the prior forums, she noted that the AG’s office would be listening, and not responding, to comments made in Riverside and Los Angeles.

Of note, the presentation slides made available at the forum (and available here) state that the AG anticipates publishing proposed rules in Fall 2019, and that there will then be a period for public comment and additional public hearings.

Continue Reading California AG Hosts Two More Public Forums on CCPA in Riverside and Los Angeles

In anticipation of preparing rules to implement the California Consumer Privacy Act, the California Attorney General recently announced six public forums that he will host in January and February 2019 across California.  On January 8, 2019, the AG hosted the first of these forums in San Francisco.  The following provides an overview of the forum and the comments made at the forum.

Overview of the January 8, 2019, San Francisco Forum 

Stacey Schesser, the Supervising Deputy Attorney General for the AG’s Privacy Unit, provided opening remarks.  Ms. Schesser confirmed that the AG’s office is at the very beginning of its rulemaking process.  Although the AG’s office will solicit formal comments after it prepares proposed rules, the AG is interested in receiving detailed written comments from the public with proposed language during this informal period.

These forums appear to be designed to inform the AG’s rulemaking and potentially streamline the process, by allowing public input before rules are drafted.  In this regard, Ms. Schesser clarified that she and other AG representatives in attendance at the San Francisco forum were there only to listen to the public comments and would not respond to questions or engage with speakers.  As a result, if the remaining forums follow a similar approach, it is unlikely that the forums will elicit meaningful intelligence regarding the AG’s anticipated approach to, or the substance of, the anticipated rulemaking.

Continue Reading California Attorney General Holds First California Consumer Privacy Act Public Forum

As we have noted previously, the California Court of Appeal’s Hassell v. Bird decision in 2016 upholding an injunction requiring Yelp to remove certain user reviews was discouraging to social media companies and other online intermediaries, as well as to fans of Section 230 of the Communications Decency Act and proponents of Internet free speech generally. The recent California Supreme Court decision reversing the Court of Appeal was, therefore, met with considerable relief by many in the Internet community.

But while the California Supreme Court’s decision is undoubtedly a significant development, it would be premature for Section 230 fans to break out the champagne; the “most important law protecting Internet speech” remains under attack from many directions, and this recent decision is far from definitive. But before getting into the details of the Hassell v. Bird opinion, let’s step back and consider the context in which the case arose.

Before Section 230: A Wild, Wild Web

A fundamental issue for social media platforms and other online intermediaries, including review sites like Yelp, is whether a company may be held liable when its customers engage in bad behavior, such as posting defamatory content or content that infringes the IP rights of third parties. Imagine if Facebook, Twitter, YouTube, and Yelp were potentially liable for defamation every time one of their users said something nasty (and untrue) about another user on their platforms. It would be hard to imagine the Internet as we currently know it existing if that were the case.

Continue Reading Section 230 Survives to Fight Another Day Following California Supreme Court Decision

Just over a month after the EU General Data Protection Regulation (GDPR) took effect, California passed its own sweeping privacy legislation, the California Consumer Privacy Act of 2018.

The Act stands to affect countless global companies doing business in California, many of which recently devoted extensive time and resources to GDPR compliance. These companies must now determine what additional steps are necessary to comply with the Act by the time it takes effect on January 1, 2020.

Join Socially Aware contributors Christine Lyon and Julie O’Neill on Thursday, September 20, 2018, for a deep dive into the key similarities and differences between the GDPR and the Act, as well as practical steps companies can take to assess gaps and chart a path to compliance. The areas they expect to cover include:

  • Notice requirements
  • Access and portability
  • Deletion
  • Opt-outs
  • Discrimination

If you are interested in attending this free webinar, please register here.

On July 19, 2018, in May, et al. v. Expedia Inc., U.S. Magistrate Judge Mark Lane issued a Report and Recommendation advising U.S. District Judge Robert Pitman of the Western District of Texas to grant a motion to compel arbitration and dismiss a putative class action on the grounds that the plaintiff had agreed to the defendants’ website’s Terms and Conditions, which contained a mandatory arbitration clause.

HomeAway User Files Putative Class Action 

HomeAway is an online marketplace for vacation rental properties where property owners can list their properties for rent and travelers can book rental properties. HomeAway’s original business model was to charge owners a fee to list their properties (either on a one-year subscription or pay-per-booking basis) and to allow travelers to search and book rentals for free. HomeAway was acquired by Expedia in 2015 and changed its business model in mid-2016 to charge travelers a fee to book rentals. Plaintiff James May was a property owner who had used HomeAway since 2013.

Continue Reading Sneaky Website User Bound by Online Terms of Use’s Arbitration Provision Despite Renewing Subscription in Spouse’s Name

An advertising executive who lost his job after being named on an anonymous Instagram account is suing the now-defunct account for defamation. The suit names as defendants not only the account—Diet Madison Avenue, which was intended to root out harassment and discrimination at ad agencies—but also (as “Jane Doe 1,” “Jane Doe 2,” et cetera) several of the anonymous people who ran it. Whether Instagram will ultimately have to turn over the identities of the users behind the account will turn on a couple of key legal issues.

A bill recently passed by the New York State Senate makes it a crime for “a caretaker to post a vulnerable elderly person on social media without their consent.” At least one tech columnist thinks the legislation is so broadly worded that it violates the U.S. Constitution. That may be so, but—in light of several news reports about this unfortunate form of elder abuse over the last few years—the same columnist may be wrong in suggesting that the bill was likely passed in response to a one-time incident.

A new law in Egypt that categorizes social media accounts and blogs with more than 5,000 followers as media outlets allows the government in that country to block those accounts and blogs for publishing fake news. Some critics aren’t buying the government’s explanation for the law’s implementation, however, and are suggesting it was inspired by a very different motivation.

Critics of the most recent version of the European Copyright Directive’s Article 13, which the European Parliament rejected in early July, brought home their message by arguing that it would have prevented social media users from uploading and sharing their favorite memes.

In a criminal trial, social media posts may be used by both the prosecution and the defense to impeach a witness but—as with all impeachment evidence—the posts’ use and scope are entirely within the discretion of the trial court. The New York Law Journal’s cybercrime columnist explains.

To thwart rampant cheating by high school children, one country shut down the Internet nationwide during certain hours and had social media platforms go dark for the whole exam period.

Snapchat now allows users to unsend messages. Here’s how.

Employees of Burger King’s Russian division recently had to eat crow for a tasteless social media campaign that offered women a lifetime supply of Whoppers as well as three million Russian rubles ($47,000) in exchange for accomplishing a really crass feat.

We’ve all heard of drivers experiencing road rage, but how about members of the public experiencing robot rage? According to a company that supplies cooler-sized food-delivery robots, it’s a thing.

If a web server located outside the United States hosts video content that can be viewed by Internet users located in the United States, does a public performance result under U.S. copyright law?

This has been a topic of hot debate for a surprisingly long time, with little or no direct guidance from the courts—until now. A recent decision from the D.C. Circuit, Spanski Enterprises v. Telewizja Polska, addresses this issue head-on, with the court finding that the uploading of video content in which a party held exclusive U.S. public performance rights and the subsequent directing of the content to U.S. viewers upon their request to be an infringing “performance” under the U.S. Copyright Act.

Telewizja Polska (“Polska”) is Poland’s national TV broadcaster that owns, operates and creates content for several Polish TV channels. Polska and Spanski Enterprises (“Spanski”), a Canadian corporation, entered into a licensing agreement granting Spanski exclusive broadcasting rights in North and South America to TVP Polonia, one of Polska’s TV channels. Polska provides online access to its programming through a video-on-demand feature on its Poland-based website and, to protect Spanski’s rights, used geoblocking technology to block North and South American IP addresses from accessing the copyrighted content. The territorial restrictions were either incorporated into the digital video formats of the episodes themselves or assigned through a content management system.

Continue Reading Copyright’s Long Arm: Foreign Website Found to Infringe U.S. Copyright Law by Providing U.S. Viewers Access to Site Content
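For readers curious about the mechanics, geoblocking of the kind described above typically works by checking the requesting client’s IP address against ranges associated with the restricted territories. The following is a minimal sketch in Python; the CIDR ranges shown are reserved documentation addresses used purely for illustration, and a real deployment would rely on a commercial IP-geolocation database rather than a hard-coded list.

```python
import ipaddress

# Illustrative blocklist standing in for IP ranges assigned to North and
# South America. These are reserved documentation ranges, not real
# regional allocations.
BLOCKED_REGIONS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_geoblocked(client_ip: str) -> bool:
    """Return True if the client's IP address falls in a blocked region."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in network for network in BLOCKED_REGIONS)

# A request from a blocked range is refused; others are served.
print(is_geoblocked("198.51.100.42"))  # True
print(is_geoblocked("192.0.2.7"))      # False
```

As the Spanski litigation illustrates, the legal exposure turns in part on whether such technical measures actually keep restricted viewers out.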

As close observers of the implications of privacy law on companies’ data collection, usage and disclosure practices, we at Socially Aware were among the many tech-law enthusiasts anticipating the U.S. Supreme Court’s recent decision in Carpenter v. United States, in which the Court held that the government must obtain a warrant to acquire customer location information maintained by cellular service providers, at least where that information covers a period of a week or more.

Authored by Chief Justice John Roberts, the 5-4 opinion immediately enshrines greater protections for certain forms of location data assembled by third parties. It also represents the Court’s growing discomfort with the so-called “third-party doctrine”—a line of cases holding that a person does not have a reasonable expectation of privacy in records that he or she voluntarily discloses to a third party. In the longer run, there will likely be further litigation over whether the same logic should extend Fourth Amendment protections to other types of sensitive information in the hands of third parties as courts grapple with applying these principles in the digital age.

Background

Anytime a cell phone uses its network, it must connect to the network through a “cell site.” Whenever cell sites make a connection, they create and record Cell Site Location Information (CSLI). Cell phones may create hundreds of data points in a normal day, and providers collect and store CSLI to spot weak coverage areas and perform other business functions.

Continue Reading Location Information Is Protected by the 4th Amendment, SCOTUS Rules

Computer scientist and legal scholar Nick Szabo first proposed the idea of “smart contracts” in 1996. Szabo published his initial paper on the topic in a publication called Extropy, a journal of transhumanism, a movement seeking to enhance human intellect and physiology by means of sophisticated technologies. At the time, the idea was nothing if not futuristic.

Fast forward 22 years, and even if the actual use of smart legal contracts remains largely in the future, the idea of them has gone mainstream. What follows is our list of the top five things you need to know about this quickly evolving area.

  1. Their Name Is Somewhat Confusing

When lawyers speak of contracts, they generally mean agreements that are intended to be legally enforceable. In contrast, when most people use the term “smart contract” they’re not referring to a contract in the legal sense, but instead to computer coding that may effectuate specified results based on “if, then” logic.

Advocates of smart legal contracts envision a day when coding will automatically exercise real-world remedies if one of the parties to a smart contract fails to perform. For example, if a borrower were to fail to make a car payment, coding within the smart loan agreement could automatically trigger a computer controlling the relevant car to prevent the borrower from driving it, or could cause the car to drive autonomously to the lender’s garage.
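The “if, then” logic behind the car-loan example can be sketched in a few lines of ordinary code. The class and field names below are hypothetical, and real smart contracts typically run on a blockchain platform rather than in plain Python; this is only a toy illustration of the conditional, self-executing structure described above.

```python
from datetime import date

class CarLoanContract:
    """Toy sketch of smart-contract logic: if payment is overdue,
    then automatically exercise a real-world remedy."""

    def __init__(self, payment_due: date, grace_days: int = 10):
        self.payment_due = payment_due
        self.grace_days = grace_days
        self.payment_received = False
        self.ignition_enabled = True

    def record_payment(self) -> None:
        self.payment_received = True

    def evaluate(self, today: date) -> None:
        # IF the payment is overdue past the grace period and unpaid,
        # THEN the remedy executes automatically: disable the car.
        overdue = (today - self.payment_due).days > self.grace_days
        if overdue and not self.payment_received:
            self.ignition_enabled = False

contract = CarLoanContract(payment_due=date(2018, 6, 1))
contract.evaluate(today=date(2018, 6, 20))
print(contract.ignition_enabled)  # False: payment missed, car disabled
```

The point of the sketch is that no human intervenes between the breach and the remedy; the code itself detects the condition and acts on it.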

Even so, whether coding itself could ever satisfy the requirements of a legally binding contract remains up for debate.

Continue Reading Five Things to Know About Smart Contracts

Most companies are familiar with the Children’s Online Privacy Protection Act (COPPA) and its requirement to obtain parental consent before collecting personal information online from children under 13.  Yet COPPA also includes an information deletion requirement of which companies may be unaware.  On May 31, 2018, the Federal Trade Commission (FTC) published a blog post addressing this requirement, clarifying (i) when children’s personal information must be deleted and (ii) how the requirement applies, as well as (iii) recommending that covered companies review their information retention policies to ensure they are in compliance.

(i) COPPA’s information deletion requirement.  The FTC clarifies that, under Section 312.10 of COPPA, companies may retain children’s personal information “for only as long as is reasonably necessary to fulfill the purpose for which the information was collected.”  After that, a company must use reasonable measures to ensure such personal information is securely destroyed.

(ii) Application of the deletion requirement to children’s outdated subscription information.  In its post, the FTC applies the deletion requirement to the example of a subscription-based app directed to children under 13.  If the subscription period ends, and a parent decides not to renew the service, can the company keep the child’s personal information?  The answer, the FTC confirms, is “no”:  the information is no longer “reasonably necessary” to provide the app’s services, so it must be deleted.  This is true regardless of whether a parent affirmatively requests deletion.

(iii) Recommendation to review information retention policies in light of the deletion requirement.  The FTC recommends that companies review their information retention policies with COPPA’s deletion requirement in mind.  It lists questions to help guide companies as they navigate this requirement:

  • What types of personal information are you collecting from children?
  • What is your stated purpose for collecting the information?
  • How long do you need to hold onto the information to fulfill the purpose for which it was initially collected? For example, do you still need information you collected a year ago?
  • Does the purpose for using the information end with an account deletion, subscription cancellation, or account inactivity?
  • When it’s time to delete information, are you doing it securely?

Key takeaway.  If a company possesses personal information collected online from a child under 13, and the information no longer serves the purpose for which it was collected, the company must delete it.  Companies should review their information retention policies to ensure compliance with this COPPA requirement.
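In practice, a retention review of the kind the FTC recommends often takes the form of a periodic sweep that flags records whose purpose has ended. The sketch below is purely illustrative of that logic as applied to the FTC’s lapsed-subscription example; the field names are hypothetical, and it is not a statement of what COPPA requires in any particular case.

```python
from datetime import datetime

# Illustrative records for a subscription app directed to children.
# A subscription_end of None means the subscription is still active.
records = [
    {"child_id": 1, "subscription_end": datetime(2018, 1, 31), "renewed": False},
    {"child_id": 2, "subscription_end": None, "renewed": False},
    {"child_id": 3, "subscription_end": datetime(2018, 3, 15), "renewed": True},
]

def records_to_delete(records, now):
    """Flag records whose subscription has ended without renewal:
    the stated purpose for holding the data no longer exists, so the
    data should be queued for secure deletion."""
    return [
        r for r in records
        if r["subscription_end"] is not None
        and not r["renewed"]
        and r["subscription_end"] <= now
    ]

expired = records_to_delete(records, now=datetime(2018, 6, 1))
print([r["child_id"] for r in expired])  # [1]
```

Note that the FTC’s guidance also asks whether deletion, once triggered, is performed securely; flagging records is only the first half of compliance.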

*       *       *

For more on the Children’s Online Privacy Protection Act, please read the following Socially Aware posts: FTC Issues Substantially Revised COPPA Rule: Review of Changes and Compliance Tips; and Mobile App Legal Terms & Conditions: Six Key Considerations.