The Federal Trade Commission (FTC) appears to be using its ongoing review of current rules and guides to revisit how it drives home the message that the relationship between a social media “influencer” and the brand he or she endorses must be disclosed. As we have described previously, the FTC has interpreted its Guides Concerning the Use of Endorsements and Testimonials in Advertising (the “Endorsement Guides”) to require that online advertisements — like all other advertising — clearly and conspicuously disclose material connections between endorsers (i.e., influencers) and the brands they promote, because such connections may affect the credibility of the endorsement. And, in recent years, the FTC has — through enforcement actions, press releases, guidance, closing letters, and letters sent directly to endorsers (including prominent public figures) — made clear its belief that: (1) appropriate disclosures by influencers are essential to protecting consumers; and (2) in too many instances, such disclosures are absent from celebrity and other influencer endorsements.

Now, in connection with a request for comments on the Endorsement Guides, FTC Commissioner Rohit Chopra has issued a scathing statement calling on the FTC to “take bold steps to safeguard our digital economy from lies, distortions, and disinformation.” In this regard, Commissioner Chopra suggests that the FTC’s efforts to date have not been effective in “deterring misconduct in the marketplace” relating to inauthentic and fake reviews, and that, in particular, elements of the Endorsement Guides should be codified as formal rules so that violators can be liable for civil penalties and damages under the FTC Act.

Also of note, Commissioner Chopra has asserted that the FTC should refocus its efforts on advertisers themselves, and not the influencers who promote their brands. According to the Commissioner, “when companies launder advertising by paying someone for a seemingly authentic endorsement or review, this is illegal payola,” and “companies paying for undisclosed influencer endorsements and reviews are not [being] held fully accountable for this illegal activity.” Aggressively penalizing advertisers themselves would be a shift in emphasis for the FTC, whose recent efforts to combat inadequate disclosures in influencer advertising have focused on the influencers. For example, the FTC recently produced a brochure detailing the responsibility of influencers “to make [required] disclosures, to be familiar with the Endorsement Guides, and to comply with laws against deceptive ads.” The FTC has also brought an enforcement action against influencers and signaled that more enforcement actions are likely.


Continue Reading Fake News & Paid Reviews: FTC Seeks Comments on its Endorsement Guides

In a move likely to be welcomed by publishers seeking a way to honor “sale” opt-outs in the interest-based advertising space, the Interactive Advertising Bureau (IAB) last week released the IAB California Consumer Privacy Act Compliance Framework for Publishers and Technology Companies. The IAB is the trade association for the digital media and marketing industries, and it developed the Framework to help publishers (i.e., websites) and the online advertising supply chain comply with the CCPA, and particularly with the CCPA’s consumer right to opt out of “sales” of personal information.

The Framework sets up a system in which a consumer’s opt-out causes the parties in the digital advertising supply chain to become limited service providers to the publisher, such that there is no longer a “sale” of that consumer’s personal information. A limited service provider may still serve ads on behalf of the publisher, but those ads cannot involve any “sale” of personal information under the CCPA.
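To make the mechanics concrete, the sketch below shows one hypothetical way a publisher might record an opt-out and pass a “do not sell” signal downstream so that supply-chain partners know to operate as limited service providers. The flag name, the hashed identifier, and the request shape are illustrative assumptions; the Framework’s actual signaling mechanics are set out in the IAB’s separate draft technical specifications.

```typescript
// Hypothetical illustration only -- not the IAB's draft technical specification.
// A publisher records a CCPA opt-out and forwards a "do not sell" signal with
// each ad request so that downstream partners operate as limited service
// providers for that consumer.

interface AdRequest {
  pageUrl: string;
  doNotSell: boolean;      // assumed flag carried with every request
  consumerIdHash?: string; // pseudonymous ID, withheld once the consumer opts out
}

// In-memory record of opted-out consumers; a real publisher would persist this.
const optedOut = new Set<string>();

export function recordOptOut(consumerIdHash: string): void {
  optedOut.add(consumerIdHash);
}

export function buildAdRequest(pageUrl: string, consumerIdHash: string): AdRequest {
  const doNotSell = optedOut.has(consumerIdHash);
  return {
    pageUrl,
    doNotSell,
    // When the consumer has opted out, withhold the identifier so downstream
    // partners can act only as limited service providers for the publisher.
    consumerIdHash: doNotSell ? undefined : consumerIdHash,
  };
}
```

The design point the Framework turns on is simply that the opt-out must travel with each request: a partner that never learns of the opt-out cannot limit its own use of the data.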

IAB is accepting public comments on the Framework until Tuesday, November 5, 2019. Comments should be emailed to privacy@iab.com. The draft Framework and draft technical specifications for the Framework can be accessed here.
Continue Reading We’re Sorry, Your Service (Provider) Is Limited: The IAB CCPA Compliance Framework

The latest chapter in the ongoing EU cookies saga has come in the form of a ruling by the Court of Justice of the European Union (CJEU) in the Planet49 case. The CJEU ruled that:

(i) implied consent is no longer sufficient; website operators must obtain active consent from users, and such consent cannot be obtained by means of pre-ticked boxes; and

(ii) consent will be sufficiently informed only if an average user can understand what the cookies do and how they function.

The outcome of the case – while pivotal – does not come as a surprise considering the cookie developments in the EU over the past few years.

In 2003, when the current Privacy and Electronic Communications Directive (ePrivacy Directive) came into effect, cookies and similar technologies were not as advanced as they are now and did not process users’ personal information with the same complexity. Sixteen years later, cookies and similar technologies have become an indispensable part of almost every business. The amount of useful detail that companies learn about their users’ interests and internet behavior through such technologies is vast and seemingly unlimited. As you would expect with such rapid technological development, the EU data protection authorities (DPAs) have caught on that these technologies are a data goldmine.
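In practical terms, Planet49 means a cookie banner must default to “off” and may set non-essential cookies only after the user takes an affirmative action. The snippet below is a minimal sketch of that pattern, assuming a hypothetical banner with two cookie categories; the markup, names, and cookie lifetimes are illustrative, not prescribed by the ruling.

```typescript
// Minimal sketch of an active-consent cookie banner (hypothetical names).
// Non-essential cookies are set only after the user affirmatively opts in;
// nothing is pre-ticked and no consent is inferred from continued browsing.

type CookieCategory = "analytics" | "advertising";

const consent: Record<CookieCategory, boolean> = {
  analytics: false,   // default: no consent
  advertising: false, // default: no consent
};

function renderBanner(container: HTMLElement): void {
  container.innerHTML = `
    <p>We use cookies for analytics and advertising. Choose which to allow:</p>
    <label><input type="checkbox" id="analytics"> Analytics cookies</label>
    <label><input type="checkbox" id="advertising"> Advertising cookies</label>
    <button id="save">Save choices</button>
  `; // checkboxes are unchecked by default, per Planet49

  container.querySelector<HTMLButtonElement>("#save")!.addEventListener("click", () => {
    (["analytics", "advertising"] as CookieCategory[]).forEach((category) => {
      const box = container.querySelector<HTMLInputElement>(`#${category}`)!;
      consent[category] = box.checked;
      if (box.checked) {
        // Only now may the corresponding cookies be set.
        document.cookie = `${category}_allowed=1; max-age=31536000; path=/`;
      }
    });
  });
}
```

The second requirement, informed consent, is addressed through the banner text itself, which should explain in plain terms what each category of cookies does and how long they persist.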
Continue Reading Cookies: A Coming-of-Age Story

In a landmark ruling, the European Court of Justice (ECJ)—Europe’s highest court—dealt Google a clear win by placing a territorial limit on the “right to be forgotten” in the EU. The court’s holding in Google v. Commission nationale de l’informatique et des libertés (CNIL) clarifies that a search engine operator that is obligated to honor an individual’s request for erasure by “de-referencing” links to his or her personal data (i.e., removing links to web pages containing that personal data from search results) is only required, under the GDPR, to de-reference results on its EU domains (e.g., google.fr in France and google.it in Italy), and not on all of its domains globally.

However, in the same ruling, the Court also stated that the GDPR applies to Google’s data processing on all of its domains globally (by virtue of such processing comprising “a single act of processing”). Therefore, an EU Member State’s supervisory authority and courts are free to treat the ECJ’s EU-wide de-referencing requirement as a “floor” and go one step further, requiring a search engine to implement the right to be forgotten on all of its domains worldwide, including those outside the EU.

Background – The Right to Be Forgotten

The right to be forgotten—codified at Article 17 of the GDPR—grants individuals the right to obtain erasure of their personal data without undue delay where, for example, the data are no longer necessary for the purposes for which they were collected or processed. However, the right is not unlimited; exceptions apply if the processing is deemed necessary for the exercise of freedom of expression, compliance with a legal obligation, public interests such as public health, scientific or historical research, or the establishment or defense of legal claims.
Continue Reading Forget Me…or Not: Europe’s High Court Limits Territorial Reach of Right to Be Forgotten, But Not of GDPR

In just over a week, on October 1, 2019, key amendments to Nevada’s online privacy law will take effect. We previously detailed the amendments here. In brief:

  • Consumers have the right to opt out of the sale of their personal information. The law gives Nevada consumers the right to request that website operators refrain from selling their personal information.

A recent decision from the Ninth Circuit Court of Appeals in a dispute between LinkedIn and hiQ Labs has spotlighted the thorny legal issues involved in unauthorized web scraping of data from public websites. While some may interpret the LinkedIn decision as greenlighting such activity, this would be a mistake. On close review of the decision, and in light of other decisions that have held unauthorized web scrapers liable, the conduct remains vulnerable to legal challenge.

hiQ and LinkedIn

Founded in 2012, hiQ is a data analytics company that uses automated bots to scrape information from LinkedIn’s website. hiQ targets the information that users have made public for all to see in their LinkedIn profiles. hiQ pays LinkedIn nothing for the data, which it combines with its own predictive algorithm to yield “people analytics” that it then sells to clients.

In May 2017, LinkedIn sent a cease-and-desist letter to hiQ demanding that it stop accessing and copying data from LinkedIn’s servers. LinkedIn also implemented technical measures to prevent hiQ from accessing the site, which hiQ circumvented.
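The decision does not describe LinkedIn’s countermeasures in technical detail, but measures of this kind commonly include denying requests from blocked IP addresses or from clients whose User-Agent strings match known automation tools. The Express middleware below is a generic, hypothetical illustration of such a block, not LinkedIn’s actual implementation.

```typescript
import express, { NextFunction, Request, Response } from "express";

// Hypothetical illustration of a common anti-scraping measure: reject requests
// from a blocklist of IP addresses or from clients whose User-Agent matches
// well-known automation tools.
const blockedIps = new Set<string>(["203.0.113.10"]); // example address (RFC 5737 range)
const botUserAgentPattern = /(scrapy|python-requests|headlesschrome)/i;

function blockScrapers(req: Request, res: Response, next: NextFunction): void {
  const userAgent = String(req.headers["user-agent"] ?? "");
  if (blockedIps.has(req.ip ?? "") || botUserAgentPattern.test(userAgent)) {
    res.status(403).send("Automated access is not permitted.");
    return;
  }
  next();
}

const app = express();
app.use(blockScrapers);
app.get("/profiles/:id", (_req, res) => {
  res.send("public profile page");
});
app.listen(3000);
```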

Shortly thereafter, with its entire business model under threat, hiQ filed suit in the United States District Court for the Northern District of California seeking injunctive relief and a declaration that LinkedIn had no right to prevent it from accessing public LinkedIn member profiles.
Continue Reading Ninth Circuit’s LinkedIn Decision Does Not Greenlight the Unauthorized Web Scraping of Public Websites

Last week, the Federal Trade Commission made clear that child-directed parts of an otherwise general audience service will subject the operator of the service to the Children’s Online Privacy Protection Act (COPPA).

Just six months after the FTC’s record-setting settlement against TikTok, the FTC announced a $170 million fine against Google and its subsidiary YouTube to settle allegations that YouTube had collected personal information from children without first obtaining parental consent, in violation of the FTC’s rule implementing COPPA. This $170 million fine—$136 million to the FTC and $34 million to the New York Attorney General, with whom the FTC brought the enforcement action—dwarfs the $5.7 million levied against TikTok earlier this year. It is by far the largest amount that the FTC has obtained in a COPPA case since Congress enacted the law in 1998. The settlement puts operators of general-audience websites on notice that they are not automatically excluded from COPPA’s coverage: they are required to comply with COPPA if particular parts of their websites or content (including content uploaded by others) are directed to children under age 13.


Continue Reading The Company Who Cried “General Audience”: Google and YouTube to Pay $170 Million for Alleged COPPA Violations

Advancements in technology appear to have spurred the Federal Trade Commission to initiate a review of its rule promulgated pursuant to the Children’s Online Privacy Protection Act (the “COPPA Rule” or “Rule”) four years ahead of schedule. Last week, the FTC published a Federal Register notice seeking comments on the Rule. Although the FTC typically reviews a rule only once every 10 years and the last COPPA Rule review ended in 2013, the Commission voted unanimously (5-0) to seek comments ahead of its next scheduled review. The Commission cited the education technology sector, voice-enabled connected devices, and general-audience platforms hosting third-party, child-directed content as developments warranting reexamination of the Rule at this time.

Background

The COPPA Rule, which first went into effect in 2000, generally requires operators of online services to obtain verifiable parental consent before collecting personal information from children under the age of 13. In 2013, the FTC amended the COPPA Rule to address changes in the way children use and access the internet, including the increased use of mobile devices and social networking. Those amendments expanded the definition of “personal information” to include persistent identifiers that track online activity, geolocation information, photos, videos, and audio recordings. The new review could result in similarly significant amendments.
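For operators, the practical consequence is that tracking that relies on persistent identifiers cannot be switched on for child-directed content until verifiable parental consent is in place. The sketch below illustrates that gating logic under stated assumptions; the child-directed flag, consent store, and SDK calls are hypothetical stand-ins, not mechanics prescribed by the Rule.

```typescript
// Hypothetical sketch: gate persistent-identifier tracking behind verifiable
// parental consent for child-directed content, reflecting the COPPA Rule's
// general consent-before-collection requirement. Names and APIs are illustrative.

interface ContentItem {
  id: string;
  childDirected: boolean; // e.g., flagged by the uploader or by classification
}

// Stand-in for an operator's record of verifiable parental consent.
const parentalConsentGranted = new Set<string>(); // keyed by account ID

function mayUsePersistentIdentifiers(content: ContentItem, accountId?: string): boolean {
  if (!content.childDirected) {
    return true; // general-audience content: the consent requirement is not triggered
  }
  // Child-directed content: persistent identifiers (cookies, device IDs, etc.)
  // may be used for tracking only if verifiable parental consent is on file.
  return accountId !== undefined && parentalConsentGranted.has(accountId);
}

function serveContent(content: ContentItem, accountId?: string): void {
  if (mayUsePersistentIdentifiers(content, accountId)) {
    initBehavioralAdsSdk();   // tracking permitted
  } else {
    initContextualAdsOnly();  // no persistent identifiers, no behavioral ads
  }
}

// Placeholders for whatever ad/analytics SDKs the operator actually uses.
function initBehavioralAdsSdk(): void {}
function initContextualAdsOnly(): void {}
```

(The Rule also contains a narrow exception for persistent identifiers used solely to support a service’s internal operations, which a real implementation would need to account for.)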

Questions for Public Comment

In addition to standard questions about the effectiveness of the COPPA Rule and whether it should be retained, eliminated, or modified, the FTC is seeking comment on all major provisions of the Rule, including its definitions, notice and parental consent requirements, exceptions, and security requirements.
Continue Reading Back to School Early: FTC Seeks Comments to COPPA Rule Ahead of Schedule

The French data protection authority, the CNIL, continues to fine organizations for failing to adopt what the CNIL considers to be fundamental data security measures. In May 2019, the CNIL imposed a EUR 400,000 fine on a French real estate company for failing to implement basic authentication measures on a server and for retaining information too long. This is the CNIL’s second fine under the EU General Data Protection Regulation 2016/679 (GDPR), after the one against Google. The decision follows numerous pre-GDPR fines imposed by the CNIL for failure to meet security standards, and shows that data security remains a high enforcement priority for the CNIL.

Background

French real estate company Sergic operated a website where individuals could upload information about themselves for their property rental applications. Responding to a complaint from an applicant, the CNIL investigated Sergic in September 2018 after it appeared that applicants’ documents were freely accessible without authentication (by modifying a value in the website URL). The CNIL confirmed the vulnerability and found that almost 300,000 documents were accessible in a master file containing information such as individuals’ government-issued IDs, Social Security numbers, marriage and death certificates, divorce judgments, and tax, bank and rental statements. The CNIL also discovered that Sergic had been informed of the vulnerability back in March 2018 but did not fix it until September 2018.
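The flaw at issue is a classic insecure direct object reference: because the server never checked who was requesting a document, changing an identifier in the URL was enough to retrieve someone else’s file. The sketch below shows the kind of server-side authentication and ownership check that prevents this; the routes, data model, and header-based session handling are hypothetical.

```typescript
import express, { Request, Response } from "express";

// Hypothetical sketch of the server-side checks missing in an insecure direct
// object reference flaw: authenticate the requester, then verify the requested
// document actually belongs to them before returning it.

interface RentalDocument {
  id: string;
  ownerId: string;  // applicant who uploaded the document
  contents: Buffer;
}

const documents = new Map<string, RentalDocument>(); // stand-in for a database

const app = express();

app.get("/documents/:id", (req: Request, res: Response) => {
  // 1. Authentication: the request must come from a logged-in applicant.
  //    (Session handling is assumed; a real app would use auth middleware.)
  const currentUserId = req.header("x-authenticated-user"); // illustrative only
  if (!currentUserId) {
    res.status(401).send("Authentication required.");
    return;
  }

  // 2. Authorization: the document must belong to the requester, so that
  //    changing the id in the URL cannot expose someone else's file.
  const doc = documents.get(req.params.id);
  if (!doc || doc.ownerId !== currentUserId) {
    res.status(404).send("Not found.");
    return;
  }

  res.type("application/pdf").send(doc.contents);
});

app.listen(3000);
```

The CNIL’s criticism was not that any particular technology was required, but that such a basic check was missing and that the company took roughly six months to add it after being told of the problem.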

Continue Reading The CNIL Strikes Again – Mind Your Security

Nevada has just joined California as the second state to give consumers the right to opt out of the “sale” of their personal information. Senate Bill 220, which was signed into law on May 29, 2019, is scheduled to take effect on October 1, 2019, three months before its precursor, the California Consumer Privacy Act (the CCPA), takes effect. The opt-out right is one of several changes made to Nevada’s existing online privacy law, which requires operators of commercial websites and other online services to post a privacy policy. In addition to the new opt-out right, the revised law exempts certain financial institutions, HIPAA-covered entities, and motor vehicle businesses from its requirements.
Continue Reading Nevada Enacts CCPA-Style Opt-Out Right for Consumers—but Similarities Are Few