Last week, the Federal Trade Commission made clear that child-directed parts of an otherwise general-audience service will subject the operator of that service to the Children’s Online Privacy Protection Act (COPPA).

Just six months after the FTC’s record-setting settlement against TikTok, the FTC announced a $170 million fine against Google and its subsidiary YouTube to settle allegations that YouTube had collected personal information from children without first obtaining parental consent, in violation of the FTC’s rule implementing COPPA. This $170 million fine—$136 million to the FTC and $34 million to the New York Attorney General, with whom the FTC brought the enforcement action—dwarfs the $5.7 million levied against TikTok earlier this year. It is by far the largest amount that the FTC has obtained in a COPPA case since Congress enacted the law in 1998. The settlement puts operators of general-audience websites on notice that they are not automatically excluded from COPPA’s coverage: they are required to comply with COPPA if particular parts of their websites or content (including content uploaded by others) are directed to children under age 13.

Continue Reading The Company Who Cried “General Audience”: Google and YouTube to Pay $170 Million for Alleged COPPA Violations

A recent Second Circuit decision makes clear that the safe harbor that social media and other Internet companies enjoy under Section 230 of the Communications Decency Act applies to a wide variety of claims.

When you think about the Section 230 safe harbor, don’t just think of defamation or similar state-law claims. Consider whether the claim—be it federal, state, local, or foreign—seeks to hold a party that publishes third-party content on the Internet responsible for publishing that content. If, once the claim is stripped down, that is its crux, the safe harbor should apply (absent a few statutory exclusions discussed below). The safe harbor should apply even if the party uses its discretion as a publisher in deciding how best to target its audience or to display the information provided by third parties.

In 2016, Facebook was sued by the estates of four U.S. citizens who died in terrorist attacks in Israel and by a fifth victim who narrowly survived such an attack but was grievously injured. The plaintiffs claimed that Facebook should be held liable under the federal Anti-Terrorism Act and the Justice Against Sponsors of Terrorism Act, which provide a private right of action against those who aid and abet acts of international terrorism, conspire in furtherance of acts of terrorism, or provide material support to terrorist groups. The plaintiffs also asserted claims arising under Israeli law. Continue Reading CDA Section 230 Immunizes Platform From Liability for Friend and Content Suggestion Algorithms

It is likely no surprise to regular readers of Socially Aware that posting content to social media can, in some cases, generate significant income. But those who make their living on social media may find their livelihood threatened if they fail to comply with the law and with the relevant platform’s terms of use.

For example, we often see trouble arise when social media users fail to follow the Federal Trade Commission’s disclosure rules in connection with receiving compensation for a promotional post, or when users purchase followers—a practice that violates most social media platforms’ terms of use and may even be illegal. As we have noted previously, the social media platform, not the user, sets the rules. If your business model is built on a social media platform, you have to play by the platform’s rules.

Earning an honest living is what Instagram user “Ben” (the pseudonym assigned to him by MarketWatch) claims to have been doing when he was taking in approximately $4,000 per month by operating and curating several accounts containing memes originally created by third parties. (For those who have somehow managed to avoid this ubiquitous Internet phenomenon, Wikipedia describes a meme as a “piece of media that spreads, often . . . for humorous purposes, from person to person via the Internet.” The article at this link contains some examples.) Continue Reading The Meme Generation: Social Media Platforms Address Content Curation

Advancements in technology appear to have spurred the Federal Trade Commission to initiate a review of its rule promulgated pursuant to the Children’s Online Privacy Protection Act (the “COPPA Rule” or “Rule”) four years ahead of schedule. Last week, the FTC published a Federal Register notice seeking comments on the Rule. Although the FTC typically reviews a rule only once every 10 years and the last COPPA Rule review ended in 2013, the Commission voted unanimously (5-0) to seek comments ahead of its next scheduled review. The Commission cited the education technology sector, voice-enabled connected devices, and general-audience platforms hosting third-party, child-directed content as developments warranting reexamination of the Rule at this time.

Background

The COPPA Rule, which first went into effect in 2000, generally requires operators of online services to obtain verifiable parental consent before collecting personal information from children under the age of 13.  In 2013, the FTC amended the COPPA Rule to address changes in the way children use and access the internet, including through the increased use of mobile devices and social networking.  Its amendments included the expansion of the definition of “personal information” to include persistent identifiers that track online activity, geolocation information, photos, videos, and audio recordings. The new review could result in similarly significant amendments.

Questions for Public Comment

In addition to standard questions about the effectiveness of the COPPA Rule and whether it should be retained, eliminated, or modified, the FTC is seeking comment on all major provisions of the Rule, including its definitions, notice and parental consent requirements, exceptions, and security requirements. Continue Reading Back to School Early: FTC Seeks Comments to COPPA Rule Ahead of Schedule

A federal district court dismissed a case against supermodel Gigi Hadid for posting to Instagram a photo of herself that was taken by a paparazzo. The reason for the court’s decision was simple: the party claiming copyright ownership of the photo had failed to register it with the U.S. Copyright Office, a prerequisite to filing an infringement suit against alleged infringers.

Had the plaintiff complied with the copyright registration process and required the court to reach a substantive decision, the court’s opinion, as the Hollywood Reporter wrote, would have had to clarify a celebrity’s “right to control how others profit from [that celebrity’s] likeness” and address a “battle that involves a copyright law written before the dawn of the internet, before legislators could imagine social phenomena like Instagram’s billion users and hundreds of millions of daily photo uploads.”

The facts of Hadid’s case are common; celebrities are routinely sued by paparazzi for posting photos of themselves to social media. In this suit, an independent photo agency claimed that Hadid violated its copyright in a photo of herself when she posted the picture to social media, even though Hadid had arguably contributed to the image by smiling for the photo, selecting the outfit she is wearing in it, and even cropping the photo before posting.

The suit had the potential to test legal theories, such as the “fair use” doctrine, that could protect celebrities from copyright infringement liability for posting paparazzi-taken photos of themselves to social media. Although those theories weren’t tested this time around, “it’s an imminent fight that could spark the type of legal rethinking needed when the old rules fail to accommodate new realities.”

A federal district court in Illinois recently held in Anand v. Heath that a digital marketing company could not force a user to arbitrate because a “Continue” button on its website did not provide clear notice that clicking the button constituted assent to the hyperlinked terms and conditions that contained the arbitration provision.

As we have noted previously, website operators who wish to enforce their online terms against users will have a higher likelihood of success if they do two things. First, the website should display the terms to users in a conspicuous fashion. Second, and applicable here, the website should affirmatively and unambiguously require users to assent to the terms. Anand demonstrates that online agreements risk unenforceability when the terms are presented in a manner that does not make clear to users that they are agreeing to be bound.

The website www.retailproductzone.com offers users free gift cards in exchange for their responses to surveys and for their consent to be contacted for marketing purposes. Reward Zone USA LLC, a subsidiary of Fluent Inc., maintains the website. In June 2017, plaintiff Narantuya Anand registered on www.retailproductzone.com and completed a survey to receive a free gift card. According to Anand, she then received several unwanted telemarketing voicemails and text messages.  Continue Reading Court Holds that Arbitration Clause in “Hybridwrap” Terms Is Unenforceable

As we noted in our recent post on the Second Circuit case Herrick v. Grindr, LLC, Section 230 of the Communications Decency Act (CDA) continues to provide immunity to online intermediaries from liability for user content, despite pressure from courts and legislatures seeking to chip away at this safe harbor. The D.C. Circuit case Marshall’s Locksmith Service Inc. v. Google, LLC serves as another example of Section 230’s resiliency.

In Marshall’s Locksmith, the D.C. Circuit affirmed the dismissal of claims brought by 14 locksmith companies against search engine operators Google, Microsoft and Yahoo! for allegedly conspiring to allow “scam locksmiths” to inundate the online search results page in order to extract additional advertising revenue.

The scam locksmiths at issue published websites targeting heavily populated locations around the country, providing either a fictitious address or no address at all and falsely claiming to be local businesses in order to trick potential customers. The plaintiffs asserted various federal and state law claims for false advertising, conspiracy and fraud against the search engine operators based on the search engines’ activities in connection with the scam locksmiths’ websites. Continue Reading D.C. Circuit Holds that Section 230 Locks Out Locksmiths

The French data protection authority, the CNIL, continues to fine organizations for failing to adopt what the CNIL considers to be fundamental data security measures. In May 2019, the CNIL imposed a EUR 400,000 fine on a French real estate company for failing to have basic authentication measures on a server and for retaining personal data for too long. This is only the second fine the CNIL has imposed under the EU General Data Protection Regulation 2016/679 (GDPR), following the one against Google. The decision also follows many pre-GDPR fines imposed by the CNIL for failure to meet security standards, and shows that data security continues to be a high enforcement priority for the CNIL.

Background

French real estate company Sergic operated a website where individuals could upload information about themselves for their property rental applications. Responding to a complaint by an applicant, the CNIL investigated Sergic in September 2018, as it appeared that applicants’ documents were freely accessible without authentication simply by modifying a value in the website’s URL. The CNIL confirmed the vulnerability and found that almost 300,000 documents were accessible in a master file containing information such as individuals’ government-issued IDs, Social Security numbers, marriage and death certificates, divorce judgments, and tax, bank and rental statements. The CNIL also discovered that Sergic had been informed of the vulnerability back in March 2018 but did not fix it until September 2018.
Continue Reading The CNIL Strikes Again – Mind Your Security
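
For readers curious about the technical flaw at issue, what the CNIL described is what security practitioners commonly call an insecure direct object reference: the server returned whatever document corresponded to the identifier in the URL, without checking whether the requester was entitled to see it. The Python sketch below is purely illustrative and is not based on Sergic’s actual systems; the data model, function names, and IDs are hypothetical. It contrasts the flawed pattern with a version that performs a server-side ownership check before returning a file.

# Illustrative sketch only; not based on Sergic's actual code.
# The data model, function names, and IDs are hypothetical.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: int
    owner_id: int
    path: str

# Hypothetical in-memory store standing in for the application's database.
DOCUMENTS = {
    1: Document(1, owner_id=101, path="/files/id_card_101.pdf"),
    2: Document(2, owner_id=202, path="/files/tax_statement_202.pdf"),
}

def fetch_document_insecure(doc_id: int) -> str:
    """Flawed pattern: returns any document whose ID appears in the URL,
    with no check on who is asking (an insecure direct object reference)."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        raise LookupError("not found")
    return doc.path

def fetch_document_checked(doc_id: int, requesting_user_id: int) -> str:
    """Safer pattern: serve the file only if the authenticated requester
    owns the document; otherwise behave as if it does not exist."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc.owner_id != requesting_user_id:
        raise LookupError("not found")
    return doc.path

if __name__ == "__main__":
    # User 101 can retrieve their own document...
    print(fetch_document_checked(1, requesting_user_id=101))
    # ...but simply changing the ID no longer exposes another applicant's file.
    try:
        fetch_document_checked(2, requesting_user_id=101)
    except LookupError:
        print("access denied to document 2")

According to the CNIL’s findings described above, no such check (and no authentication at all) guarded the documents, even for months after Sergic was alerted to the problem.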

In a recent ruling in Murphy v. Twitter, a California Superior Court held that Section 230 of the Communications Decency Act shielded Twitter from liability for suspending and then banning a user’s account for violating the platform’s policies. As we have previously noted, Section 230 has come under pressure in recent years from both courts and legislatures. But we have also examined other cases demonstrating Section 230’s staying power. The ruling in Murphy again shows that, despite the challenges facing Section 230, the statute continues to serve its broader purpose of protecting social media platforms from liability for the actions of their users while allowing those platforms to monitor and moderate their services.

From January to mid-October 2018, Meghan Murphy posted a number of tweets that misgendered and criticized transgender Twitter users. After first temporarily suspending her account, Twitter ultimately banned her from the platform for violating its Hateful Conduct Policy. Twitter had amended this policy in late October 2018 to specifically include targeted abuse and misgendering of transgender people. Continue Reading California Court Finds Section 230 Protects Decision to Suspend and Ban Twitter Account

As we have frequently noted on Socially Aware, Section 230 of the Communications Decency Act protects social media sites and other online platforms from liability for user-generated content. Sometimes referred to as “the law that gave us the modern Internet,” Section 230 has provided robust immunity for website operators since it was enacted in 1996. As we have also written previously, however, the historically broad Section 230 immunity has come under pressure in recent years, with both courts and legislatures chipping away at this important safe harbor.

Now, some lawmakers are proposing legislation to narrow the protections that Section 230 affords to website owners. They assert that changes to the section are necessary to protect Internet users from dangers such as sex trafficking and the doctored videos known as “deep fakes.”

The House Intelligence Committee Hearing

Recently, a low-tech fraudulent video that made House Speaker Nancy Pelosi’s speech appear slurred was widely shared on social media, inspiring Hany Farid, a computer-science professor and digital-forensics expert at the University of California, Berkeley, to tell The Washington Post that “this type of low-tech fake shows that there is a larger threat of misinformation campaigns—too many of us are willing to believe the worst in people that we disagree with.” Continue Reading Legislators Propose Narrowing § 230’s Protections