First we had social media platforms, but recently a variety of “anti-social” media platforms have emerged—well, anti-social in a sense. For years, social media platforms have encouraged (or even, in some cases, required) us to use our real identities, with the aim of building friendships and networks in the online world. But these new social

The latest issue of our Socially Aware newsletter is now available here.

In this issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we analyze a groundbreaking FTC complaint alleging deceptive practices online that could turn website Terms of Use into federal law; we summarize

Earlier this year, the French consumer association UFC-Que Choisir initiated proceedings before the Paris District Court against Google Inc., Facebook Inc. and Twitter Inc., accusing these companies of using confusing and unlawful online privacy policies and terms of use agreements in the French versions of their social media platforms; in particular, the consumer association argued that these online policies and agreements provide the companies with too much leeway to collect and share user data.

In a press release published (in French) on its website, UFC-Que Choisir explains that the three Internet companies ignored a letter that the group had delivered to them in June 2013, containing recommendations on how to modify their online policies and agreements. The group sought to press the companies to modify their practices as part of a consumer campaign entitled “Je garde la main sur mes données” (or, in English, “I keep my hand on my data”).

According to the press release, the companies’ refusal to address UFC-Que Choisir’s concerns prompted it to initiate court proceedings. The group has requested that the court suppress or modify a “myriad of contentious clauses,” and alleged that one company had included 180 such “contentious clauses” in its user agreement.

The group has also invited French consumers to sign a petition calling for rapid adoption of the EU Data Protection Reform, which would replace the current Directive on data protection with a Regulation having direct effect in all 28 EU Member States. UFC-Que Choisir published two possibly NSFW videos depicting a man and a woman being stripped bare while posting to their Google Plus, Facebook and Twitter accounts. A message associated with each video states: “Sur les réseaux sociaux, vous êtes vite à poil” (or, in English, “On social networks, you are quickly stripped bare”).
Continue Reading French Consumer Association Takes on Internet Giants

The European Court of Justice (ECJ) has issued a surprising decision against Google, one with significant implications for global companies.

On May 13, 2014, the ECJ issued a ruling that did not follow the rationale or conclusions of its Advocate General, but instead sided with the Spanish data protection authority (DPA) and found that:

  • Individuals have the right to request that a search engine provider make content that was legitimately published on websites unsearchable by name where the personal information published is inadequate, irrelevant or no longer relevant;
  • Google’s search function made Google a data controller within the meaning of the Data Protection Directive 95/46, even though Google did not control the data appearing on the webpages of third-party publishers;
  • Spanish law applied because Google Inc. processed data closely related to Google Spain’s selling of advertising space, even though Google Spain itself did not process any of the data. In so holding, the court departed from earlier decisions, reasoning that the services were targeted at the Spanish market and that such a broad application was required for the Directive to be effective.

The ruling will have significant implications for search engines, social media operators and businesses with operations in Europe generally. While the much debated “right to be forgotten” is strengthened, the decision may open the floodgates for people living in the 28 EU countries to demand that Google and other search engine operators remove links from search results. The problem is that the ECJ describes a broad range of data that may be erased. Not only must incorrect or unlawful data be erased, but also data that is “inadequate, irrelevant, or no longer relevant,” as well as data that is “excessive or not kept up to date” in relation to the purposes for which it was processed. It is left to companies to decide when data falls into these categories.

In that context, the ruling will likely create new costs for companies and could prompt thousands of individual complaints. What is more, companies operating search engines for users in the EU will face the difficult task of assessing, for each complaint, whether the rights of the individual prevail over the interest of the public. Internet search engines with operations in the EU will have to handle requests from individuals seeking the deletion of search results that link to pages containing their personal data.

That said, the scope of the ruling is limited to name searches. While search engines will have to deactivate name searches, the data will remain available through other keyword searches. The ECJ did not impose new requirements on the content of webpages, in an effort to preserve freedom of expression and, more particularly, press freedom. Even so, the ruling will result in a great deal of legally published information being available only to a limited audience.

Below we set out the facts of the case and the most significant implications of the decision, and address its possible consequences on all companies operating search engines.
Continue Reading European Court of Justice Strengthens Right to Be Forgotten

Snapchat’s recent settlement with the Federal Trade Commission (FTC) generally provides a comprehensive but not groundbreaking roadmap to the FTC’s privacy and data security expectations in the mobile environment under Section 5 of the FTC Act, with two very notable exceptions:

  1. It now appears that companies are required to follow researchers’ blogs and other writings to see if there are any privacy or data security vulnerabilities, and to act on any such information promptly; and
  2. It also appears that the FTC expects companies to be aware of all third parties who have technology that can interact with an app, and to make sure that when consumers engage in any such interaction, all of the company’s privacy and data security representations remain true. If the FTC continues down this path, it will create unsustainable new burdens on app developers, many of which have very few resources to begin with. Furthermore, if this is the new standard, there is no reason it should be limited to the app environment—analytically, this would lead to a rule of general application.

THE BASIC ALLEGED MISREPRESENTATION

The Snapchat app became very popular because of its branding as an “ephemeral” mobile messaging service. Among other things, Snapchat prominently represented to its users, in its privacy policy and an FAQ, among other places, that the “snaps” (i.e., messages) users sent would “disappea[r] forever” after 10 seconds (or less). However, according to the FTC’s complaint, in addition to other problems with the app’s privacy and security features, it was much too easy to capture these supposedly ephemeral messages, making the company’s claims false and misleading in violation of Section 5. And since the company’s representations were not consistent with the app’s practices, now it’s the FTC that won’t be disappearing any time soon.
Continue Reading Snap Judgment: FTC Alleges Snapchat Did Not Keep Its Privacy and Security Promises, But Suggests Broad New Duty in the Process

In a much anticipated decision in the class action In re Hulu Privacy Litigation, U.S. Magistrate Judge Laurel Beeler of the U.S. District Court for the Northern District of California has shed new light on the meaning of “personally identifiable information” (PII) under the Video Privacy Protection Act (VPPA). This has important implications for

The Federal Trade Commission’s (FTC) announcement that it had filed a complaint against Jerk, LLC and its websites, including “jerk.com” (“Jerk”), looks at first glance like a run-of-the-mill FTC Section 5 enforcement action involving allegedly deceptive practices online. But hidden in the facts of Jerk’s alleged misbehavior is a potentially significant expansion of the FTC’s use of its deception authority.

According to the FTC’s complaint, Jerk allegedly led consumers to believe that the profiles on its websites were created by other users of the website. The company also allegedly sold “memberships” for $30 a month that supposedly included features that would enable consumers to alter or delete their profiles, or to dispute false information in the profiles. Jerk also charged consumers a $25 fee to email Jerk’s customer service department, according to the FTC’s complaint.

The FTC alleges that Jerk created between 73.4 million and 81.6 million unique consumer profiles primarily using information such as names and photos pulled from Facebook through application programming interfaces, or APIs. The complaint states that “[d]evelopers that use the Facebook platform must agree to Facebook’s policies,” such as obtaining users’ explicit consent to share certain Facebook data and deleting information obtained from Facebook upon a consumer’s request.
Continue Reading Jerked Around? Did the FTC’s “Jerk.com” Complaint Just Turn API Terms Into Federal Law?

A 2013 CareerBuilder survey of hiring managers and human resource professionals reports that more than two in five companies use social networking sites to research job candidates. This interest in social networking does not end when the candidate is hired: to the contrary, companies are seeking to leverage the personal social media networks of their existing employees, as well as to inspect personal social media in workplace investigations.

As employer social media practices continue to evolve, individuals and privacy advocacy groups have grown increasingly concerned about employers intruding upon applicants’ or employees’ privacy by viewing restricted access social media accounts. A dozen states already have passed special laws restricting employer access to personal social media accounts of applicants and employees (“state social media laws”), and similar legislation is pending in at least 28 states. Federal legislation is also under discussion.

These state social media laws restrict an employer’s ability to access personal social media accounts of applicants or employees, to ask an employee to “friend” a supervisor or other employer representative and to inspect employees’ personal social media. They also have broader implications for common practices such as applicant screening and workplace investigations, as discussed below.
Continue Reading Employer Access to Employee Social Media: Applicant Screening, ‘Friend’ Requests and Workplace Investigations

Our global privacy + data security group’s Data Protection Masterclass Webinar series is turning the spotlight on social media marketing and policies in January.

Please join Socially Aware contributors Christine Lyon and Karin Retzer, along with Ann Bevitt in our London office for a webinar that will examine the laws and regulations in the