Two bills designed to facilitate the removal of minors’ personal information from social networking sites are currently under consideration in the California State Assembly, after being approved in the upper house of the state’s legislature, the Senate, in early 2013. The first of the two bills, S.B. 501, would require a “social networking Internet Web site” to remove, within 96 hours of receiving a registered user’s request, any of that user’s personal identifying information that is accessible online. The site would also be required to remove the personal identifying information of a user who is under the age of 18 upon request of the user’s parent or guardian. The second bill, S.B. 568, would require an “Internet Web site” to remove, upon request of a user under age 18, any content that user posted on the site.
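To get a feel for what the 96-hour requirement could mean operationally, consider the following minimal sketch of a removal-request queue with a hard compliance deadline. The class names, fields, and helper function are hypothetical illustrations, not anything prescribed by S.B. 501.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    # S.B. 501's removal window for a registered user's request (assumption:
    # measured from receipt of the request).
    REMOVAL_WINDOW = timedelta(hours=96)

    @dataclass
    class RemovalRequest:
        """A request to remove a user's personal identifying information."""
        user_id: str
        from_parent_or_guardian: bool  # the bill also covers parent/guardian requests for minors
        received_at: datetime = field(default_factory=datetime.utcnow)

        @property
        def deadline(self) -> datetime:
            return self.received_at + REMOVAL_WINDOW

    def process_queue(queue, remove_pii):
        """Work through pending requests, flagging any past the statutory window."""
        for request in sorted(queue, key=lambda r: r.deadline):
            if datetime.utcnow() > request.deadline:
                # Past the 96-hour window: a compliance failure to escalate.
                print(f"OVERDUE: {request.user_id} (deadline was {request.deadline})")
            remove_pii(request.user_id)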

Web site operators, whether they consider themselves to be in the social networking space or not, should remain alert to any forthcoming guidance from state agencies on the language contained in each of these bills. For instance, S.B. 501, as currently drafted, defines a “social networking” site as one that “allows an individual to construct a public or partly public profile within a bounded system, articulate a list of other users with whom the individual shares a connection, and view and traverse his or her list of connections . . . .” On its face, this definition would cover not only the likes of Facebook and Twitter, but also a host of other sites that primarily offer services such as e-commerce, gaming, or blogging, and additionally provide their users the ability to maintain profiles and interact with one another.

Furthermore, those who use social networking sites should be aware that S.B. 568 is not the Internet equivalent of an “undo” function for ill-advised content uploads. The bill expressly provides that site operators need only remove a minor’s original posting, and not content that “remains visible because a third party has copied the posting or reposted the content.” Therefore, anything uploaded to sites that facilitate rapid dissemination through “sharing” or “re-tweeting” is likely there to stay.

If S.B. 568 is passed by the Assembly, site operators will have until January 1, 2015, to develop the infrastructure necessary to ensure compliance. However, there is no such grace period currently written into S.B. 501, so companies may benefit from reviewing prior instances of social networking sites being required to rapidly implement new privacy policies in response to enforcement actions and changing laws. In one noteworthy episode in late 2011, Facebook was audited by Ireland’s Office of the Data Protection Commissioner (DPC) in response to complaints over the site’s retention of data that users believed they had deleted. Guided by DPC recommendations, Facebook rolled out a series of 45 privacy-related policy and operational changes over the following year, including changes involving whether and how long user data would be retained.

Lastly, companies should understand these two bills in the context of an expanding body of online privacy laws being enacted at both the state and federal levels, and in key foreign jurisdictions. One question likely to be addressed in coming years is whether laws such as S.B. 501 and 568, as well as similar legislation passed in other states—for example, Maine’s Act to Prevent Predatory Marketing Practices against Minors—are preempted by the federal Children’s Online Privacy Protection Act (COPPA), which contains broad language barring state-level imposition of liability “in connection with an activity” discussed in COPPA and that is inconsistent with COPPA’s mandates. Even if these state laws are found to be preempted, however, social networking companies should nonetheless prepare themselves to adapt to an evolving regulatory landscape in the area of privacy protection, as negotiations proceed in the European Union over a new General Data Protection Regulation that would likewise require the removal of users’ data upon request—and levy fines of up to two percent of global revenue for failure to comply.

The Federal Trade Commission (FTC) announced a potentially groundbreaking settlement with the social networking app Path and released an important new staff report on Mobile Privacy Disclosures late last week.

The FTC’s Settlement with Path suggests a new standard may be on the near-term horizon: out-of-policy, just-in-time notice and express consent for the collection of data that is not obvious to consumers in context. The FTC has long encouraged heightened notice and consent prior to the collection and use of sensitive data, such as health and financial information. This settlement, however, requires such notice and consent for the collection and use of information that is not inherently sensitive, but that, from the Commission’s perspective, at least, might surprise consumers based on the context of the collection. Only time will tell, but historically Order provisions like this have tended to become cemented as FTC common law. Moreover, although the Children’s Online Privacy Protection Act (COPPA) portions of the settlement do not break new ground, they do serve as a potent—and expensive—reminder that the FTC is highly focused on kids’ privacy online, particularly in the mobile space.

The FTC’s Report reinforces this sentiment by encouraging all the major players in the mobile ecosystem—including app developers, ad networks, and trade associations—to increase the transparency of the mobile ecosystem through clear, accessible disclosures about information collection and sharing at appropriate times.

With the explosive growth of social media, consumers increasingly expect to be able to interact online with the companies from which they buy goods and services. As a result, financial institutions have begun to explore the use of social media, both to strengthen relationships with existing customers and to attract new ones. Financial institutions, however, have proceeded with extreme caution in using social media, in large part due to uncertainty as to the application of financial laws and regulations to social media and, to the extent they are applicable, how a financial institution can comply.

In response to industry requests for guidance on the use of social media, on January 23, 2013, the Federal Financial Institutions Examination Council (FFIEC) requested public comment on proposed guidance (“Proposed Guidance”) for financial institutions relating to the use of social media. The Proposed Guidance is intended to help financial institutions understand potential risks associated with the use of social media and to communicate the expectations of the agencies that make up the FFIEC for how financial institutions should manage these risks. The Proposed Guidance, however, largely does not address how a financial institution may comply with any particular requirement when using social media.

The following provides an overview of the Proposed Guidance. Comments on the Proposed Guidance must be submitted to the FFIEC by March 25, 2013.

Background on the FFIEC

The FFIEC is a formal interagency body that is authorized to prescribe uniform principles, standards and report forms for the examination of financial institutions by the federal banking agencies, the National Credit Union Administration (NCUA) and the Bureau of Consumer Financial Protection (CFPB) (collectively, the “Agencies”). Historically, banks were the primary focus of FFIEC supervisory guidance; however, the Dodd-Frank Act expanded the membership of the FFIEC to include not only the federal banking agencies and the NCUA, but also the CFPB. As a result, FFIEC guidance now extends to any person supervised by the CFPB, including many types of non-bank financial institutions, such as mortgage brokers, payday lenders, consumer reporting agencies and debt collectors.

The Proposed Guidance

The Proposed Guidance is intended to help financial institutions understand potential risks associated with their use of social media, including compliance, reputation and operational risks, and to communicate the Agencies’ expectations for how financial institutions should manage these risks. Although the Proposed Guidance clarifies that, if finalized, it would not impose additional obligations on financial institutions, the Agencies each intend to issue any final guidance as supervisory guidance to the institutions that they supervise. As a result, financial institutions subject to the Agencies’ supervisory authority will be expected to use the guidance in their efforts to ensure that their risk management practices adequately address the risks associated with their use of social media, including those outlined in the finalized guidance.

“Social Media” Defined

The Proposed Guidance casts a wide net in defining “social media” as any “form of interactive online communication in which users can generate and share content through text, images, audio, and/or video.” From the Agencies’ perspective, it is social media’s interactive nature that distinguishes it from other online media. The Proposed Guidance includes the following non-exhaustive examples of media that the Agencies believe to fall within the definition:

  • micro-blogging sites (e.g., Facebook and Twitter);
  • forums, blogs, customer review websites and bulletin boards (e.g., Yelp);
  • photo and video sites (e.g., Flickr and YouTube);
  • professional networking sites (e.g., LinkedIn);
  • virtual worlds (e.g., Second Life); and
  • social games (e.g., FarmVille).

Risk Management Programs

A cornerstone of the Proposed Guidance is the expectation that a financial institution will maintain a risk management program through which it identifies, measures, monitors and controls risks related to its use of social media. The Proposed Guidance provides that a financial institution’s risk management program should include the following seven components:

  • A governance structure with clear roles and responsibilities, in which the institution’s board or senior management directs how the use of social media contributes to the institution’s strategic goals and establishes controls and ongoing risk assessments.
  • Policies and procedures regarding the use and monitoring of social media and compliance with applicable consumer protection laws.
  • An employee training program regarding the institution’s policies and procedures for official, work-related use of social media, and potentially for other uses of social media, including defining impermissible activities.
  • An oversight process for monitoring information posted to proprietary social media sites administered by, or on behalf of, the financial institution.
  • A due diligence process for selecting and managing third-party service provider relationships in connection with social media.
  • Audit and compliance functions to ensure ongoing compliance with internal policies and applicable law.
  • Parameters for reporting to the institution’s board or senior management that will enable periodic evaluations of the social media program.

As in other areas of financial law and regulation, the expectation would be that the size and complexity of a financial institution’s risk management program would be commensurate with the breadth of the institution’s involvement in social media. For example, a financial institution that relies heavily on social media should have a more detailed program than a financial institution that uses social media only in a limited manner. Nonetheless, the Proposed Guidance indicates that a financial institution that does not use social media should still be prepared to address the potential for negative comments or complaints related to the institution that may arise within social media and also to provide guidance for employee use of social media.

Risk Areas Generally

The majority of the Proposed Guidance focuses on identifying potential risks related to a financial institution’s use of social media, including risk of harm to consumers. In particular, the Proposed Guidance identifies potential risks within three broad categories: (1) compliance and legal risk; (2) reputational risk; and (3) operational risk. While the Proposed Guidance catalogs the many risks presented by the use of social media, the focus is on the risks associated with compliance with consumer protection requirements. Nonetheless, the lengthy identification of risk areas would put financial institutions on notice of the broad scope of their responsibilities with respect to the use of social media.

Compliance and Legal Risk Areas

Compliance and legal risk relates to the risks associated with the failure to comply with laws, rules, regulations, prescribed practices, internal policies and procedures, and ethical standards and the related exposure to enforcement actions and/or private rights of action. The Proposed Guidance cautions that these risks are “particularly pertinent” for an emerging medium like social media where a financial institution’s policies and procedures may not have kept pace with changes in the marketplace.

Although a financial institution would be expected to ensure that it periodically evaluates and controls its use of social media to ensure compliance with all applicable legal obligations, the Proposed Guidance identifies more than 15 federal laws under which a financial institution may be exposed to compliance and legal risk. These laws are broken down into five general categories: (1) privacy; (2) deposit and lending products; (3) payment systems; (4) anti-money laundering; and (5) community reinvestment. Of note, none of these laws includes any exception regarding the use of social media. As a result, the Proposed Guidance cautions that, to the extent a financial institution uses social media to engage in a covered activity (e.g., advertising a credit product), it would be required to comply with any applicable legal requirement that may relate to that covered activity.

We highlight below certain compliance risks identified in the Proposed Guidance that may be relevant to many financial institutions:

Privacy

  • A financial institution using social media should clearly disclose its privacy policies where required by the Gramm-Leach-Bliley Act.
  • A financial institution maintaining its own social media site should ensure that it maintains and follows policies restricting access to the site to users 13 or older in a manner consistent with the Children’s Online Privacy Protection Act.
  • A financial institution should consider whether any unsolicited communication sent to consumers via social media complies with the limitations of the CAN-SPAM Act and the Telephone Consumer Protection Act.

Deposit and Lending Products

  • A lender should ensure that its use of social media does not violate the Equal Credit Opportunity Act prohibition on making statements in advertising that would discourage, on a prohibited basis, a reasonable person from applying for credit.
  • A lender that advertises credit products in any form of social media communication should ensure that it does so in a manner that complies with Regulation Z’s advertising requirements.
  • A debt collector must comply with Fair Debt Collection Practices Act limitations when conducting covered activities through social media, including, for example, ensuring that any social media communication does not disclose the existence of a debt or harass or embarrass consumers about their debts (e.g., a debt collector writing about a debt on a Facebook wall).

Payment Systems

  • A financial institution using social media to facilitate an electronic fund transfer for a consumer should consider whether it is required by Regulation E to, for example, provide any required disclosures to the consumer.

Anti-Money Laundering

  • Financial institutions should be aware of emerging areas of Bank Secrecy Act and anti-money laundering risk in connection with social media, including, for example, the fact that virtual world Internet games and digital currencies present a high risk for money laundering and terrorist financing and should be monitored accordingly.

Community Reinvestment

  • A depository institution subject to the Community Reinvestment Act should ensure that its policies and procedures for its own social media properties address the appropriate monitoring of public comments.

Reputational Risk Areas

For purposes of the Proposed Guidance, reputational risk relates to the risks arising from negative public opinion. A financial institution engaged in social media activities would be expected to be sensitive to and properly manage the reputational risks that may arise from its social media activities. The Proposed Guidance provides a number of considerations for financial institutions related to reputational risk in the context of social media use, including that a financial institution should:

  • have appropriate policies in place to monitor and address in a timely manner the fraudulent use of its brand, such as through phishing or spoofing attacks;
  • have procedures to address risks associated with members of the public posting confidential or sensitive information (e.g., an account number) on the institution’s social media page or site;
  • weigh the risks and the benefits of using a third party to conduct social media activities, including, for example, the ability of a financial institution to control content on a site owned or administered by a third party; and
  • consider the feasibility of monitoring question and complaint forums on social media sites to ensure that customer inquiries, complaints or comments are addressed in a timely and appropriate manner.

Operational Risk Areas

For purposes of the Proposed Guidance, operational risk relates to the risk of loss resulting from inadequate or failed processes, people or systems. These include the risks posed by a financial institution’s use of information technology, including social media. In light of the vulnerability of social media platforms, the Proposed Guidance indicates that a financial institution should ensure that its internal controls designed to protect its information technology systems and to safeguard customer information from malicious software adequately address social media usage. And, in a related point, a financial institution’s incident response program should extend to security incidents involving social media.

 *          *          *          *

If the FFIEC finalizes the Proposed Guidance, financial institutions should expect that the Agencies will independently issue the finalized guidance as supervisory guidance to the institutions that they supervise. In such a case, financial institutions will be expected to use the guidance as part of their efforts to address the risks associated with the use of social media and to ensure that their risk management programs provide effective oversight and controls related to the use of social media. Until final guidance is in place, financial institutions should remain cognizant of the extent of their social media usage, the risks associated with that use, and whether existing controls address the types of risks identified in the Proposed Guidance. Finally, financial institutions may also wish to consider whether they will provide comments to the FFIEC on the Proposed Guidance, including, for example, identifying any technological or other impediments to compliance with otherwise applicable law when using social media.

2012 was a momentous year for social media law. We’ve combed through the court decisions, the legislative initiatives, the regulatory actions and the corporate trends to identify what we believe to be the ten most significant social media law developments of the past year–here they are, in no particular order:

Bland v. Roberts – A Facebook “like” is not constitutionally protected speech

Former employees of the Hampton Sheriff’s Office in Virginia sued Sheriff B.J. Roberts, claiming that they were fired for having supported an opposing candidate in a local election. Two of the plaintiffs had “liked” the opposing candidate’s Facebook page, which they claimed was an act of constitutionally protected speech. A federal district court in Virginia, however, ruled that a Facebook “like” “…is insufficient speech to merit constitutional protection”; according to the court, “liking” involves no actual statement, and constitutionally protected speech could not be inferred from “one click of a button.”

This case explored the increasingly-important intersection of free speech and social media, with the court finding that a “like” was insufficient to warrant constitutional protection. The decision has provoked much criticism, and it will be interesting to see whether other courts will follow the Bland court’s lead or take a different approach.

New York v. Harris – Twitter required to turn over user’s information and tweets

In early 2012, the New York City District Attorney’s Office subpoenaed Twitter to produce information and tweets related to the account of Malcolm Harris, an Occupy Wall Street protester who was arrested while protesting on the Brooklyn Bridge. Harris first sought to quash the subpoena, but the court denied the motion, finding that Harris had no proprietary interest in the tweets and therefore did not have standing to quash the subpoena. Twitter then filed a motion to quash, but the court also denied its motion, finding that Harris had no reasonable expectation of privacy in his tweets, and that, for the majority of the information sought, no search warrant was required.

This case set an important precedent for production of information related to social media accounts in criminal suits. Under the Harris court’s ruling, in certain circumstances, a criminal defendant has no ability to challenge a subpoena that seeks certain social media account information and posts.

The National Labor Relations Board (NLRB) issued its third guidance document on workplace social media policies

The NLRB issued guidance regarding its interpretation of the National Labor Relations Act (NLRA) and its application to employer social media policies. In its guidance document, the NLRB stated that certain types of provisions should not be included in social media policies, including:

  • prohibitions on disclosure of confidential information where there are no carve-outs for discussion of an employer’s labor policies and its treatment of employees;
  • prohibitions on disclosures of an individual’s personal information via social media where such prohibitions could be construed as limiting an employee’s ability to discuss wages and working conditions;
  • discouragements of “friending” and sending unsolicited messages to one’s co-workers; and
  • prohibitions on comments regarding pending legal matters to the degree such prohibitions might restrict employees from discussing potential claims against their employer.

The NLRB’s third guidance document illustrates the growing importance of social media policies in the workplace. With social media becoming an ever-increasing means of expression, employers must take care to craft social media policies that do not hinder their employees’ rights. If your company has not updated its social media policy in the past year, it is likely to be outdated.

Fteja v. Facebook, Inc. and Twitter, Inc. v. Skootle Corp. – Courts ruled that the forum selection clauses in Facebook’s and Twitter’s terms of service are enforceable

In the Fteja case, a New York federal court held that a forum selection clause contained in Facebook’s Statement of Rights and Responsibilities (its “Terms”) was enforceable. Facebook sought to transfer a suit filed against it from a New York federal court to one in Northern California, citing the forum selection clause in the Terms. The court found that the plaintiff’s clicking of the “I accept” button when registering for Facebook constituted his assent to the Terms even though he may not have actually reviewed the Terms, which were made available via hyperlink during registration.

In the Skootle case, Twitter brought suit in the Northern District of California against various defendants for their spamming activities on Twitter’s service. One defendant, Garland Harris, who was a resident of Florida, brought a motion to dismiss, claiming lack of personal jurisdiction and improper venue. The court denied Harris’s motion, finding that the forum selection clause in Twitter’s terms of service applied. The court, however, specifically noted that it was not finding that forum selection clauses in “clickwrap” agreements are generally enforceable, but rather “only that on the allegations in this case, it is not unreasonable to enforce the clause here.”

Fteja and Skootle highlight that potentially burdensome provisions in online agreements may be enforceable even as to consumers; in both cases, a consumer seeking to pursue or defend a claim against a social media platform provider was required to do so in the provider’s forum. Both consumers and businesses need to be mindful of what they are agreeing to when signing up for online services.

Six states passed legislation regarding employers’ access to employee/applicant social media accounts

California, Delaware, Illinois, Maryland, Michigan and New Jersey enacted legislation that prohibits an employer from requesting or requiring an employee or applicant to disclose a user name or password for his or her personal social media account.

Such legislation will likely become more prevalent in 2013; Texas has a similar proposed bill, and California has a proposed bill that would expand its current protections for private employees to also include public employees.

Facebook goes public

Facebook raised over $16 billion in its initial public offering, which was one of the most highly anticipated IPOs in recent history and the largest tech IPO in U.S. history. Facebook’s share price peaked at $45 during its first day of trading but, after a rocky first few months, fell to approximately $18—sparking shareholder lawsuits. By the end of 2012, however, Facebook had rebounded to over $26 per share.

Facebook’s IPO was not only a big event for Facebook and its investors, but also for other social media services and technology startups generally. Many viewed, and continue to view, Facebook’s success or failure as a bellwether for the viability of social media and technology startup valuations.

Employer-employee litigation over ownership of social media accounts

2012 saw the settlement of one case, and continued litigation in two other cases, all involving the ownership of business-related social media accounts maintained by current or former employees.

In the settled case of PhoneDog LLC v. Noah Kravitz, the employer sued its former employee after he left the company but retained a Twitter account (and its 17,000 followers) that he had maintained while working for the employer. The terms of the settlement are confidential, but news reports indicated that the settlement allowed the employee to keep the account and its followers.

In two other pending cases, Eagle v. Edcomm and Maremont v. Susan Fredman Design Group LTD, social media accounts originally created by employees were later altered or used by the employer without the employees’ consent.

These cases are reminders that, with the growing prevalence of business-related social media, employers need to create clear policies regarding the treatment of work-related social media accounts.

California’s Attorney General went after companies whose mobile apps allegedly did not have adequate privacy policies

Starting in late October 2012, California’s Attorney General gave notice to developers of approximately 100 mobile apps that they were in violation of California’s Online Privacy Protection Act (OPPA), a law that, among other things, requires developers of mobile apps that collect personally identifiable information to “conspicuously post” a privacy policy. Then, in December 2012, California’s Attorney General filed its first suit under OPPA against Delta, for failing to have a privacy policy that specifically mentioned one of its mobile apps and for failing to have a privacy policy that was sufficiently accessible to consumers of that app.

Privacy policies for mobile applications continue to become more important as the use of apps becomes more widespread. California’s OPPA has led the charge, but other states and the federal government may follow. In September, for instance, Representative Ed Markey of Massachusetts introduced The Mobile Device Privacy Act in the U.S. House of Representatives, which would impose notice requirements similar in some ways to those of California’s OPPA.

Changes to Instagram’s online terms of service and privacy policy created user backlash

In mid-December 2012, Instagram released an updated version of its online terms of service and privacy policy (collectively, “Terms”). The updated Terms would have allowed Instagram to use a user’s likeness and photographs in advertisements without compensation. There was a strong backlash from users over the updated Terms, which ultimately led to Instagram apologizing to its users for the advertisement-related changes, and reverting to its previous language regarding advertisements.

Instagram’s changes to its Terms, and subsequent reversal, are reminders of how monetizing social media services is often a difficult balancing act. Although social media services need to figure out how they can be profitable, they also need to pay attention to their users’ concerns.

The defeat of the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA)

Two bills, SOPA and PIPA—which were introduced in the U.S. House of Representatives and U.S. Senate, respectively, in late 2011—would have given additional tools to the U.S. Attorney General and intellectual property rights holders to combat online intellectual property infringement. A strong outcry, however, arose against the bills from various Internet, technology and social media companies. The opponents of the bills, who claimed the proposed legislation threatened free speech and innovation, engaged in various protests that included “blacking out” websites for a day.  These protests ultimately resulted in the defeat of these bills in January 2012.

The opposition to and subsequent defeat of SOPA and PIPA demonstrated the power of Internet and social media services to shape the national debate and sway lawmakers. With prominent social media services such as Facebook, YouTube, Twitter, LinkedIn and Tumblr opposed to the bills, significant public and, ultimately, congressional opposition followed.  Now that we’ve witnessed the power that these services wield when acting in unison, it will be interesting to see what issues unite them in the future.

On December 19, 2012, the Federal Trade Commission (“Commission”) announced long-awaited amendments to its rule implementing the Children’s Online Privacy Protection Act (“Rule”). The changes—which take effect on July 1, 2013—are significant. They alter the scope and obligations of the Rule in a number of ways. We discuss the revisions in greater detail below.

  • The Commission revised the Rule’s definition of “personal information” to include more types of data that trigger the Rule’s notice, consent, and other obligations. These include persistent identifiers when used for online behavioral advertising and other purposes not necessary to support the internal operations of the site or online service.
  • The Commission expanded the Rule’s coverage to third-party services—such as ad networks and social plug-ins—that collect personal information through a site or service that is subject to COPPA. The host site or service is strictly liable for the third party’s compliance, while the third party must comply only if it has actual knowledge that it is collecting personal information through a child-directed site or from a child.
  • The Commission streamlined the content of the parental notice and simplified the privacy policy.
  • The Commission retained the “email plus” method of obtaining parental consent. It also added new methods of obtaining consent and established a process for pre-clearance of other consent mechanisms.
  • The Commission imposed new data security pass-through requirements, as well as data retention obligations.
  • The Commission revised the Rule to permit certain sites that are “directed to children” to comply only with respect to those users who self-identify as under 13.

On September 5, 2012, the Federal Trade Commission (FTC) published a brief guide to assist developers of mobile applications, both large and small, in complying with truth-in-advertising, privacy, and data security principles. In publishing this advice, the FTC makes clear that its Section 5 enforcement powers against unfair or deceptive acts or practices apply in the mobile app arena, and with equal force to large and small developers.

The FTC’s guidance briefly lays out the practices developers should follow in order to avoid such enforcement, thereby suggesting that more enforcement is on the horizon. Indeed, it has already started: last August the FTC reached a settlement with W3 Innovations, LLC for alleged violations of the COPPA rule in its apps directed at children.

The guide, called “Marketing Your Mobile App: Get it Right from the Start,” explains general consumer protection principles, and applies them to the context of mobile applications. Although the title of the guide suggests that the advice is primarily about marketing the apps, the FTC also gives advice about the design and implementation of apps.

WHAT IS THIS GUIDE?

This is NOT a new FTC trade regulation carrying the force of law. This is guidance issued by the Commission for how it may apply its Section 5 authority to police deceptive and unfair practices in the app environment. The FTC expects that industry participants will review this guidance and take it into account in developing and advertising their apps.

This guidance is also specifically directed at mobile app developers; it does not relate to the “In Short” Dot-Com Disclosures workshop held on May 30, 2012, which addressed proper disclosure techniques in all online commerce. Guidance arising from that workshop, which is expected to be far more comprehensive, may be released as early as this fall.

WHAT COMPLIANCE STEPS IS THE FTC LOOKING FOR?

Substantiate Your Claims

The FTC advises that app developers advertise their apps truthfully, and explains that “pretty much anything” a company tells a prospective user about what the app can do, expressly or by implication, no matter the context, is an “advertisement” requiring substantiation for claims as they would be interpreted by the average user.

If Disclosures are Necessary, Make them Clearly and Conspicuously

If developers need to make disclosures to users in order to make their advertising claims accurate, the FTC notes, then those disclosures must be clear and conspicuous. Although this does not require specific type or font sizes, the disclosures must be large enough and clear enough that users both see and understand them. This means, according to the FTC, that disclosures cannot be buried behind vague links or in blocks of dense legal prose.

Incorporate Principles of “Privacy by Design” In Developing Apps

The FTC also gives advice to developers on how to avoid enforcement for violations of user privacy. First, it notes that developers should implement “privacy by design,” meaning that they should consider privacy implications from the beginning of the development process. This entails several elements:

  • Incorporate privacy protections into your practices;
  • Limit information collection;
  • Securely store held information;
  • Dispose of information that is no longer needed;
  • Make default privacy settings consistent with user expectations; and
  • Obtain express user agreement for information collection and sharing that is not apparent.
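
As a concrete, purely illustrative example of two of these elements (limiting information collection and disposing of information that is no longer needed), a developer might structure an app's data layer along the following lines. The field names and the 30-day retention period are assumptions made for the sketch, not FTC requirements.

    from datetime import datetime, timedelta

    # Illustrative assumptions: collect only these fields, keep them 30 days.
    ALLOWED_FIELDS = {"username", "email"}       # data minimization
    RETENTION_PERIOD = timedelta(days=30)        # dispose when no longer needed

    def collect_profile(raw_input: dict) -> dict:
        """Keep only the fields the app actually needs, plus a timestamp for disposal."""
        record = {k: v for k, v in raw_input.items() if k in ALLOWED_FIELDS}
        record["collected_at"] = datetime.utcnow()
        return record

    def purge_stale(records: list) -> list:
        """Drop records older than the retention period."""
        cutoff = datetime.utcnow() - RETENTION_PERIOD
        return [r for r in records if r["collected_at"] >= cutoff]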

Incorporate Transparency and Choice into Apps and Honor Users’ Choices

The FTC urges that developers be transparent about their data collection practices, informing users about what information the app collects and with whom that information is shared. Developers should also, according to the FTC, give users choices about what data the app collects, via opt-outs or privacy settings, and give users tools that are easy to locate and use to implement the choices they make.

Importantly, the FTC emphasizes that developers must honor the choices they offer consumers. This includes following through on privacy promises made. This also includes getting affirmative permission from users for material changes to privacy practices—simply editing the privacy policy is not enough, according to the FTC guide.

Apply COPPA Protections Where Appropriate

The FTC notes that there are special rules for dealing with kids’ information. Developers who aim their apps at children under 13, or know that children under 13 are using the app, must clearly explain their information practices and obtain verifiable parental consent before collecting personal information from children. The guide links to further advice for compliance with the Children’s Online Privacy Protection Act (COPPA).

Special Protections for Sensitive Information

Even for adults, the FTC urges developers to get affirmative consent before collecting “sensitive” information, such as medical, financial, or precise location information. For sensitive information, the FTC states that developers must take reasonable steps to ensure that it remains secure. The FTC suggests that developers:

  • Collect only the information needed;
  • Take reasonable precautions against well-known security risks;
  • Limit access to the data to a need-to-know basis; and
  • Dispose of data safely when it is no longer needed.

The FTC notes that these principles apply to all information the app collects, whether actively from the user, or passively in the background. In addition, any contractors that work with the developers should observe the same high security standards.

The Children’s Online Privacy Protection Act of 1998 (“COPPA”), which became effective in April 2000, has long served as the primary regulatory tool of the Federal Trade Commission (the “FTC”) to police online privacy issues concerning children under 13.  The COPPA Rule (the “Rule”), promulgated by the FTC pursuant to COPPA, in general requires the operator of a website or online service that is directed to children or that knowingly collects personal information from children to obtain verifiable parental consent before collecting personal information from a child under the age of 13.  In September 2011, after the Act had been on the books for over a decade, the FTC announced that change was coming and proposed for public comment certain amendments to the Rule, as we explained last year.  After all, when the Act first passed in 1998, Mark Zuckerberg was just 14 years old, and social media giants like Facebook, YouTube and Twitter would not launch until well into the next decade.  Google had just been founded and operated out of a garage in Silicon Valley.  Pets.com was the next big thing.  Change was long overdue.

On August 1, 2012, after reviewing over 350 comments to its proposed amendments, the FTC announced that it was seeking further proposed modifications to the Rule.  So what’s new this time?

Network Advertisers and Other Third-Party Information Collectors Potentially Responsible for COPPA Compliance

Although COPPA applies only to websites or online services, the FTC’s proposed new modifications seek to expressly hold certain third-party plug-in, software download, and advertising networks accountable for COPPA compliance when they collect personal information through a website or online service that they know is child-directed.  Does this mean that such third parties are going to be held strictly liable for COPPA compliance when they are integrated into a website or online service?  No.  Although it considered this option, the FTC instead proposes to apply the Rule only if the third party “knows or has reason to know” it is collecting personal information through a host site or service that is directed to children.  Thus, if credible information that such use is occurring is brought to the attention of a plug-in or ad network, for example, it ignores this information at its peril.

Mixed Approach to Mixed Audience Sites

Historically, the FTC has not treated mixed audience websites that contain content appealing to both children and adults as “directed to children,” given the burden that this can impose on providers and users alike.  Instead, the FTC has charged such websites under COPPA only where they had actual knowledge that they were collecting personal information from children.  The FTC now seeks to codify this approach.  Under its proposed revisions, a website or service that has child-oriented content appealing to a mixed audience, where children under 13 are likely to be over-represented, will not be deemed “directed to children” if the site or service age-screens all users before personal information is collected.  Then, once the site learns which users self-identify as under 13, it must obtain appropriate parental consent before collecting any personal information from those users and otherwise comply with the Rule with respect to them.  Websites or services that knowingly target, or have content likely to attract, children under 13 as their primary audience must still treat all users as children for COPPA compliance purposes.
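
In practical terms, the age-screening approach amounts to a neutral gate that runs before any personal information is collected, routing users who self-identify as under 13 into a parental-consent flow. The sketch below is a hypothetical illustration of that logic, not language drawn from the proposed rule.

    def start_parental_consent_flow() -> str:
        # Placeholder: in practice, notify a parent and obtain verifiable consent
        # before collecting any personal information from this user.
        return "parental-consent"

    def start_standard_registration() -> str:
        # Users 13 or older proceed through the normal registration flow.
        return "standard-registration"

    def handle_registration(birth_year: int, current_year: int) -> str:
        """Age-screen all users before any personal information is collected."""
        if current_year - birth_year < 13:
            return start_parental_consent_flow()
        return start_standard_registration()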

Information collected by “persistent identifiers,” including in connection with behaviorally-targeted ads, counts as “personal information” for COPPA purposes

The FTC announcement also included certain modifications and clarifications to some of its earlier, more controversial 2011 proposals.  Last fall, for instance, the FTC expanded the definition of “personal information” (the collection of which generally triggers a parental consent obligation) to include information collected by “persistent identifiers” that track a device’s use over time and across different platforms.  This expansion met considerable resistance from some quarters because commentators felt that “persistent identifiers” track device use, not personal use, and therefore should not count as collecting “personal information,” but the FTC did not alter its stance.  An exception, however, exists for information collected by persistent identifiers if it is used as support for internal operations.

So what counts as “support for internal operations”?  The FTC now proposes to expressly define those operations as including “site maintenance and analysis, performing network communications, use of persistent identifiers for authenticating users, maintaining user preferences, serving contextual advertisements, and protecting against fraud and theft.”  Thus persistent identifiers can be used for these express purposes without regard to any COPPA compliance consequences.  But for all other uses, COPPA may become an issue.  Use of a persistent identifier for purposes outside of these operations, including for behaviorally-targeted advertising (specifically addressed in the recent commentary) will likely trigger the Rule’s obligations.  Because of this expanded definition, and the fact that age cannot be determined from a persistent identifier, sites directed to children may be well advised to engage in such activities only after first obtaining verifiable parental consent.  In fact, given the breadth of this potential rule, operators of sites wholly unrelated to children should take notice as this change may well portend a broader shift in policy within the FTC toward these issues.
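
One way to visualize the line being drawn is as a whitelist check: a use of a persistent identifier either falls within the enumerated internal-operations purposes or it triggers the Rule. The category labels below paraphrase the FTC's proposed definition; the function itself is a hypothetical sketch.

    # Purposes the FTC proposes to treat as "support for internal operations."
    INTERNAL_OPERATIONS = {
        "site_maintenance_and_analysis",
        "network_communications",
        "user_authentication",
        "maintaining_user_preferences",
        "contextual_advertising",
        "fraud_and_theft_protection",
    }

    def requires_parental_consent(use_of_identifier: str) -> bool:
        """True if a persistent-identifier use falls outside internal operations."""
        return use_of_identifier not in INTERNAL_OPERATIONS

    # Behaviorally targeted advertising is not on the list, so it would
    # trigger the Rule's notice and verifiable parental consent obligations.
    assert requires_parental_consent("behavioral_advertising")
    assert not requires_parental_consent("contextual_advertising")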

The FTC is accepting comments on the proposals until September 10, 2012.  The FTC expects to publish a final Rule this year.  A more detailed explanation of these proposed changes, including analysis of important commentary, is also available.

The Federal Trade Commission (“FTC”) recently released proposed amendments to its rule (“Rule”) implementing the Children’s Online Privacy Protection Act (“COPPA”). The Rule requires the operator of a website or online service to obtain verifiable parental consent before collecting personal information from a child under the age of 13. If adopted as drafted, the revised Rule would not only make it even more difficult for operators to collect information from children online, but it would also sweep into the Rule’s coverage sites and online services that are currently outside of it. Moreover, the proposed changes would codify the erasure of the traditional distinctions between “personal” and “non-personal” information – an outcome that raises issues even for companies that are not subject to COPPA.

Among the most significant changes proposed by the FTC are the elimination of the widely used “email plus” method of obtaining verifiable parental consent and a considerable expansion of the Rule’s definition of “personal information.”

Elimination of the “email plus” method of obtaining consent. The existing Rule has a two-tiered system for obtaining verifiable parental consent:  An operator that uses a child’s information only internally may use the so-called “email plus” mechanism, while more foolproof measures, such as a print, sign, and send back form or a phone call, are required if the operator will disclose the child’s information to third parties. Asserting that “all collections of children’s information merit strong verifiable parental consent,” the FTC has proposed to eliminate the distinction. “Email plus” – currently the most common way of obtaining consent – would no longer be an option.

Expansion of the definition of “personal information.” At the same time that it proposes to make obtaining verifiable parental consent more difficult and costly, the FTC also proposes to extend the Rule’s reach to a far wider swath of information collection practices, by expanding its definition of “personal information.” Perhaps most notably, the FTC would include within the definition a persistent identifier, when it is used for functions other than support for the internal operations of the site or service. “Persistent identifiers” include a customer number held in a cookie, an IP address, a device serial number, and a unique device identifier. In its commentary accompanying the proposed revisions, the FTC explains that consent would not be required when persistent identifiers are used for purposes such as user authentication, improving navigation, maintaining user preferences, serving contextual advertising, and protecting against fraud or theft, as these are functions that support the internal operations of the site or service.

On the other hand, the “personal information” definition would be triggered by – and verifiable parental consent would therefore be required for – other, non-support uses, presumably including online profiling, the delivery of personalized content, behavioral advertising, retargeting, and analytics. This is significant because there is no way to determine age from a persistent identifier – meaning, for instance, that sites directed to children could not deliver personalized content without first obtaining verifiable parental consent. For sites not directed to children but that are still subject to the Rule (because they knowingly collect personal information from children under 13), it is not clear how this restriction would apply in practice. As companies facing similar consent requirements in the EU can attest, obtaining consent prior to the use of a persistent identifier can be a costly and disruptive obligation. The FTC does not provide guidance in its commentary, but the issues are ripe for comment.

The FTC’s proposals reflect its oft-stated position that the line between what has traditionally been considered “personal” and “non-personal” information is increasingly blurred, such that protections historically afforded to personal information should be extended to certain non-personal information as well. If the FTC takes this approach with respect to COPPA, it is logical that it will take a similar approach in all contexts. Therefore, even companies not subject to COPPA are advised to consider the potential ramifications of the proposed changes and to consider submitting comments.  The FTC is accepting comments until December 23, 2011.

10 Million Monsters!
Marking a truly historic social media milestone, Lady Gaga became the first Twitter user with more than 10 million followers.  According to reports, the entertainer noted this achievement with a Tweet saying “10MillionMonsters!  I’m speechless, we did it!  Its an illness how I love you.  Leaving London smiling.”

Netflix Traffic
Reports are that Netflix is now the single largest source of downstream Internet traffic, accounting for more than 20% of such traffic during peak times.  In comparison, YouTube accounts for approximately 10% of downstream traffic.

Google’s Contribution to Economic Activity
According to Google, the search giant’s programs, including AdWords and AdSense, provided $64 billion in economic activity for American companies and non-profits in 2010.  This represents a 15% increase over 2009.  Google’s home state of California is said to have benefitted the most—to the tune of $15 billion.

Facebook.com’s Integration with Microsoft’s Bing
The integration of the Facebook.com site with Microsoft’s Bing search engine, first announced in 2010, has been expanded.  Among other features, Bing will now display more data regarding search results that your Facebook friends have liked and will offer a greater ability to share Bing search results with Facebook friends.

Facebook’s New Photo-Tagging Feature
On a somewhat related note, Facebook announced a new photo-tagging feature that allows users to tag businesses, brands, celebrities and musicians that have their own Facebook pages.  Previously, users could only tag themselves and their friends in photos.

Facebook Planting Negative Stories About Google?
Many have noted that the Facebook/Bing integration presents a challenge to Google’s own search and social networking efforts.  In another indication of the increasingly heated competition between the Internet giants, controversy arose over allegations that Facebook hired a public relations firm to plant negative stories about Google.

Bin Laden’s Death Sets Twitter Records
Osama Bin Laden’s death set new Twitter records, becoming one of the most tweeted events ever.  According to Mashable, Twitter reached more than 5,000 Tweets per second at the beginning and end of President Obama’s speech announcing Bin Laden’s death, with a total of 27,900,000 Tweets over a period of about two and a half hours.

Facebook Class Action
A Brooklyn man has filed a class action lawsuit against Facebook, alleging that the company’s “social ads”—which display the names and images of a user’s friends who have liked a particular brand or ad—use minors’ names and likenesses without the parental consent required under a section of New York’s civil rights law.

Social Widgets as Data Tools
“Social widgets,” those ubiquitous website buttons that allow users to “like” or “share” online articles and other content, also let their makers collect data about the websites people are visiting, potentially raising privacy concerns, according to a study prepared for The Wall Street Journal.

Google News Archive No More?
It has been reported that Google has ceased adding content to its Google News Archives, which provide free access to scanned archives of newspapers.  The existing archive remains accessible, however.

Social Networking for Children
Facebook’s founder, Mark Zuckerberg, announced recently that he would like to make the social networking site available to children, but also recognized the challenges presented by current law, particularly the Children’s Online Privacy Protection Act, which imposes strict rules regarding the collection of personal information from users under the age of thirteen.

Suit Over Bad Yelp Reviews Barred by Anti-SLAPP Statute
A California court recently ordered a dentist who sued Yelp users for defamation over negative reviews to pay $80,000 in attorneys’ fees, after ruling that the dentist’s suit was barred by California’s anti-SLAPP statute.

Facebook and the ADA
A recent Ninth Circuit decision held that Facebook was not liable under the Americans with Disabilities Act when it terminated a user with bipolar disorder for terms of service violations because Facebook’s services do not have a nexus to a physical place of public accommodation that would be necessary to subject it to the ADA.

Morgan Stanley Employees on Twitter
Morgan Stanley Smith Barney has reportedly become the first major brokerage firm to allow its brokers to use Twitter.

“SB 242” Rejected
California’s Senate has rejected a bill (“SB 242”) that would have required social networking sites to hide personal information about users unless users gave their permission to share it.  A coalition of Web companies, including Facebook, Google, Skype, Twitter and Yahoo, had voiced opposition to the bill, arguing in a letter to Senator Ellen Corbett (D., San Leandro), who proposed SB 242, that the proposed statute “gratuitously singles out social networking sites without demonstration of any harm,” and would result in users making uninformed choices by requiring that they select privacy settings before using the sites.