
Socially Aware Blog

The Law and Business of Social Media

Website Operators Await Final Guidance Regarding Compliance With California’s “Do-Not-Track” Disclosure Requirements

Posted in Privacy

Even with the publication of draft “best practices” by the California Attorney General (AG), website operators remain uncertain as to their obligations under the new do-not-track disclosure requirements of the state’s Online Privacy Protection Act (“CalOPPA”), which took effect on January 1, 2014.

The new provisions require privacy policy disclosures with respect to:  (1) a site operator’s tracking of its visitors when they are on third-party sites (if it engages in such tracking) and (2) any “other party’s” tracking of site visitors when they are on third-party sites.

In the first case only, the law requires that the operator disclose how it responds to browser do-not-track signals or other do-not-track choice mechanisms.  It does not impose the same disclosure obligation with respect to “other parties”—rather, it requires only that the operator disclose whether other parties engage in such tracking.
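
For technical context, a browser's do-not-track preference is typically transmitted as a simple "DNT: 1" HTTP request header.  The sketch below, written in Python with the Flask web framework purely for illustration (it is not drawn from the statute or the AG's draft), shows what detecting such a signal can look like; CalOPPA governs only what an operator must disclose about its response, not how it must respond.

# Illustrative sketch only: detecting a browser's do-not-track signal on the server side.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/")
def homepage():
    # The browser sends "DNT: 1" when the user has enabled do-not-track; the header may be absent.
    dnt_requested = request.headers.get("DNT") == "1"
    # Whether and how to honor the signal is the operator's choice; CalOPPA requires
    # only that the privacy policy disclose how the operator responds.
    return jsonify({"do_not_track_requested": dnt_requested})

if __name__ == "__main__":
    app.run()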

During a December 10, 2013 call with industry representatives, consumer advocates and other interested parties, the AG’s office took the position that a service provider is not the same as a site operator but instead should be treated as an “other party” for purposes of the law.  (This position is consistent with the law’s definition of an “operator,” which appears to exclude service providers.)  It follows that the site operator would not have to disclose a choice mechanism with respect to any such “other party.”

As a practical matter, this should be a moot point for an operator that uses third parties that are members of the Network Advertising Initiative and/or Digital Advertising Alliance, as such operator should already be contractually required to disclose how site visitors may opt out of cross-site tracking for online behavioral advertising purposes.  Site operators should keep in mind, however, that CalOPPA’s provisions cover any type of cross-site tracking—which may also include tracking for analytics or other purposes.

On December 20, 2013, the AG’s office circulated a draft of its best practice recommendations for online tracking transparency.  The draft notes that the recommendations are not intended to tell a site operator what disclosures are necessary to comply with CalOPPA.  Rather, they will “in some places offer greater privacy protections than required by . . . law” and are intended to “encourage the development of privacy best practice standards.”  The draft reflects this by recommending disclosures that go beyond those required by the law.  For example, the recommendations:

  • Urge a site operator that does not engage in cross-site tracking to tell its users that it does not engage in such tracking.  The law requires no such affirmative disclosure; and
  • Encourage a site operator that engages in cross-site tracking to both:  (a) disclose how it responds to a browser’s do-not-track signal or similar communication from a site user and (b) provide a link to a program that offers choices in connection with online tracking.  The law requires that such a site operator disclose how it responds to browser do-not-track signals or other mechanisms that provide users with choices (not both), and it permits the operator to comply by providing a link to an online choice program.

The AG accepted comments on its draft until January 6, 2014, and it intends to issue final guidance during the second half of January 2014.

Although site operators need to proceed with great caution, our sense is that the AG’s office is unlikely to bring any actions for violations of the amended statute prior to issuing its final guidance.  If the AG’s office does bring such an action, we suspect that the action would most likely involve a “slam dunk” situation—i.e., where a site operator engages in cross-site tracking but makes absolutely no mention of do-not-track, third parties or an opt-out in its privacy policy.

Socially Aware will provide an update after the AG publishes its final best practice recommendations.

 

FFIEC Issues Final Guidance on Social Media Usage by Financial Institutions

Posted in Financial Institutions

On December 11, 2013, the Federal Financial Institutions Examination Council (FFIEC) issued final guidance for financial institutions relating to their use of social media (the “Guidance”).  With its release, the FFIEC adopts its January 2013 proposed guidance in substantially the same form.  (Socially Aware’s overview of the proposed guidance is available here.)

Financial institutions should expect that the federal banking agencies, Consumer Financial Protection Bureau and National Credit Union Administration (the agencies that comprise the FFIEC) will require supervised institutions to incorporate the Guidance into their efforts to address risks associated with the use of social media and to ensure that institutional risk management programs provide effective oversight and controls related to such use.  As a result, financial institutions should consider the appropriateness of their social media risk management programs and should be cognizant of potential technical compliance traps that could result from the use of social media to interact with consumers about products governed by consumer financial protection laws, such as the Truth in Lending Act.

Changes to the Proposed Guidance

Although adopted in substantially the same form as the proposed guidance, the Guidance does attempt to address some concerns raised by commenters.  For example, the FFIEC clarifies that compliance should not be viewed as a “one-size-fits-all” process and that institutions should tailor their approach based on their size, complexity, activities and third-party relationships.  Additionally, the Guidance clarifies that stand-alone messages sent through traditional email and text channels will not be considered social media.  Nonetheless, the Guidance cautions that the term “social media” will be viewed broadly by the agencies.

While the FFIEC attempted to clarify a financial institution’s obligations with respect to service providers involved in the institution’s social media activities, the Guidance provides limited specific considerations.  For example, the Guidance directs institutions to “perform due diligence appropriate to the risks posed by the prospective service provider” based on an assessment of the third party’s policies, including the frequency with which these policies have changed and the extent of control the financial institution may have over the policies.

Another area where the FFIEC attempted to clarify its expectations is the extent to which a financial institution would be required to monitor consumer communications on Internet sites other than those maintained by the institution (“Outside Sites”).  While the preamble to the Guidance notes that “financial institutions are not expected to” monitor Outside Sites, the Guidance provides that the public nature of social media channels may lead to increased reputational risk, and that compliance considerations may arise if, for example, a consumer raises a dispute through social media.  Further, the Guidance states that institutions are still expected to make risk assessments to determine the appropriate approach to monitoring and responding to communications made on Outside Sites.  The Guidance also continues to state that, based on the risk assessments, institutions will need to consider the need to “monitor question and complaint forums on social media sites” to review and, “when appropriate,” address complaints in a timely manner.

Compliance Considerations

The cornerstone of the Guidance continues to be the expectation that a financial institution will maintain a risk management program through which it identifies, measures, monitors and controls risks related to its use of social media.  The Guidance provides that a financial institution’s risk management program should include the following components:

  • A governance structure so that social media use is directed by the institution’s board of directors or senior management.
  • Policies and procedures regarding the institution’s use of social media, compliance with applicable consumer protection laws and regulations, and methodologies to address risks from online postings, edits, replies and retention.
  • A risk management process for selecting and managing third-party relationships for social media use.
  • An employee training program incorporating the policies and procedures, and informing employees of appropriate work and non-work uses of social media (including defined “impermissible activities”).
  • An oversight process for monitoring information posted to proprietary social media sites administered by the financial institution or contracted third party.
  • Audit and compliance functions to ensure compliance with internal policies and applicable laws, regulations and the Guidance.
  • Parameters for reporting to the institution’s board of directors or senior management to enable periodic evaluation of the effectiveness of the social media program and whether the program is achieving its stated objectives.

Moreover, the Guidance continues by focusing on identifying potential risks related to a financial institution’s use of social media, including risk of harm to consumers.  In particular, the Guidance identifies potential risks within three broad categories: (1) compliance and legal risk; (2) reputational risk; and (3) operational risk.  While the Guidance catalogs the many risks presented by the use of social media, the focus is on the risks associated with compliance with consumer protection requirements, including:

  • Fair Lending Laws:  While it focuses on an institution’s compliance with time frames for adverse action and other notices required by the federal fair lending laws and regulations, the Guidance also highlights possible compliance traps if a financial institution fails to carefully consider whether the institution’s social media use is consistent with applicable law.  For example, the Guidance highlights that, where applicable, the Fair Housing Act would require mortgage lenders who maintain a Facebook page to display the Equal Housing Opportunity Logo.
  • Truth in Lending Act/Regulation Z:  The Guidance highlights that the Regulation Z advertising requirements would apply to relevant advertisements made through social media.  Credit card issuers in particular will be familiar with Regulation Z’s disclosure requirements for advertisements that include trigger terms and reference deferred interest promotions, and should be cognizant of the application of these requirements in social media advertisements.
  • Truth in Savings Act/Regulation DD:  Like the considerations for compliance with Regulation Z, the Guidance highlights that Regulation DD also contains special advertising requirements for use of trigger terms such as “bonus” and “APY,” and further notes that depository institutions can ensure compliance with the federal disclosure requirements by including a link to the additional information required to be provided to the consumer.
  • Deposit Insurance and Share Insurance:  The Guidance reminds institutions that the advertising requirements for deposit insurance and share insurance apply to advertisements and displays made through social media, just as they do to non-social media advertisements and displays.

Now that the FFIEC has finalized its Guidance, financial institutions will need to carefully review their social media policies and practices in light of the Guidance.  Indeed, even companies that are not financial institutions may find that the Guidance reflects emerging best practices for minimizing risk in using social media to promote products and services.

Hot Off the Press – New Issue of the Socially Aware Newsletter

Posted in FTC, Litigation, Online Promotions, Privacy, Wearable Computers

The latest issue of our Socially Aware newsletter is now available here. In this issue, we explore legal concerns raised by Google Glass; we provide an overview of the growing body of case law addressing ownership of business-related social media accounts; we take a look at two circuit court decisions addressing the interplay between social media usage and the First Amendment; we examine the trend toward collaborative consumption and associated legal issues; we discuss an important new decision regarding unilateral modifications to online terms of use; and we highlight an industry warning to website operators who collect data for purposes of online behavioral advertising.  For a free subscription to the newsletter, please email us at sociallyaware@mofo.com.

 

Data Protection Masterclass Webinar: Spotlight on Social Media Marketing and Policies

Posted in Employment Law, Event, Online Promotions, Privacy

Our global privacy + data security group’s Data Protection Masterclass Webinar series is turning the spotlight on social media marketing and policies in January.

Please join Socially Aware contributors Christine Lyon and Karin Retzer, along with Ann Bevitt in our London office, for a webinar that will examine the laws and regulations in the United States and Europe relating to consumer-facing issues that arise from the use of social media in advertising and marketing. This presentation will also address the challenges that employers and employees face as a result of the use of social media in the workplace and in the recruitment process.

Topics Will Include:

  • Privacy issues for social media advertising, blogging and tweeting
  • Data sharing in relation to social plug-ins
  • Data protection requirements for social media market research
  • Targeting and analytics
  • Social media policies
  • Monitoring of social media use, including misuse of social media by employees
  • Use of social media in the application process

Date & Time:

Tuesday, January 21, 2014

4:30 p.m. – 6:00 p.m. GMT
11:30 a.m. – 1:00 p.m. EST
8:30 a.m. – 10:00 a.m. PST

Registration:

To register for this webinar, please click here.

For more information, please contact Kay Burgess at kburgess@mofo.com or +44 20 7920 4067.

German Court Finds 25 Provisions in Google’s Online Terms of Use and Privacy Policy to Be Unenforceable

Posted in Privacy, Terms of Use

In November 2013, the Berlin District Court ruled that all of the 25 provisions in Google’s online terms of use and privacy policy that had been challenged by the German Federation of Consumer Associations (VZBV) are unenforceable.  In reaching its decision, the court found that German law applies to terms of use and privacy policies to the extent they are directed to German consumers.

Under German unfair contract terms legislation, clauses that contradict main elements of German law and unfairly disadvantage consumers are invalid.  In this respect, the court found that the German Federal Data Protection Act and the Telemedia Act constituted key elements of law to be considered in relation to standard terms, and it therefore took these statutes into account even though they apply only to organizations established in Germany or using equipment in Germany.  Google has announced that it will appeal the decision, but, if the judgment is upheld, any online terms of use or privacy policy applicable to German consumers could be challenged under German law and in a German forum.

In the case, Google claimed that the unfair contract terms legislation was not applicable because its terms of use and privacy policy do not constitute contracts and the related Google services had been provided free of charge.  The court disagreed, observing that users were required to consent to these terms upon registration or use, and the services were not for “free” because of the commercial value of the personal data collected by Google and subsequently used for marketing purposes.

Among other clauses, the court found the following provisions in the terms of use to be invalid, many of which are relatively standard provisions in U.S. terms of use:

  • Google’s right to unilaterally terminate its services in the case of any breach of its terms of use or policies without prior notice that would allow users to remedy the breach;
  • Google’s right to monitor content for compliance with its policies;
  • Google’s right to alter its services at its discretion;
  • Google’s right to amend its terms of use without further notice or consent; and
  • The (mutual) limitation of liability for death or bodily injury, or for statutory product liability.

The court also found that Google had not obtained valid consent for the collection, use and sharing of personal data via its consent box (“I agree to the use terms and I have read the privacy policy.”).  German law requires that users be informed as to the specific data to be collected and how such data will be used and shared.  Google’s privacy policy, however, provided insufficient detail and relied on blanket statements to describe its rights, for example:

  • Google’s right to collect information (including device-type information) and location data “relating to the services”;
  • Google’s right to share data with organizations that “Google reasonably believes to have a need to know”;
  • Google’s right to share data in the context of a merger;
  • Google’s right to record phone calls without any specific notice;
  • Google’s right to merge data from different platforms without further notice or consent;
  • Google’s limitations on users’ rights to access data provided to Google; and
  • Google’s right to share data with law enforcement agencies without further notice or consent.

The court also objected to the privacy policy’s broad cookie language, including Google’s statement that only “cookies and other anonymous data” are collected by Google.  Cookie IDs and other tracking information were considered by the court to be personal data in this context.

The court’s judgment can be found (in German) here.

Ownership of Business-Related Social Media Accounts

Posted in Employment Law, IP, Litigation, Online Promotions

Social media platforms have become an increasingly important means for companies to build and manage their brands and to interact with their customers, in many cases eclipsing companies’ traditional “.com” websites. Social media providers typically make their platforms available to users without charge, but companies nevertheless invest significant time and other resources to create and maintain their presences on those providers’ platforms. A company’s social media page or profile and its associated followers, friends and other connections are often considered to be valuable business assets.

But who owns these valuable assets – the company or the individual employee who manages the company’s page or profile? Social media’s inherently interactive nature has created an important role for these individual employees. Such an employee essentially acts as the “voice” of the company and his or her style and personality may be essential to the success and popularity of that company’s social media presence. As a result, the lines between “company brand” and “personal brand” may become blurred over time. And when the company and the individual part ways, that blurring can raise difficult issues, both legal and logistical, regarding the ownership and valuation of business-related social media accounts.

Such issues have arisen in a number of cases recently, several of which we discuss below. Although these cases leave open a number of questions, the message to companies who use social media is loud and clear: it is imperative to proactively establish policies and practices that address ownership and use of business-related social media accounts.

PhoneDog v. Kravitz

A recently settled California case, PhoneDog v. Kravitz, Case No. C 11-03474 (N.D. Cal.), raised a number of interesting issues around the ownership and valuation of social media accounts. The defendant, Noah Kravitz, worked for the plaintiff, PhoneDog, a mobile news and reviews website. While he was employed by PhoneDog, Kravitz used the Twitter handle “@PhoneDog_Noah” to provide product reviews, eventually accumulating 17,000 Twitter followers over a period of approximately four and a half years. Kravitz then left PhoneDog to work for one of its competitors but he maintained control of the Twitter account and changed the account handle to “@noahkravitz.” When Kravitz refused PhoneDog’s request to relinquish the Twitter account that had been previously associated with the “@PhoneDog_Noah” handle, PhoneDog filed a complaint against Kravitz asserting various claims, including trade secret misappropriation, conversion, and intentional and negligent interference with economic advantage.

Kravitz filed a motion to dismiss the complaint based on a number of arguments, including PhoneDog’s inability to establish that it had suffered damages in excess of the $75,000 jurisdictional threshold. Kravitz also disputed PhoneDog’s ownership interest in either the Twitter account or its followers, based on Twitter’s terms of service, which state that Twitter accounts belong to Twitter and not to Twitter users such as PhoneDog. Finally, Kravitz argued that Twitter followers are “human beings who have the discretion to subscribe and/or unsubscribe” to the account and are not PhoneDog’s property, and asserted that “[t]o date, the industry precedent has been that absent an agreement prohibiting any employee from doing so, after an employee leaves an employer, they are free to change their Twitter handle.”

With respect to the amount-in-controversy issue, PhoneDog asserted that Kravitz’s continued use of the “@noahkravitz” handle resulted in at least $340,000 in damages, an amount that was calculated based on the total number of followers, the time during which Kravitz had control over the account, and a purported “industry standard” value of $2.50 per Twitter follower. Kravitz argued that any value attributed to the Twitter account came from his efforts in posting tweets and the followers’ interest in him, not from the account itself. Kravitz also disputed PhoneDog’s purported industry standard value of $2.50 per Twitter follower, and contended that valuation of the account required consideration of a number of factors, including (1) the number of followers, (2) the number of tweets, (3) the content of the tweets, (4) the person publishing the tweets, and (5) the person placing the value on the account.
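
For illustration only, the arithmetic behind PhoneDog’s $340,000 figure appears to be a simple multiplication.  The sketch below reads the $2.50 figure as a per-follower, per-month value and assumes an eight-month period of post-departure control; both readings are our assumptions, chosen because they are consistent with the total alleged, and neither is stated expressly in the passage above.

# Illustrative reconstruction of PhoneDog's claimed damages (assumptions noted above).
followers = 17_000            # followers accumulated on the account
value_per_follower = 2.50     # purported "industry standard," read here as dollars per follower per month
months_of_control = 8         # assumed length of Kravitz's post-departure control of the account

claimed_damages = followers * value_per_follower * months_of_control
print(f"${claimed_damages:,.0f}")  # prints $340,000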

With respect to the ownership issue, PhoneDog claimed that it had an ownership interest in the account based on the license to use and access the account granted to it in the Twitter terms of service, and that it also had an ownership interest in the content posted on the account. PhoneDog also pointed to a purported “intangible property interest” in the Twitter account’s list of followers, which PhoneDog compared to a business customer list. Finally, PhoneDog asserted that, regardless of any ownership interest in the account, PhoneDog was entitled to damages based on Kravitz’s interference with PhoneDog’s access to and use of the account, which (among other things) purportedly affected PhoneDog’s economic relations with its advertisers.

The court determined that the amount-in-controversy issue was intertwined with factual and legal issues raised by PhoneDog’s claims and, therefore, could not be resolved at the motion-to-dismiss stage. Accordingly, the court denied without prejudice Kravitz’s motion to dismiss for lack of subject matter jurisdiction. The court also denied Kravitz’s motion to dismiss PhoneDog’s trade secret and conversion claims, but granted Kravitz’s motion to dismiss PhoneDog’s claims of interference with prospective economic advantage.

The parties subsequently settled the dispute, so, unfortunately, we will never know how the court would have ruled on the variety of interesting issues that the case presented. Interestingly, although the terms of the settlement remain confidential, as of mid-September, Kravitz appears to have kept control of the Twitter account and its attendant followers. It is worth noting that the case might have been more straightforward—and the result more favorable to the company—had PhoneDog established clear policies regarding the ownership of business-related social media accounts.

Ardis Health, LLC et al. v. Nankivell

A New York case, Ardis Health, LLC et al. v. Nankivell, Case No. 11 Civ. 5013 (S.D.N.Y.), more clearly illustrates the fundamental point that companies should proactively establish policies and practices that address the ownership and use of business-related social media accounts.

The plaintiffs in Ardis Health were a group of closely affiliated online marketing companies that develop and market herbal and beauty products. The defendant was a former employee who had held a position at Ardis Health, LLC as a “Video and Social Media Producer.” Following her termination, the defendant refused to turn over to the plaintiffs the login information and passwords for the social media accounts that she had managed for the plaintiffs during her employment. The plaintiffs then filed a lawsuit against the defendant and moved for a preliminary injunction to, among other things, compel her to provide them with that access information.

Fortunately for the plaintiffs, they had required the defendant to execute an agreement at the commencement of her employment that stated in part that all work created or developed by defendant “shall be the sole and exclusive property” of one of the plaintiffs, and that required the defendant to return all confidential information to the company upon request. This employment agreement also stipulated that “actual or threatened breach . . . will cause [the plaintiff] irreparable injury and damage.” On these facts, the court noted that “[i]t is uncontested that plaintiffs own the rights to” the social media account access information that the defendant had refused to provide. Interestingly, the court held that the plaintiffs were likely to prevail on their conversion claim, effectively treating the disputed social media account access information as a form of intangible personal property. The court also determined that plaintiffs were suffering irreparable harm as a result of the defendant’s refusal to turn over that access information. Accordingly, the court granted the plaintiffs’ motion for a preliminary injunction ordering the defendant to turn over the disputed login information and passwords to the plaintiffs.

As far as we can tell from the reported decision in Ardis Health, the defendant’s employment agreement did not expressly address the ownership or use of social media accounts or any related access information. Nonetheless, even the fairly generic work product ownership and confidentiality language included in the defendant’s employment agreement, as noted above, appears to have been an important factor in the favorable outcome for the plaintiffs, which illustrates the advantages of addressing these issues contractually with employees—in advance, naturally. And as discussed below, companies can put themselves in an even stronger position by incorporating more explicit terms concerning social media into their employment agreements.

Eagle v. Morgan and Maremont v. Fredman

Former employers aren’t always the plaintiffs in cases regarding the ownership of business-related social media accounts.  In an interesting twist, two other cases – Eagle v. Morgan, Case No. 11-4303 (E.D. Pa.), and Maremont v. Fredman, Case No. 10 C 7811 (N.D. Ill.) – were brought by employees who alleged that their employers had taken over and started using social media accounts that the employees considered to be personal accounts.

Eagle began as a dispute over an ex-employee’s LinkedIn account and her related LinkedIn connections. The plaintiff, Dr. Linda Eagle, was a founder of the defendant company, Edcomm. Dr. Eagle alleged that, following her termination, Edcomm personnel changed her LinkedIn password and account profile, including by replacing her name and photograph with the name and photo of the company’s new CEO. Among the various claims filed by each party, in pretrial rulings, the court granted Dr. Eagle’s motion to dismiss Edcomm’s trade secret claim and granted Edcomm’s motion for summary judgment on Dr. Eagle’s Computer Fraud and Abuse Act (CFAA) and Lanham Act claims.

Regarding the trade secret claim, the court held that LinkedIn connections did not constitute trade secrets because they were “either generally known in the wider business community or capable of being easily derived from public information.” Regarding her CFAA claims, the court concluded that the damages Dr. Eagle claimed she had suffered – putatively arising from harm to reputation, goodwill and business opportunities – were insufficient to satisfy the “loss” element of a CFAA claim, which requires some relation to “the impairment or damage to a computer or computer system.” Finally, in rejecting the plaintiff’s claim that Edcomm violated the Lanham Act by posting the new CEO’s name and picture on the LinkedIn account previously associated with Dr. Eagle, the court found that Dr. Eagle could not demonstrate that Edcomm’s actions caused a “likelihood of confusion,” as required by the Act.

Eventually, the Eagle case proceeded to trial. The court ultimately held for Dr. Eagle on her claim of unauthorized use of name under the Pennsylvania statute that protects a person’s commercial interest in his or her name or likeness, her claim of invasion of privacy by misappropriation of identity, and her claim of misappropriation of publicity. The court also rejected Edcomm’s counterclaims for misappropriation and unfair competition. Meanwhile, the court held for the defendants on Dr. Eagle’s claims of identity theft, conversion, tortious interference with contract, civil conspiracy, and civil aiding and abetting. Although the court’s decision reveals that Edcomm did have certain policies in place regarding establishment and use of business-related social media accounts by employees, unfortunately for Edcomm, those policies do not appear to have clearly addressed ownership of those accounts or the disposition of those accounts after employees leave the company.

In any event, although Dr. Eagle did prevail on a number of her claims, the court concluded that she was unable to establish that she had suffered any damages. Dr. Eagle put forth a creative damages formula that attributed her total past revenue to business generated by her LinkedIn contacts in order to establish a per contact value, and then used that value to calculate her damages for the period of time when she was unable to access her account. But the court held that Dr. Eagle’s damages request was insufficient for a number of reasons, primarily that she was unable to establish the fact of damages with reasonable certainty. The court also denied Dr. Eagle’s request for punitive damages. Therefore, despite prevailing on a number of her claims, Dr. Eagle’s victory in the case was somewhat pyrrhic.
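
Purely to show the structure of that formula, the sketch below walks through a calculation of that type; every number in it is hypothetical, and none is taken from the case record.

# Hypothetical numbers only: the structure of the per-contact damages theory described above.
total_past_revenue = 1_000_000.0   # revenue attributed to business generated via LinkedIn contacts
years_of_revenue = 4               # period over which that revenue was earned
contacts = 4_000                   # number of LinkedIn connections
months_locked_out = 3              # time during which the account was inaccessible

value_per_contact_per_year = total_past_revenue / years_of_revenue / contacts   # $62.50
claimed_damages = value_per_contact_per_year * contacts * (months_locked_out / 12)
print(round(claimed_damages, 2))   # 62500.0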

In Maremont, the plaintiff, Jill Maremont, was seriously injured in a car accident and had to spend several months rehabilitating away from work. While recovering, Ms. Maremont’s employer, Susan Fredman Design Group, posted and tweeted promotional messages on Ms. Maremont’s personal Facebook and Twitter accounts, where she had developed a large following as a well-known interior designer. Although Ms. Maremont asked her employer to stop posting and tweeting, the defendant continued to do so. Ms. Maremont then brought claims against Susan Fredman Design Group under the Lanham Act, the Illinois Right of Publicity Act, and the Stored Communications Act, as well as a common law right to privacy claim. The parties filed cross-motions for summary judgment, which the court denied with respect to the Lanham Act and Stored Communications Act claims, largely due to lack of evidence on whether or not Ms. Maremont suffered actual damages as a result of her employer’s actions. The court granted Susan Fredman Design Group’s motion for summary judgment with respect to Ms. Maremont’s right of publicity claim, based on the fact that the defendant did not actually impersonate Ms. Maremont when it used her accounts. The court also granted Susan Fredman Design Group’s motion for summary judgment with respect to Ms. Maremont’s right of privacy claim because the “matters discussed in Maremont’s Facebook and Twitter posts were not private and that Maremont did not try to keep any such facts private.”

Proactive Steps

Considering how vital social media accounts are to today’s companies, and given the lack of clear applicable law concerning the ownership of such accounts, companies should take proactive steps to protect these valuable business assets.

For example, companies should consider clearly addressing the ownership of company social media accounts in agreements with their employees, such as employee proprietary information and invention assignment agreements. Agreements like this should state, in part, that all social media accounts that employees register or manage as part of their job duties or using company resources – including all associated account names and handles, pages, profiles, followers and content – are the property of the company, and that all login information and passwords for such accounts are both the property and the confidential information of the company and must be returned to the company upon termination or at any other time upon the company’s request. In general, companies should not permit employees to post under their own names on company social media accounts or use their own names as account names or handles. If particular circumstances require an employee or other individual to post under his or her own name – for example, where the company has engaged a well-known industry expert or commentator to manage the account – the company might want to go a step further and include even more specific contractual provisions that address ownership rights to the account at issue.

In parallel, companies should implement and enforce social media policies that provide employees with clear guidance regarding the appropriate use of business-related social media accounts, including instructions on how to avoid blurring the lines between company and personal accounts. (Keep in mind, however, that social media policies need to be carefully drafted so as not to run afoul of the National Labor Relations Act, state laws restricting employers’ access to employees’ personal social media accounts, or the applicable social media platforms’ terms of use.) Finally, companies should control employee access to company social media accounts and passwords, including by taking steps to prevent individual employees from changing account usernames or passwords without authorization.

FTC Expands Reach on Conspicuousness of Privacy Disclosures in Settlement with Android Flashlight App

Posted in FTC, Privacy

An FTC settlement with a mobile app developer over allegedly deceptive privacy disclosures may seem run-of-the-mill.  After all, the FTC has been settling cases for years with companies whose data collection and use practices are allegedly not consistent with the representations those companies make in their privacy policies.

But the FTC’s Complaint and Order involving Goldenshores Technologies (“Goldenshores”), announced on December 5th, is a particularly noteworthy Section 5 case because the FTC’s theory is that the company’s alleged violation of Section 5 resulted not from an affirmative representation about its app that was allegedly deceptive, but from an alleged material omission, and from an allegation that the disclosures that did exist did not rise to the required level of prominence because they appeared only in the privacy policy and EULA.

These types of allegations and policy determinations have heretofore been limited to spyware, and have crept into online behavioral advertising, but have generally not been part of FTC enforcement actions in other contexts.  This case represents the FTC’s signal to industry that material facts, especially those involving sensitive data, and especially where the facts involve collection, use, or disclosure of data that may surprise ordinary users because it is out of context of the use of the service, must be disclosed not only in a privacy policy, but also outside the privacy policy, clearly and conspicuously, prior to collection of the data.

The App’s Collection and Use of “Sensitive Data”

Goldenshores is the developer of the immensely popular “Brightest Flashlight Free” flashlight app (the “app”) for Android devices.  The FTC Complaint explains that the app can be downloaded from the Google Play application store, amongst other places.  The gravamen of the FTC’s Complaint stems from the allegation that while the app is operating as a flashlight (using the phone’s screen and LED flash for the camera) it is also collecting and transmitting certain information from the mobile device to third parties including ad networks.  This information includes precise geolocation information and persistent device identifiers that can be used to track a user’s location over time.

The app ran into two problems with these alleged data collection and use practices.  First, the FTC alleged that the app did not adequately disclose that information, including geolocation and persistent device identifiers, would be collected and shared with third parties, such as advertising networks.  Second, the FTC alleged that the app did not accurately represent consumers’ choices with regard to the collection, use and sharing of this information.

However, the Complaint does not start out by focusing on these collection and use practices, and the app’s disclosures relating to them.  Instead—and not insignificantly—it starts by describing the app’s promotional page on the Google Play store.  The Complaint notes that this page describes the flashlight app, but “does not make any statements relating to the collection or use of data from users’ mobile devices” (emphasis added).  Similarly, the FTC notes that the general “permission” statements that appear for all Android applications provide notice about the collection of sensitive information, but not about any sharing of sensitive information.  But these issues do not reappear in the FTC’s allegations regarding the actual violations of Section 5 of the FTC Act for deceptive practices.  Thus, it seems safe to assume that the FTC cited the lack of notice prior to download about the use and sharing of sensitive information to signal to app developers and platforms that it expects to see such disclosures.

The App’s Disclosures Regarding Sensitive Data

The FTC’s allegations specifically focus on the disclosures made by the app in its privacy policy and end user license agreement (“EULA”).  In short, the Complaint notes that while the app’s privacy policy discloses that the app collects information relating to “your computer,” it does not specifically disclose: (1) that sensitive information such as precise geolocation is collected; or (2) that it is transmitted to third parties.  Based on this failure to disclose, the FTC alleged that the app violated Section 5 by materially misrepresenting the scope of its data collection and sharing, specifically the collection and sharing of precise geolocation information and persistent device identifiers.

As for the EULA, the Complaint explains that after a user downloads and installs the app, the user is presented with a EULA that must be accepted to use the app.  First, the FTC alleges that the EULA, like the privacy policy, does not accurately and fully disclose the app’s data collection and sharing practices.  Second, the FTC alleges that the EULA also misleads consumers by giving them the option to “refuse” its terms.  As the Complaint puts it, “that choice is illusory.”  The problem is that the app transmits device data, including precise geolocation and the persistent identifier, before the user accepts—or refuses—the terms of the EULA.  As a result, the EULA misrepresented that consumers had the option to “refuse” the collection of this information, because “regardless of whether consumers accept or refuse the terms of the EULA, the Brightest Flashlight App transmits . . . device data as soon as the consumer launches the application…”

New Disclosures Required by the Settlement

For the most part, the Agreement and Consent Order is what we’ve come to expect from the FTC in Section 5 cases relating to data collection and use practices.  Thus, for instance, Goldenshores and any apps it develops, including this Flashlight app, are barred from misrepresenting the manner in which information is collected, used, disclosed or shared.

What makes this Order unique, however, is the specificity the FTC provides with regard to the disclosures Goldenshores must make about the collection and use of precise geolocation information in its apps.  The Order requires a notice that goes significantly beyond the typical boilerplate “just-in-time” opt-in notice that apps typically use to obtain consent for the collection of precise geolocation information.  In this case, the separate out-of-policy just-in-time notice and opt-in consent that the app must provide prior to collecting precise geolocation information must include a disclosure that informs the user: 

(1)  That the application collects and transmits geolocation information;

(2)  How this information may be used;

(3)  Why the application is accessing geolocation information; and

(4)  The identity or specific categories of third parties that receive geolocation information directly or indirectly from the app.
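
To make those four elements concrete, the following is a minimal, hypothetical sketch of how an app developer might assemble and gate such a just-in-time notice; the wording and structure are ours, not language mandated by the Order.

# Hypothetical sketch only: a just-in-time geolocation notice covering the four required elements.
GEO_NOTICE = [
    "This app collects and transmits your precise geolocation.",            # (1) what is collected and transmitted
    "Your geolocation may be used to serve location-based advertising.",    # (2) how the information may be used
    "The app accesses geolocation in order to tailor the ads you see.",     # (3) why the app accesses it
    "Your geolocation is shared with third-party advertising networks.",    # (4) who receives it, directly or indirectly
]

def may_collect_geolocation(notice_shown: bool, user_opted_in: bool) -> bool:
    # Under the Order, collection may begin only after the notice is displayed and the
    # user affirmatively opts in -- not merely upon launching the app.
    return notice_shown and user_opted_in

print(may_collect_geolocation(notice_shown=True, user_opted_in=False))  # False: no consent, no collection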

Conclusion

Thus, what looks at first to be a simple privacy policy FTC deception case is actually rather significant for three reasons.  First, this is about the failure to disclose collection and use practices relating to “sensitive data,” which includes precise geolocation and the device’s unique identifier.  Second, the FTC flagged the lack of disclosures about such collection and use practices in the app store prior to download.  And third, the FTC gave very specific and detailed instructions to the app developer on how it must provide notice and choice about the collection of precise geolocation information, which could perhaps be an indication of where the FTC expects the entire industry to go in the near future.

Two Circuits Address the First Amendment Status of Facebook Activity

Posted in Employment Law, Litigation, Privacy, Supreme Court

Two recent U.S. appellate court decisions have clarified the extent to which the First Amendment protects the social media activities of government employees.  In Gresham v. City of Atlanta, the Court of Appeals for the Eleventh Circuit found that an individual’s First Amendment interest in a Facebook post is reduced when he or she configures the post to be private, while in Bland v. Roberts, the Court of Appeals for the Fourth Circuit held that Facebook “likes” constitute protected speech under the First Amendment.  Although both decisions deal with the rights of government employees in particular, they have relevance beyond that context.

U.S. courts have long held that the government has a greater interest in restricting the speech of its employees than it does in restricting the speech of the citizenry in general.  However, the government’s ability to restrict the speech of its employees is limited by a test the U.S. Supreme Court outlined in Pickering v. Board of Education in 1968.  The test requires that, in order for the employee to maintain a successful First Amendment claim against his or her governmental employer, the employee must, among other things, show that he or she was speaking about a matter of public concern, and that his or her interest in doing so outweighs the government’s interest in providing effective and efficient service to the public.

First Amendment protection for “likes”: Bland v. Roberts.  In August of 2012, we discussed the decision of a District Court in Virginia that a government employee “liking” a Facebook page was insufficient speech to merit constitutional protection.  Deputies of the Hampton Sheriff’s Office alleged that they were terminated because they “liked” the campaign page of a candidate running against their boss, the current sheriff.  While much of the suit dealt with the current sheriff’s claim to qualified immunity and whether or not the deputies held policymaking positions which can be staffed based on political allegiances, the court also dismissed the deputies’ contention that their termination violated their First Amendment right to speak out on a matter of public concern.  The court held that merely “liking” a page “is not the kind of substantive statement that has previously warranted constitutional protection.”  The decision stirred considerable controversy and debate among constitutional scholars and within the social media industry.

On appeal, the Fourth Circuit overturned the lower court’s holding that Facebook “likes” are too insubstantial to merit First Amendment protection.  The court held that “liking” a Facebook page is both pure speech and symbolic speech, and is protected by the First Amendment even with respect to government employees.  The court found that the act of “liking” a Facebook page results in publishing a substantive position on a topic.  The court reasoned that “liking” a political candidate’s campaign page is “the Internet equivalent of displaying a political sign in one’s front yard, which the Supreme Court has held is substantive speech.”  As a result, at least within the political context, “likes” enjoy the same strong First Amendment protection that other political speech does.

First Amendment protection for private posts: Gresham v. City of Atlanta.  The interplay between social media and the First Amendment was also at issue in the Gresham case.  In Gresham, an Atlanta police officer named Maria Gresham became concerned when a suspect she had arrested was taken into a room alone by another officer who turned out to be the suspect’s aunt.  The suspect gave some items to his aunt and they may have spoken.  Officer Gresham felt that this constituted an inappropriate interference with her investigation, and she aired her concerns in a Facebook post that was viewable only by her friends.  In Atlanta, departmental rules for the conduct of police officers prohibit publicly criticizing other officers.  The department received a complaint that Gresham’s post had violated these rules and opened an investigation.  As a result of that investigation, Gresham was passed over for a promotion.  Gresham sued the city, asserting that the department had retaliated against her for engaging in protected First Amendment speech.

The District Court for the Northern District of Georgia found that Gresham’s First Amendment interest in making the post was outweighed by the City of Atlanta’s interest in maintaining good relations among its police officers.  In weighing Gresham’s First Amendment interest in making the post, the District Court noted that “the ability of the citizenry to expose public corruption is one of the most important interests safeguarded by the First Amendment.”  The District Court found that Facebook posts are protected under the First Amendment.  It also found, however, that the officer’s decision to configure her Facebook post to be viewable only by her friends made “her interest in making the speech . . . less significant than if she had chosen a more public vehicle.”

On appeal, the Court of Appeals for the Eleventh Circuit upheld the District Court’s decision and expanded on the District Court’s reasoning, observing that “the context of Plaintiff’s speech is not one calculated to bring an issue of public concern to the attention of persons with authority to make corrections, nor was its context one of bringing the matter to the attention of the public to prompt public discussion to generate pressure for such changes.”  Because her audience was small and poorly situated to act on the information she shared, the officer’s “speech interest is not a strong one.”  The Court of Appeals agreed with the District Court that the government has a strong interest in maintaining good relations among police officers, and that this interest outweighed Gresham’s weak First Amendment interest in making the post.  As a result, the City of Atlanta was found not to have violated Gresham’s First Amendment rights by restricting her speech.

The resulting rule for Gresham and her fellow officers may be somewhat counterintuitive: Atlanta police officers are effectively allowed to criticize one another very privately or very publicly, but the officers risk being disciplined if they criticize another officer in a somewhat public forum.  A minor breach of the departmental policy against public criticism is more likely to carry consequences than a major breach is.  That being said, the purpose underlying the Pickering rule is to ensure that crucial information reaches the public; making a post private undermines that purpose, so it reduces the protection the post receives under the Pickering rule.

In any event, with social media becoming more and more integrated into the daily fabric of our lives, one can assume that courts will be struggling with the intersection of free speech rights and social media usage for years to come.

Potential Limitations Placed on Unilateral Right to Modify Terms of Use

Posted in Litigation, Terms of Use

Contractual provisions giving a website operator the unilateral right to change its end user terms of service are ubiquitous and appear in the online terms of many major social media sites and other websites, including Facebook, Twitter, Instagram and Google. Although amendments to terms of service quite often cause consumers to complain, litigation regarding such changes is relatively rare. A recent decision from the U.S. District Court in the Northern District of Ohio, however, challenges the enforceability of unilateral amendments to online terms of service in at least some circumstances.

In Discount Drug Mart, Inc. v. Devos, Ltd. d/b/a Guaranteed Returns, Discount Drug Mart, a distributor of pharmaceuticals, sued Guaranteed Returns, a company that processes pharmaceutical product returns, for Guaranteed Returns’ failure to remit credits due under a written distribution agreement between the parties. Guaranteed Returns pointed to the forum selection clause on its website, which it argued required the parties to bring suit in either Nassau or Suffolk County in the State of New York. This provision appeared in Guaranteed Returns’ online “standard terms and conditions,” which Guaranteed Returns claimed were incorporated into the parties’ written distribution agreement.

The court held otherwise, citing the Sixth Circuit case Int’l Ass’n of Machinists and Aerospace Workers v. ISP Chemicals, Inc. and stating that “[i]ncorporation by reference is proper where the underlying contract makes clear reference to a separate document, the identity of the separate document may be ascertained, and incorporation of the document will not result in surprise or hardship.” The court also pointed out that Guaranteed Returns’ purported right to change its standard terms and conditions unilaterally could result in Discount Drug Mart being subject to surprise or hardship. Further, the court noted that there was no evidence that the forum selection clause had been included in the standard terms and conditions at the time the distribution agreement was signed (and Guaranteed Returns did nothing to try to prove this fact). Thus, the court concluded that the standard terms and conditions were not properly incorporated into the distribution agreement (although the court ended up finding in favor of Guaranteed Returns on other grounds).

It is difficult to say what, if any, precedential force Discount Drug Mart will have. Putting aside the facts that the case was brought in the Northern District of Ohio and was ultimately dismissed on grounds unrelated to this holding, the underlying background of the case was nuanced. First, although the court stated in dicta that “one party to a contract may not modify an agreement without the assent of the other party,” a statement that could be interpreted to mean that unilateral amendment of contracts is never permitted, the holding itself was limited to situations in which terms and conditions are incorporated by reference. That said, even this limited holding may be relevant to many website operators in the social media world, as the larger social media sites often use a network of contracts that reference each other (for example, Facebook’s “Platform Policies” requires developers to agree to the company’s “Statement of Rights and Responsibilities,” which are “requirements for anybody who uses Facebook” and which can be unilaterally modified by Facebook).

Second, the Discount Drug Mart court did not elaborate on the “surprise or hardship” standard, so it is possible that unilateral changes to end user terms would be upheld if the website operator gave proper notice to its end users of such changes in order to avoid causing surprise or hardship. The leading social media platforms currently have different approaches to providing notice of changes to their online terms of use. For example, Facebook provides seven days’ notice (although “notice” here includes posting on Facebook’s site governance page); Twitter will notify users of changes to its terms of service via an “@Twitter” update or through email (but only for changes that Twitter deems to be material in its sole discretion); and Instagram notifies users of its changes to its terms of use by posting them on Instagram. A court could find that notification of changes using one or more of these methods is sufficient to avoid subjecting an end user to surprise or hardship.

Finally, the court seemed to give weight to the lack of any evidence that the forum selection clause was included in Guaranteed Returns’ standard terms and conditions at the time that the parties entered into the distribution agreement. Today, however, most Internet service providers include “last modified” dates in their terms of use. Recording version dates and keeping copies of older terms of use could help a website operator show that a particular provision existed in terms of use at the time that the parties entered into an agreement referencing such terms (although these practices could also provide evidence to the contrary).
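
As a simple illustration of that record-keeping practice (the structure and field names below are ours, not drawn from the case), an operator might archive each version of its terms together with its “last modified” date so that it can later show which text was in force when a particular agreement was signed.

# Hypothetical sketch of archiving terms-of-use versions with their "last modified" dates.
from dataclasses import dataclass
from datetime import date

@dataclass
class TermsVersion:
    last_modified: date
    text: str

ARCHIVE = [
    TermsVersion(date(2013, 1, 15), "Standard terms and conditions, version A ..."),
    TermsVersion(date(2013, 9, 1), "Standard terms and conditions, version B ..."),
]

def terms_in_force_on(signing_date: date) -> TermsVersion:
    # Return the most recent archived version dated on or before the signing date.
    applicable = [v for v in ARCHIVE if v.last_modified <= signing_date]
    if not applicable:
        raise ValueError("No archived terms predate the signing date.")
    return max(applicable, key=lambda v: v.last_modified)

# Example: which version applied to an agreement signed in mid-2013?
print(terms_in_force_on(date(2013, 6, 1)).last_modified)   # 2013-01-15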

Discount Drug Mart is not the first decision to challenge a company’s right to unilaterally modify its online terms and conditions. In the 2007 case Douglas v. Talk America, the Ninth Circuit Court of Appeals held that Talk America could not enforce an arbitration clause against an individual who had initially accepted the applicable terms of service prior to Talk America’s unilateral addition of the arbitration clause. Although Talk America posted the amended terms online, the court noted that the individual’s assent to the new terms could only be inferred “after [the individual] received proper notice of the proposed changes.” Discount Drug Mart seems consistent with this decision to the extent that the case suggests that failure to provide adequate notice to end users of changes to online terms may invalidate such changes.

In 2009, the U.S. District Court for the Northern District of Texas, in Harris v. Blockbuster Inc., went further than the Douglas court, holding that Blockbuster’s unilateral right to modify its online terms of use rendered the arbitration clause in those terms illusory and unenforceable. The court’s holding was based on the fact that Blockbuster could, in theory, unilaterally modify the arbitration provisions and apply those modified provisions to earlier disputes. Harris cited the Fifth Circuit case Morrison v. Amway Corp., in which the court had held an arbitration clause to be illusory under Texas law when defendant Amway attempted to apply arbitration terms that had been modified after the plaintiff had agreed to Amway’s standard terms. Although limited to the Northern District of Texas (for now), the implications of Harris could be troubling to online service providers, as the case suggests that if a company includes language allowing it to make unilateral changes to its terms by simply posting the revised terms on its website, those terms could be deemed invalid. In fact, at least one legal scholar has suggested that companies should not include such language in their online terms. For more on Harris, see our client alert here.

Discount Drug Mart does not necessarily provide any clear guidelines that online service providers must follow for their online terms to be valid and enforceable. Because the court based its holdings on specific factual circumstances and provided little insight into its reasoning, it is unclear at this point whether other courts will follow this opinion and impose limitations on companies’ rights to unilaterally change their online terms of service under different circumstances. However, given the legal precedent on the subject, it will likely behoove companies that incorporate their online terms into other documents to consider re-evaluating their amendment and notification practices to minimize any chance of subjecting end users to “surprise or hardship.”

Mobile Apps: No Surprises, Please

Posted in FTC, Privacy

From our sister blog, MoFo Tech:

Widely applicable rules regarding consumer privacy disclosures in our increasingly mobile world are only now emerging. Government agencies, individual states, and professional associations are all weighing in on how mobile app developers should disclose how they collect, store, use, and protect the wide range of highly personal data being collected every day.

The Application Privacy, Protection, and Security Act of 2013, better known as the APPS Act, is intended to bring conformity to the unwieldy world of mobile app development. With a divided Congress struggling to pass even mandatory legislation, though, passage of any type of discretionary legislation this year seems unlikely, says D. Reed Freeman Jr., a partner with Morrison & Foerster in Washington, D.C. In the meantime, Freeman says, developers should focus on the Federal Trade Commission, “because even without congressional action, it has broad jurisdiction, and it has already brought cases and issued guidance on mobile privacy and data security.”

Charged with the intentionally broad mandate of guarding consumers from “deceptive” and “unfair” business practices, the FTC has been proactively applying its consumer protection laws across nearly all media, including mobile technology. A recent FTC policy document is especially revealing because it describes how the FTC expects disclosures of material facts to be made on mobile devices, “and privacy disclosures can certainly be material,” Freeman says.

So it’s up to the mobile app company to think carefully about the ways its program could surprise a reasonable user and to disclose those practices appropriately. Freeman offers this rule of thumb:  “Would a reasonable consumer, under the circumstances, understand what information is being collected about her while she’s on a mobile device and what it is being used for?” If not, companies need to disclose those facts clearly and not bury them in EULAs or terms of use.

California’s Online Privacy Protection Act, passed in 2003, has taken consumer privacy one step further than the FTC has. It requires companies that operate commercial websites or online services and that collect personal information of any kind—including usernames and passwords—to prominently post a privacy policy somewhere on their homepage, says Andrew Serwin, a partner in Morrison & Foerster’s San Diego office.

And while California’s jurisdiction ends at the state line, its reach is often national, Serwin adds. “Companies with customers in all 50 states have to ask themselves whether they want to develop state-specific programs or apply standards across the board,” he says. Since the mobile world doesn’t recognize geographic boundaries, Serwin recommends that developers work toward the highest standards and beyond. “Privacy isn’t just a legal issue. It’s a brand issue,” he says.

Apart from knowing the law, businesses need to consider their own reputations and their customer relationships when collecting, using, and protecting personal information, Serwin says. For example, how could losing users’ passwords tarnish the company’s image in the market? “Current law doesn’t specifically cover that possibility, but,” he notes, “it may be in the company’s best interest to address these types of issues.”