Socially Aware Blog

The Law and Business of Social Media

Toward a Grand Unifying Theory of Today’s Tech Trends

Posted in Cloud Computing, Internet of Things, Marketing, Privacy, Wearable Computers

As a technology law blogger and co-editor of Socially Aware, I monitor emerging developments in information technology. What’s hot in IT today? Any shortlist would have to include social media, mobile, wearable technology, the Internet of Things (IoT), cloud computing and big data.

That list is all over the map, right? Or is it? On closer inspection, these technologies are far more closely intertwined than they may appear to be at first glance.

So what’s the connection between, say, social media and the Internet of Things? Or wearable tech and cloud computing?

Here’s my theory: These technologies all reflect the ceaseless drive by businesses to collect, store and exploit ever more data about their customers. In short, these technologies are ultimately about selling more stuff to us.

With this “grand unifying theory” in mind, one sees how these seemingly disparate technologies complement one another. And the legal challenges and risks they pose become clear.

Collection of Data

Let’s start with the collection of consumer data. Of the six key trends identified above, four relate directly to such collection: social media, mobile, wearable technology and IoT.

When we use the Internet, marketers are tracking our activities; the data generated by our online behavior is collected and then used to target ads that will be more relevant to us.

If we spend time on movie sites, we’re more likely to see ads promoting new film releases. If we visit food blogs, we’re going to be served ads selling cookware.

Creepy? It can be. But such tracking and targeting make it possible for many website operators to offer online content and services for free. Indeed, many believe that such tracking and targeting are essential to the vibrancy of our Internet ecosystem. (Google, however, is reportedly experimenting with an offering that would let users pay not to see ads while surfing the Web.)

In the past, serious limitations existed on the ability of marketers to track and target us. We might have given our name, email and home address to a website, but not much else; now, with social media, we routinely volunteer loads of personal information—our jobs, hobbies, special skills, taste in music and movies, even our “relationship status.” And not just information about ourselves, but our families, friends and colleagues as well. As a result, social media companies have compiled huge databases about us—in Facebook’s case, nearly 1.4 billion of us.

Also, not long ago, we surfed the Web from either home or office—limiting the ability to be tracked and targeted while away from those locations. The rise of Internet-connected mobile devices has changed all that, of course—now we can access the Web from anywhere, and mobile devices can pinpoint our location, even when we’re not browsing. Marketers can track our daily journey from home to work and back again, even serving us “just in time” discount offers as we pass a clothing store or restaurant.

From a marketer’s perspective, social media and mobile are all about expanding the amount and type of customer data that can be collected. Thanks to mobile devices and apps, tracking and targeting are no longer desk-bound and can occur even if a customer is not connected to the Internet.

Wearable tech? Like cell phones, wearables make tracking and targeting possible while one is away from a traditional computer or not actively using the Web. These devices can also collect information that cell phones can’t – our heart rate or body temperature, or the number of hours one slept last week.

For marketers, the Internet of Things is especially exciting because it raises the possibility of being able to track and target consumers anywhere in their homes, even while they are away from their desktop computers or mobile devices.

Imagine your “smart” refrigerator not only determining when you’re low on milk, but also offering a 15 percent discount if you buy a quart of milk at your local market today. Or your Internet-connected washing machine recommending a new laundry detergent based on its monitoring of your laundry loads.

Another hot technology trend – commercial drones – is relevant here. Although unmanned aerial vehicles (UAVs) have generated attention for their ability to facilitate package delivery and provide WiFi access, they can also be used to collect data on consumers when they’re outdoors or near a window, even when they aren’t carrying the cell phones, wearables or other devices typically used to track their movements and activities.

Ingestibles—“smart” pills containing sensors that are swallowed, allowing the collection of data within one’s body—are a nascent technology that, as it becomes more widely used, may ultimately fit into this theory.

Storage of Data

With social media platforms, mobile, wearable and IoT devices and UAVs collecting information on an unprecedented scale, that data needs to be stored somewhere. Enter the cloud. All of these new technologies depend heavily on the massive storage capacity made possible by cloud systems; it wouldn’t be cost effective otherwise. (Case in point: A 2013 study revealed that 90% of all the data in the world had been collected over the prior two years.)

Exploitation of Data

Once all this data has been collected and stored in the cloud, what then?

That’s where big data enters the picture. Big data is providing companies with the analytic tools for sifting through these inconceivably large databases in order to exploit the bits therein.

For example, that photo you uploaded to Instagram can now be analyzed for marketing opportunities. Perhaps you were holding a bag of potato chips; using big data analytics, the chip maker could target you in its next online ad campaign. Or maybe a competing snack company wants to entice you to switch brands. Why stop there? What about the shirt that you were wearing? And that pair of jeans? (I’ve written on the application of big data analytics to the billions of photos hosted on social media sites here.)

Similarly, information collected from wearables, when processed by big data tools, opens up new opportunities for marketers. Your pulse rate may be of interest to the health care industry. Your jogging workouts may attract attention from retailers of athletic shoes and clothing.

But the mother lode just might be all of the marketing insights to be generated by big data analytics stemming from multiple IoT devices in one’s home—the thermostat, stove, refrigerator, coffee machine, toaster, washer/dryer, humidifier, alarm clock and so on: for the first time ever, marketers will have access to real-time information regarding once-private quotidian activities.

Legal Considerations

So that’s my theory: The adoption of today’s hottest IT technologies is being driven in large part by the insatiable desire of businesses to collect and store ever-larger amounts of consumer data, and to then use that data to more successfully market to consumers. When these technologies are viewed in light of this theory, some key legal observations emerge.

First, because these technologies all involve the collection, storage and exploitation of consumer data, privacy and data security issues are necessarily raised and indeed are the most important legal considerations. That’s not meant to minimize intellectual property, product liability and other legal concerns associated with these technologies; privacy and data security laws, however, are the ones specifically designed to regulate the collection, use and exploitation of consumer data.

Second, these technologies are being developed and implemented far faster than the ability of legislators, regulators and courts to develop rules to govern them. It will be essential for companies embracing these technologies to self-regulate—failure to do so will result in an inevitable backlash, leading to burdensome regulations that will undermine innovation.

Third, these technologies will present real challenges to the majority of companies that want to “do the right thing” by their customers. For example, consumers ideally should be provided with notice and an opportunity to consent prior to the collection, storage and exploitation of their personal information, but how can this be done through, say, a smart electric toothbrush? These issues need to be addressed early in the development cycle for next-generation products—they can’t be an afterthought. Moreover, are customers receiving real, tangible value in connection with the data being collected from them?

Fourth, as our social-media pages, devices and appliances become more closely tied together, and linked to massive troves of data about us in the cloud, businesses need to be aware that it takes only one weak link to put the entire ecosystem at risk. Hackers will no longer need to bypass your computer or phone’s security to capture personal data; they may be able to access your records through, say, an Internet-enabled toaster that lacks adequate security controls.

Finally, companies need to pay attention to whether they need to collect all the data that can be collected through these technologies. Ideally, they should seek to minimize the amounts of personally identifiable information they hold, in order to reduce privacy- and security-related legal risks, and liability.

No doubt this last recommendation may be hard for many marketers to embrace; after all, data-gathering is in their DNA. And that same hard-wiring is in all of our DNA—the original source code for data collection, storage and exploitation. We wouldn’t be human without it.

(This is an expanded, “director’s cut” version of an op-ed piece that originally appeared in MarketWatch.)

Five Social Media Law Issues To Discuss With Your Clients

Posted in Arbitration, Copyright, Employment Law, IP, Labor Law, Litigation, Online Promotions, Terms of Use

The explosive growth of social media has clients facing legal questions that didn’t even exist a few short years ago. Helping your clients navigate this muddled legal landscape will have them clicking “like” in no time.

What’s in a Like?

Not long ago, the word “like” was primarily a verb (and an interjection used by “valley girls”). You could have likes and dislikes in the sense of preferences, but you couldn’t give someone a like, claim to own a like or assert legal rights in likes. Today, however, a company’s social media pages and profiles, and the associated likes, followers and connections, are often considered valuable business assets. Courts have come to various conclusions regarding whether likes and similar social media constructs constitute property, but one thing is clear: Every company that uses social media should have in place clear policies regarding employee social media use and ownership of business-related social media accounts.

Employees who manage a company’s social media accounts often insert themselves as the “voice” of the brand and establish a rapport with the company’s fans and followers. Without clear policies that address ownership of social media accounts, and clearly distinguish between the company’s accounts and employees’ personal accounts, your client may find itself in a dispute when these employees leave the company and try to take the company’s fans and followers with them.

Read a more detailed description of “likes” as assets here.

Dirty Laundry

It comes as no surprise that employees frequently use social media to complain about managers and coworkers, pay, work conditions and other aspects of their employment. Companies often would prefer not to air these issues publicly, so they establish policies and impose discipline when employees’ social media activity becomes problematic. Companies need to be careful, however, that their policies and disciplinary actions comply with applicable law.

A number of National Labor Relations Board decisions have examined whether employees’ statements on social media constitute “concerted activity”—activity by two or more employees that provides mutual aid or protection regarding terms or conditions of employment—for purposes of the National Labor Relations Act (which, notably, applies regardless of whether the employees are unionized or not). Companies also need to be careful to comply with state statutes limiting employer access to employees’ personal social media accounts, such as California Labor Code Section 980, which prohibits an employer from asking an employee or applicant to disclose personal social media usernames or passwords, access personal social media in the presence of the employer, or divulge personal social media.

Read more about the intersection of social media policies and labor law here and here.

Terms of (Ab)use

Companies often consider their social media pages and profiles to be even more important than the companies’ own websites for marketing and maintaining customer engagement. But a company’s own website has one advantage over a third party social media platform: The company sets its own terms for use of its website, while the third party social media platform is subject to terms of use imposed by the platform operator. And, in many cases, the terms imposed on users of social media platforms are onerous and make little distinction between individual users who use the platform just for recreation and corporate users who depend on the platform for their businesses.

Social media terms of use often grant platform operators broad licenses to content posted on the platform, impose one-sided indemnification obligations on users, and permit platform operators to terminate users’ access with or without cause. You may have little luck negotiating modifications to such online contracts for your clients, but you can at least inform your clients of the terms that govern their use of social media, so that they can weigh the costs and benefits.

Read more about social media platforms’ terms of use here, here, and here.

Same as It Ever Was

When it comes to using social media for advertising, the media may be new but the rules are the same as ever. Companies that advertise through social media—especially by leveraging user endorsements—need to comply with Section 5 of the FTC Act, which bars “unfair or deceptive acts or practices.” Bloggers and others who endorse products must actually use the product and must disclose any “material connections” they have with the product providers (for example, a tech blogger reviewing a mobile phone that she received for free from the manufacturer should disclose that fact). Because this information is likely to affect consumers’ assessment of an endorsement, failure to disclose may be deemed deceptive. So if you have a client that uses endorsements to promote its products, make sure to brush up on the FTC “Dot Com Disclosures” and other relevant FTC guidance.

Read more about endorsement disclosure obligations here.

Good Rep

As noted, a company’s social media pages, followers, etc., may constitute valuable business assets. But buyers in M&A transactions often neglect such assets when formulating the seller’s reps and warranties. Buyers should consider asking the seller to disclose all social media accounts that the target company uses and to represent and warrant that none of the target’s social media account names infringe any third party trademark or other IP rights, that all use of the accounts complies with applicable terms of service, and that the target has implemented policies providing that the company (and not any employee) owns all business-related social media accounts and imposing appropriate guidelines regarding employee use of social media.

Finally, if you have clients that use social media, it’s important to be familiar with the popular social media platforms and their (ever-changing) rules and features. Learning to spot these issues isn’t going to turn you into the next Shakira—as of this writing, the most liked person on Facebook with well over 100 million likes—but your clients will surely appreciate your help as they traverse the social media maze.

Read more about social media assets in M&A transactions here.

This piece originally appeared in The Recorder.

The Right to Give One-Star Reviews

Posted in Online Reviews, Terms of Use

Congress has taken a step toward protecting consumers’ rights to post negative reviews on websites like Ripoff Report or Yelp with the introduction, by Representative Darrell E. Issa of California, of the Consumer Review Freedom Act of 2015 (the CFRA).

The CFRA follows a California law, enacted in 2014, which made it illegal for businesses to penalize their customers for posting negative reviews of their products or services online. The California law, AB 2365, was passed in response to a growing number of incidents where businesses have used non-disparagement clauses buried in form contracts to charge fines of several hundred to several thousand dollars. Such incidents have occurred all over the country—from a New York hotel withholding $500 from a couple’s security deposit after a member of the couple’s wedding party posted a negative review, to a Michigan-based Internet retailer charging two of its customers in Utah $3,500 after they published a review criticizing the retailer’s customer service.

AB 2365 sought to put a stop to such incidents by prohibiting businesses from including in any contract for the sale or lease of consumer goods or services any provision that requires the consumer to waive his or her right “to make any statement regarding the seller or lessor or its employees or agents, or concerning the goods or services.” The statute also makes it unlawful to enforce such a provision “or to otherwise penalize a consumer for making any statement protected under” the law. This is presumably intended to address situations in which a business does not explicitly prohibit negative reviews, but instead seeks to impose a penalty on a consumer who posts a negative review, as in the Michigan case noted above.

The CFRA is intended to take the California policy and expand it nationwide. Similarly to AB 2365, the CFRA prohibits businesses from including in any form contract a provision that prohibits or restricts a person from, or imposes a penalty or fee against a person for, engaging in a “written, verbal, or pictorial review, performance assessment of, or other similar analysis of, the products, services, or conduct of a business or person which is a party to the form contract.” The CFRA empowers the Attorney General to bring actions for a civil penalty of up to $16,000 for each day that the business requires the use of the penalizing contract by a distinct person.

The CFRA also closes a potential loophole in AB 2365 that at least one enterprising organization had been encouraging its clients to use. Medical Justice, an organization that provides template form contracts to medical service providers, had included language in those contracts purporting to assign to the service provider the copyright in any review posted by a patient. If effective, this assignment would allow the service provider to issue takedown notices under the Digital Millennium Copyright Act or threaten the publishing websites with infringement actions. AB 2365 did not expressly address such assignment provisions, but the CFRA voids any provision that “transfers … to any person or business any intellectual property rights that the individual may have in any otherwise lawful [communication] about the person or the goods or services provided by the person or business.”

Though it is unclear how likely the CFRA is to become law, it has bipartisan sponsorship and certain key players have publicly voiced their support. For example, Yelp has come out strongly in favor of the CFRA in a post on its official blog. The bill is currently being reviewed by the House Committee on Energy and Commerce and has been referred to a subcommittee.

With a political environment that is increasingly hostile to non-disparagement clauses, businesses will now have to consider different ways of avoiding negative reviews—perhaps by providing better products and services.

The Guide to Social Media and Securities Law

Posted in SEC

The growing use of social media has created challenges for federal securities regulators. Given the significance of social media as a preferred method of communication for a large percentage of market participants, the need to adapt federal securities laws and the regulatory framework applicable to broker-dealers and investment advisers to social media channels has become all the more urgent.

To help navigate these issues, Socially Aware contributors and Morrison & Foerster partners Jay Baris and David Lynn have just released their Guide to Social Media and Securities Law, which provides a comprehensive overview of how federal regulation of securities has evolved in the face of the growing use of social media by investors, securities issuers, broker-dealers, investment advisers and investment companies.

The guide is now available here. We think that you will find it to be a terrific resource.

Digital Advertising Alliance Focuses on Mobile Ads

Posted in Marketing, Online Promotions, Privacy

As more users spend more time on their mobile devices, advertising dollars are following. And the compliance regime that governs interest-based advertising (IBA) (formerly referred to as online behavioral advertising or OBA) is expanding as well. (IBA is the collection of information about users’ online activities across different websites or mobile applications, over time, for the purpose of delivering online advertising to those users based on those activities.) The regime arose from a February 2009 Federal Trade Commission (FTC) report entitled Self-Regulatory Principles for Online Behavioral Advertising, which the Digital Advertising Alliance (DAA), a consortium of media and marketing associations, translated into a self-regulatory program (DAA Principles) in an effort to avoid legislation.

The DAA Principles focus on providing consumers with notice of and control over how information collected from their use of online services is used for IBA purposes. To facilitate such notice and choice, the DAA provides an advertising option icon to be placed in or near an interest-based ad. The icon, when clicked, delivers consumers to a landing page that describes the data collection practices associated with the ad and provides an opt-out mechanism. The Council of Better Business Bureaus, which, along with the Direct Marketing Association, enforces the DAA Principles, has construed the principles to also require notice on any site where information is collected for IBA purposes. Such notice typically takes the form of an “Our Ads” or similarly named link in the site footer, separate from the privacy policy link, that clicks through to the same landing page as the advertising option icon, or to similar notice and choice. A dedicated industry website, www.aboutads.info, also provides consumers with the ability to exercise choice with regard to IBA.

The DAA has always held that the DAA Principles apply universally; in July 2013, it issued guidance regarding their application to the mobile environment (Mobile Guidance). The DAA has also acknowledged the challenge that screen-size, among other things, may pose to complying with the principles’ notice and choice requirements in the same fashion as in the desktop experience. In February 2015, however, the DAA announced two new measures to facilitate compliance with the requirements on mobile devices: (1) a new consumer choice page optimized for mobile (which is otherwise the same as www.aboutads.info), and (2) a downloadable app, “AppChoices,” that enables consumers to manage ad preferences for certain third party in-app ad delivery services. The Mobile Guidance explains how a company engaged in IBA should provide notice and choice—via the consumer web page and/or AppChoices app, as applicable—to its users. The DAA recently announced that the DAA Principles will be enforced in the mobile space, effective September 1, 2015. This enforcement will include not only the notice-and-choice regime but also other mobile-specific issues addressed by the DAA’s Mobile Guidance, such as the use of precise geolocation. As a result, companies should work diligently to figure out their compliance strategies for their mobile websites and applications.

The NYDFS Finalizes its BitLicense Proposal

Posted in E-Commerce

On June 3, 2015, the New York Department of Financial Services (“NYDFS”) issued a final rule regarding its “BitLicense” regulatory regime (“Final Rule”). The Final Rule follows an initial proposal from July 2014 and a revised proposal from February 2015 (“Revised Proposal”). Our analysis of the July 2014 proposal is available here, and our analysis of the Revised Proposal is available here. We also reported on Superintendent Lawsky’s November 2014 remarks at the Money 20/20 conference and the NYDFS approval of the itBit trust company application.

At a high level, the Final Rule requires licensing for any person that engages in “Virtual Currency Business Activity,” as defined in the Final Rule, and subjects licensees to extensive compliance obligations, capital requirements, examination, and approval requirements, among other things. The Final Rule also provides for a “conditional license” designed to reduce regulatory burden on startups and small businesses engaged in Virtual Currency Business Activities.

The Final Rule contains few material differences from the Revised Proposal, with the changes primarily clarifying certain definitions in the Revised Proposal. Concurrent with the release of the Final Rule, NYDFS Superintendent Benjamin Lawsky delivered remarks at the BITS Emerging Payments forum in Washington, D.C. Superintendent Lawsky confirmed that the Final Rule “does not include the major changes we saw in the last round,” which included the provisions regarding the transitional BitLicense.

The following bullets highlight noteworthy changes from the Revised Proposal:

  • The Final Rule clarifies the prepaid card exclusion from the definition of virtual currency by amending the definition of a “prepaid card.” Specifically, the definition is clarified to cover only prepaid cards that are issued, can be reloaded and can be redeemed “in and for fiat currency.”
  • The Final Rule further enumerates the circumstances under which a licensee must obtain approval for a material change to the licensee’s business. Specifically, the Final Rule states that a material change, or a “materially new product, service, or activity”—a new defined term—may occur where the proposed new product, service, or activity may raise (i) “a legal or regulatory issue about the permissibility of the product, service, or activity,” or (ii) “safety and soundness or operational concerns.” Approval for products, services, or activities that are “materially different” from those described on the licensing application is still required. Notwithstanding the new circumstances under which approval is required, Superintendent Lawsky stated that “companies will not need approval for standard software or app updates,” and when considering what would constitute a materially new product, service, or activity, Superintendent Lawsky provided an example of a firm that was licensed as a wallet provider that decided to begin offering exchange services.
  • Superintendent Lawsky also made clear that the Final Rule applies only to financial intermediaries that are responsible for safeguarding customer funds. According to Superintendent Lawsky, software developers and other individuals who work with virtual currency from a design standpoint are exempted from the licensing requirement.
  • Superintendent Lawsky described the care that the NYDFS took not to impose duplicative requirements on licensees. For example, in his prepared remarks, he stated that firms will be able to apply for a BitLicense and a money transmitter license simultaneously. In addition, the Final Rule clarifies that a licensee must only file suspicious activity reports with the Superintendent if the licensee is not already submitting the reports to the Financial Crimes Enforcement Network (“FinCEN”).
  • Nonetheless, the Final Rule does impose additional reporting requirements for virtual currency transactions that are not currently subject to federal reporting requirements. Specifically, the Final Rule requires that a licensee notify the Superintendent within 24 hours of any virtual currency transaction or series of transactions made during a single business day that total more than $10,000. As such, the Final Rule creates a virtual currency-to-virtual currency Currency Transaction Report (“CTR”) requirement, where such a transaction may otherwise avoid the FinCEN CTR reporting requirements. (A rough sketch of this daily aggregation trigger appears after this list.)
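
For readers who want to see the mechanics of that reporting trigger in concrete terms, the following is a minimal sketch, in Python, of how a licensee might flag customers whose virtual currency transactions during a single business day total more than $10,000. It is purely illustrative; the data structures, field names and function names are our own assumptions, not anything prescribed by the Final Rule or by the NYDFS.

    from collections import defaultdict
    from datetime import date
    from decimal import Decimal

    # Final Rule trigger (as described above): more than $10,000 in a single business day.
    REPORTING_THRESHOLD = Decimal("10000")

    def daily_totals(transactions):
        """Sum each customer's virtual currency transactions per business day.

        `transactions` is a list of (customer_id, business_day, usd_equivalent) tuples;
        these field names are hypothetical placeholders for whatever a licensee tracks.
        """
        totals = defaultdict(Decimal)
        for customer_id, business_day, amount in transactions:
            totals[(customer_id, business_day)] += amount
        return totals

    def flag_reportable(transactions):
        """Return (customer_id, business_day, total) tuples exceeding the daily threshold,
        i.e., candidates for notification to the Superintendent within 24 hours."""
        return [
            (customer_id, business_day, total)
            for (customer_id, business_day), total in daily_totals(transactions).items()
            if total > REPORTING_THRESHOLD
        ]

    if __name__ == "__main__":
        sample = [
            ("cust-1", date(2015, 9, 1), Decimal("6000")),
            ("cust-1", date(2015, 9, 1), Decimal("4500")),  # series totaling $10,500 -> flagged
            ("cust-2", date(2015, 9, 1), Decimal("9999")),  # below threshold -> not flagged
        ]
        for customer_id, business_day, total in flag_reportable(sample):
            print(f"Report within 24 hours: {customer_id} totaled ${total} on {business_day}")

Note that the sketch aggregates a “series of transactions” by summing per customer per business day, which is the point the Final Rule’s language emphasizes; how a licensee identifies a customer and values a transaction in fiat terms is left to its own compliance program.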

We will continue to follow the implementation of the Final Rule, as well as the efforts of other state regulatory authorities with respect to virtual currency and provide updates as warranted.

“Firsts” for the World of Virtual Currencies

Posted in E-Commerce

There have been two recent virtual currency-related actions worthy of note: (1) the Financial Crimes Enforcement Network (“FinCEN”) announced its first civil enforcement action against a virtual currency exchanger, and (2) the New York Department of Financial Services (“NYDFS”) granted its first license to a Bitcoin exchange.

FIRST VIRTUAL CURRENCY EXCHANGER ENFORCEMENT ACTION

On May 5, 2015, FinCEN announced an enforcement action against a virtual currency exchanger to settle alleged violations of the Bank Secrecy Act (“BSA”) and its implementing regulations. The virtual currency exchanger facilitates the sale and exchange of a particular virtual currency for fiat currency. A Statement of Facts and Violations issued concurrently alleges that the virtual currency exchanger previously engaged in these activities without registering as a money services business (“MSB”) and failed to comply with other BSA requirements. Under the terms of the settlement, the virtual currency exchanger agreed to pay a $700,000 civil money penalty and take certain remedial actions. A settlement between the virtual currency exchanger and the U.S. Attorney’s Office was also announced to resolve criminal charges associated with the alleged BSA violations.

FinCEN alleged that the virtual currency exchanger facilitated transfers of virtual currency and provided virtual currency exchange transaction services without registering as an MSB from March 2013 through April 2013. During this period, FinCEN alleged, the virtual currency exchanger “sold convertible virtual currency” in violation of the BSA. FinCEN also alleged that the virtual currency exchanger failed to comply with BSA requirements during this period because it:

  • Failed to develop a written anti-money laundering (“AML”) program;
  • Failed to report transactions at or above $2,000 in value that it knew, suspected, or had reason to suspect were suspicious; and
  • Engaged in a series of transactions in which the virtual currency exchanger either failed to file suspicious activity reports, or filed them in an untimely manner.

The enforcement action follows FinCEN’s March 2013 guidance, which clarified the application of the BSA and its implementing regulations to virtual currency businesses. Under the March 2013 guidance, an “exchanger” or “administrator” of virtual currency is considered an MSB and, as such, must register with FinCEN, develop and implement an AML program, report suspicious transactions, implement know-your-customer procedures, and comply with the Funds Transfer Rule, which requires that an MSB obtain, verify, and keep certain information about individual transactions of $3,000 or above.

It is important to note that the virtual currency exchanger registered as an MSB on September 4, 2013, developed a written AML program on September 26, 2013, and hired an AML compliance officer in January 2014. However, the settlement agreement focused on violations that preceded these actions.

Settlement Terms

Under the terms of the settlement, the virtual currency exchanger agreed to pay a civil money penalty of $700,000, $450,000 of which was deemed to be partially satisfied upon payment to the U.S. Attorney’s Office as part of a separate settlement of criminal charges. In addition, the virtual currency exchanger agreed to take remedial steps to ensure compliance with BSA requirements, as well as to implement “enhanced” remedial measures, including:

  • Conducting a three-year “look-back” to require suspicious activity reporting for prior suspicious transactions;
  • Retaining external independent auditors to review compliance with the BSA every two years until 2020; and
  • Enhancing the protocol that the virtual currency exchanger uses by improving existing analytical tools applicable to the protocol, including the reporting of “any counterparty using” the protocol, the reporting of the “flow of funds within” the protocol, and the reporting of “the degree of separation.”

FinCEN’s action underscores the increasing focus of FinCEN and other law enforcement agencies on virtual currency and the access points to virtual currency systems, and could represent the first of a series of enforcement actions against exchangers or administrators aimed at ensuring compliance with the BSA.

ITBIT LICENSING

The other notable “first” occurred on May 7, 2015, when the NYDFS granted a charter under the New York Banking Law to itBit Trust Company, LLC (“itBit”), a commercial Bitcoin exchange. Following the NYDFS approval, itBit becomes the first virtual currency company to receive a charter from NYDFS.

itBit had been operating in Singapore since November 2013, but now will be able to operate as a limited-purpose trust company under the New York Banking Law. itBit submitted its application following the NYDFS March 2014 order that initiated a process for accepting licensing applications from virtual currency exchanges under the New York Banking Law. According to its press release announcing the approval, the NYDFS “conducted a rigorous review of that application, including, but not limited to, the company’s anti-money laundering, capitalization, consumer protection, and cyber security standards.”

As a limited-purpose trust, itBit will have to meet, among other things, capital requirements and AML requirements. itBit will also be required to meet any additional obligations imposed under the final New York BitLicense regulation. However, the requirements that apply to limited-purpose trust companies will likely be at least as stringent as those that apply to BitLicensees. The NYDFS expects to issue its final BitLicense regulation later this month. Additional information on the NYDFS proposed BitLicense regulation is available here and here.

We will continue to follow the efforts of the NYDFS and other state regulators as they relate to virtual currencies.

Status Updates: Errand Apps for Everyone?; A Right to Be Forgotten Update; Your Entire Google Search History

Posted in First Amendment, Privacy, Status Updates

Information wants to be (not quite) free. In its early years, the Internet was often seen as a vehicle for democratizing data, taking information that was previously accessible only to a select few and making it available to the masses. Many Internet entrepreneurs still espouse those ideals, developing business models that cut out the middleman to make goods and services that were once seen as luxuries, such as high-end eyewear and financial planning services, widely available at affordable prices. The current trend, however, seems headed in a different direction entirely, with new sites and apps offering luxury concierge-like services at high prices. For example, Postmates, an app that provides couriers to fetch goods from stores and restaurants, recently dropped four mint mojito iced coffees right into the hands of one Wall Street Journal reporter who admits he “paid nearly $30 for the luxury.” Many of the entrepreneurs behind this new wave of “errands apps” nevertheless maintain that they fully intend to make their services available to people of all income levels. Two such executives are Tri Tran, a co-founder of the prepared-meal delivery service Munchery, and Nick Allen, the creator of Shuddle, an app that sends cars driven by well-vetted chauffeurs to ferry children to and from playdates and other appointments. Both plan to lower their prices once they have acquired more customers—Munchery hopes to be able to buy in bulk and Shuddle intends to provide carpooling services. But New York Times columnist Farhad Manjoo is skeptical, arguing that economies of scale (i.e., ones in which prices drop as fixed costs are spread out over more customers) are rare in the tech world. “I remain unsure if they will ever get to the point where they can serve the masses,” he concludes. “Yet even if Shuddle and Munchery do not get their prices low enough to go mainstream, they deserve credit for trying.”

Lest we forget. Established a year ago this month by a European Court of Justice decision, the right to be forgotten requires search engines like Google to comply with an individual’s request to remove “inadequate, irrelevant,” or “excessive” links that appear in search results when someone conducts an Internet search of the individual’s name. The ruling gives search engines broad discretion—so broad that Google has so far rejected more take-down requests than it has granted (457,958 compared to 322,601). In determining whether to grant a take-down request, Google says it considers “whether the results include outdated or inaccurate information about the person.” The Internet giant also weighs “whether or not there’s a public interest in the information remaining in our search results—for example, if it relates to financial scams, professional malpractice, criminal convictions or your public conduct as a government official (elected or unelected).” Ars Technica gave some examples of the take-down requests that Google has refused: a Hungarian high-ranking public official’s request to remove content about his more-than-20-year-old criminal conviction, and a French priest’s request to remove articles about his excommunication from the church. Perhaps not surprisingly, Google has removed more URLs from Facebook than from any other site.

Search me. Speaking of Google, here’s a potentially embarrassing way to pass some time: view, download and export your entire Google search engine history—or at least all of the searches you conducted while you were logged into your Google account (of course, as The Washington Post points out, if you have Gmail you’re likely logged into your Google account almost all the time). An unofficial Google blog contains the instructions. Simply visit the Google home page and type “Google Web History” into the search bar. When you reach that page and log in using your Google ID, you should immediately see all of your searches over the past few days. You can also download and export your entire Google search history by clicking the three-vertical-dot icon in the upper right-hand corner of the Google Web History page and selecting “download searches” from the dropdown menu. The point, says The Washington Post, is to give people an easier way to transfer their data from Google to other services, such as AOL. Since Google will deliver your query history in a file format that may be unreadable to you, the newspaper suggests you open the result in your computer’s notepad or other plain-text editing apps, and search for the term “query text.”
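
For readers comfortable with a bit of scripting, here is a minimal sketch, in Python, of that last “search for the term ‘query text’” step, run against a downloaded and unzipped copy of the export. The directory path and the field names it looks for are assumptions on our part; Google’s export format may differ, so adjust accordingly.

    import pathlib

    # Hypothetical location of the unzipped export; point this wherever your download landed.
    EXPORT_DIR = pathlib.Path("~/Downloads/google-search-history").expanduser()

    def find_query_lines(export_dir):
        """Scan every file in the export and collect lines mentioning the query text,
        mirroring the "open it in a plain-text editor and search" suggestion above."""
        matches = []
        for path in export_dir.rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(encoding="utf-8", errors="ignore")
            except OSError:
                continue  # skip unreadable files
            for line in text.splitlines():
                # The export's field name may appear as "query text" or "query_text"; check both.
                lowered = line.lower()
                if "query text" in lowered or "query_text" in lowered:
                    matches.append(line.strip())
        return matches

    if __name__ == "__main__":
        for line in find_query_lines(EXPORT_DIR):
            print(line)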

The FTC Weighs in on In-Store Tracking. Or Does It?

Posted in FTC, Privacy

In law school, everybody learns the adage that hard cases make bad law. When it comes to the Federal Trade Commission, a better aphorism might be, “easy cases make new law.” The FTC’s recent settlement with Nomi Technologies Inc. is, as the FTC’s press release notes, the “FTC’s first against a retail tracking company.” On its face, the case is like many FTC privacy cases: It challenges a statement in the company’s privacy policy for allegedly being inconsistent with the company’s actual practices and thus deceptive. Under the surface, however, the case may open the door for the FTC to create a notice-and-choice regime for the physical tracking of consumers, analogous to its well-established notice-and-choice regime for online tracking.

“Retail Tracking” and Nomi’s Allegedly Deceptive Practices

Retail tracking occurs when retailers, or their third-party service providers, capture and track the movements of consumers in and around stores through their mobile devices, such as through the use of Wi-Fi or beacons, in order, for example, to better understand store traffic or serve targeted offers. The FTC’s chief technologist recently published detailed comments on the “privacy trade-offs” of retail tracking and the various technologies that companies are using to engage in it. Given the potential lack of transparency around the practice and the corresponding privacy implications, it is not surprising that the FTC decided to address the practice through its Section 5 authority, even if the FTC did so in an indirect fashion.

It is also not surprising that the FTC has moved cautiously into this space. The facts of In re Nomi, as alleged in the complaint, are simple. Nomi provided mobile device tracking technology that enabled its clients, brick-and-mortar retailers, to receive analytics reports about aggregate customer traffic patterns — that is, how long consumers stay in the store and in which sections, how long they wait in line, what percentage of consumers pass by the store altogether, and so on. Nomi represented in the privacy policies posted on its website that it would “[a]lways allow consumers to opt out of Nomi’s service on its website as well as at any retailer using Nomi’s technology.” While Nomi offered an opt-out on its website, it allegedly did not provide an opt-out mechanism at its clients’ retail locations, thus rendering its privacy policy promise deceptive, in violation of Section 5 of the FTC Act.

The FTC further alleged that Nomi represented, expressly or by implication, that consumers would be given notice when they were being tracked at a retail location. The statement of Chairwoman Edith Ramirez and Commissioners Julie Brill and Terrell McSweeny in support of the complaint and proposed order explains that “the express promise of an in-store opt out necessarily makes a second, implied promise: that retailers using Nomi’s service would notify consumers that the service was in use. This promise was also false. Nomi did not require its clients to provide such a notice. To our knowledge, no retailer provided such a notice on its own.” By allegedly failing to provide notice when a retail location was utilizing Nomi’s service to track customers, Nomi’s implied promise to provide notice was also deceptive.

The FTC Keeps Nomi Narrow, for Now. What Lessons Can Others Learn?

The proposed order provides for very narrow injunctive relief: It simply enjoins Nomi from misrepresenting how consumers can control the collection, use, disclosure or sharing of information collected from them or their devices, and from misrepresenting the extent to which consumers will receive notice about such tracking. The majority commissioners, in their statement, were at pains to disclaim any significance of the case with regard to the practice of retail tracking specifically:

While the consent order does not require that Nomi provide in-store notice when a store uses its services or offer an in-store opt out, that was not the Commission’s goal in bringing this case. This case is simply about ensuring that when companies promise consumers the ability to make choices, they follow through on those promises.

In other words, Nomi is the FTC’s first case involving brick-and-mortar tracking, but the FTC is not yet creating new law: The proposed order does not impose any affirmative notice and choice obligations on industry participants in the retail tracking space. It is not surprising that the commission declined to take such a drastic step with a practice that is still, relatively speaking, in its infancy, and that does not, on its face, involve sensitive personal information (though, while the information collected may be anonymous and analyzed only in aggregate, some retailers may, or at least could, pair tracking information collected through their apps with other information identifying a specific consumer).

When the FTC does impose specific obligations relating to a particular practice, it typically moves in an incremental fashion. For example, the FTC noted in its 2009 Report on Self-Regulatory Principles for Online Behavioral Advertising and again in its 2012 Privacy Report that the collection of precise geolocation requires affirmative express consent because such information is sensitive. The FTC continued to indicate, in guidance and follow-on staff reports, that a failure to provide notice and obtain affirmative opt-in consent for the collection of precise geolocation information could give rise to a cause of action for deception under Section 5 of the FTC Act.

Then, when the FTC settled a case (Goldenshores) alleging violations of Section 5 relating to an Android app’s collection, use and disclosure of precise geolocation from users’ devices, the order imposed specific parameters on the out-of-policy notice and choice that the app had to provide — effectively creating a new notice and choice regime for the collection, use and disclosure of such information that companies ignore at their peril.

By contrast, the narrow approach the FTC has taken with Nomi raises the question of whether the FTC would ever impose a notice and choice obligation for offline, retail tracking. We have no certainty around the FTC’s view, but it is reasonable to anticipate that the FTC will move in a direction that mirrors its position with respect to online tracking — that is, that at least when information is collected for targeted advertising purposes, a company should provide meaningful disclosures to consumers about the tracking and choice with respect to whether to allow it. The FTC could ultimately deem a failure to provide such notice and/or choice an unfair and/or deceptive practice under Section 5 of the FTC Act.

What does this mean for retailers and other places of business? In light of Nomi and our expectations with respect to the direction the FTC is likely to take, companies that engage in in-store tracking should consider how best to provide their customers with notice and choice. Whatever the FTC does, it will probably move conservatively. That means that the FTC is likely to continue to identify practices as violations of Section 5 if they can be remedied without stifling retail tracking technology as it matures.

The Nomi complaint presents two interrelated themes that provide a guide to future enforcement. First, choice must be linked to notice, meaning that, as far as the FTC is concerned, consumers do not have meaningful choice unless they also have notice at the point of collection, even if that notice is provided only in a privacy policy. Nomi can thus be read to suggest that, at least in some circumstances, choice with regard to virtual tracking needs to be accompanied by notice in the brick-and-mortar world. Second, the complaint suggests, obliquely, that tracking consumers’ physical activities is “material”—i.e., that it is likely to affect the consumer’s conduct. If that is right, then this type of tracking must be disclosed to consumers because the failure to make such a disclosure would be, axiomatically, a material omission.

How should retailers proceed? One option is to track only those customers who have downloaded the retailer’s app and affirmatively agreed to be tracked for identified purposes, such as the delivery of targeted offers. Another option is to use a vendor that subscribes to the Future of Privacy Forum Mobile Location Analytics Code of Conduct, which requires participating mobile location analytics companies to, among other things, provide consumers with appropriate notice and choice. These types of compliance strategies could help protect companies from the next possible phase of FTC enforcement in this space, since they address what appear to be, for now, the most direct ways to avoid conducting retail tracking without providing notice and choice.

Status Updates: Facebook Posts—Reliable Evidence?; Quora Post Costs Applicant a Job; a New Ephemeral Messaging App

Posted in Disappearing Content, Discovery, E-Discovery, Litigation, Status Updates

Facebook: Fact or fiction? These days, courts are more and more frequently faced with disputes over whether, as part of the discovery process, a litigant should be entitled to view the opposing party’s social media posts. As we’ve discussed, some courts deciding physical and emotional injury claims have held that the photos and status updates that the plaintiffs in those cases posted to Facebook were relevant to proving or disproving those claims. But are they always? A recent column in Slate points out that some judges and experts are questioning whether a person’s social media posts are adequate reflections of his or her emotional well-being. In one 2013 case over alleged disability discrimination—the plaintiff claimed her work supervisor mocked her after she told him she’d been diagnosed with adult Attention Deficit Hyperactivity Disorder—a federal district court judge in New York held that “The fact that an individual may express some degree of joy, happiness, or sociability on certain occasions sheds little light on the issue of whether he or she is actually suffering emotional distress… For example, a severely depressed person may have a good day or several good days and choose to post about those days and avoid posting about moods more reflective of his or her actual emotional state.” We at Socially Aware tend to agree with this more skeptical view of the extent to which one’s “social” life reflects one’s real life. After all, if a woman can fake an entire vacation on Facebook, many of the platform’s users are likely posting status updates and pictures that are out of sync with their actual moods.

Cutting words. Stories about people being fired or having a job offer rescinded because of their social media missteps have been around almost as long as social media itself, but they usually involve “what were they thinking?” types of behavior. We recently came across one that is a little less clear-cut. An engineer who’d just gotten job offers from Uber and Zenefits tried to crowdsource information that would help him decide between the two employers by posting what he considered to be the pros and cons of each opportunity on Quora, a Q&A social network that allows users to pose questions to the community. He said good things about both companies, but in his “cons” list for Zenefits, he wrote, “My biggest problem with Zenefits is that it isn’t a buzzword like Uber. Most people won’t know what Zenefits is (or so I think). I think that this isn’t as exciting a brand name to have on your resume when applying to the likes of Google.” Zenefits CEO and co-founder Parker Conrad saw the Quora post and responded, right on the thread: “Definitely not Zenefits (n.b.—we are revoking the questioner’s offer to work at Zenefits),” he wrote. “We really value people who ‘get’ what we do and who *want* to work here, specifically. It’s not for everyone, but there are enough ppl out there who do want to work here that we can afford to be selective.” Conrad later edited his response, deleting the part about revoking the engineer’s offer, but his decision stands: The engineer is no longer welcome at Zenefits. Reactions on Twitter went both ways, The Washington Post reported. And some commentators felt that both parties were at fault.

Here today . . . Perhaps inspired by social media users’ concerns that their posts will be used against them in the ways we’ve just described—and, in the case of Cyber Dust, billionaire investor Mark Cuban’s receipt of a subpoena for his own text messages—new disappearing messaging apps are springing up all the time. One that recently got the attention of the crowd at a tech conference in New York is the photo-sharing app Rewind. Rewind allows you to create photo timelines through which the members of your network can scroll. As a result of the scrolling feature, a whole set of photos takes up only the space of a single photo in users’ feeds. The posts vanish after 24 hours. According to TechCrunch, by making the photos disappear, the app’s creators hope “to elicit the same sort of spontaneity as Snapchat Stories,” which have been heralded as the future of social media.