The U.S. Supreme Court on Oct. 16, 2017, announced it had granted the government’s petition for certiorari in United States v. Microsoft and will hear a case this Term that could have lasting implications for how technology companies interact with the U.S. government and governments overseas. At issue is a consequential Second Circuit decision from last year that held that warrants issued under the Stored Communications Act (SCA) do not reach emails and other user data stored overseas by a U.S. provider.

While no federal appellate court besides the Second Circuit has squarely addressed the issue, multiple district courts outside the Second Circuit have declined to follow the Second Circuit’s reasoning in similar fact patterns involving other technology giants. The result is that U.S. law enforcement has different authority to access foreign-stored user data depending on where in the United States a warrant application is made. Google, for example, has expended significant resources to develop new tools to determine the geographic location of its users’ data in order to comply with the Second Circuit’s approach. Yet the company currently faces a sanctions hearing in the Ninth Circuit for its alleged willful noncompliance with law enforcement requests, based on a district court ruling that parted ways with the Second Circuit.


In this era of big data, a company’s value may increasingly depend on the value of the information it has collected and stored. As companies amass ever-growing amounts of often sensitive personal data, the privacy and cybersecurity risks involved in mergers and acquisitions have become greater. As a result, today’s M&A transactions necessarily require deep due diligence on the privacy and cybersecurity risks posed by these deals, including a review of the M&A target’s communications on internal- and external-facing social media platforms.

In a practical webinar on September 26, 2017, Socially Aware contributor Christine Lyon and Mike Krigbaum discussed privacy and data security due diligence in M&A transactions. The topics they covered included:

  • Common challenges and pitfalls in performing privacy and cybersecurity due diligence;
  • The questions an acquirer’s team should ask to better identify, evaluate, and manage an acquisition target’s privacy and cybersecurity vulnerabilities; and
  • Steps the seller’s team can take to mitigate risk and help ensure that the deal is not jeopardized.

To view a recording of the webcast, click here.

Because it bases its assessments on job title, location and industry, LinkedIn’s new Salary feature might be more accurate than other online compensation estimation tools.

States are trying to pass laws that balance bereaved people’s desire to access their deceased loved ones’ social media accounts with the privacy interests of the account holders and the people with whom they corresponded. Without such laws, access to a deceased person’s digital assets might depend on the various social media platforms’ terms of use.

In lawsuits, social media has occasionally made it easier to serve process on adverse parties, but it has also made it more difficult to ensure that jurors remain unbiased.

A UK company wants to set car insurance premiums using an algorithm that analyzes car owners’ Facebook posts for pertinent personality traits?! The plan likely won’t go far; it violates Facebook’s platform policy.

Kenya deported a registered refugee for posting to social media his support of the U.N. secretary-general’s firing of a Kenyan commander of a peacekeeping mission in South Sudan, the refugee’s native country.

Thinking of posting a photo of yourself in the voting booth on Tuesday? Not so fast. In many states it’s illegal to share on social media photos of completed ballots and photos of yourself inside a voting booth. Courts all over the U.S. are hearing challenges to these so-called “ballot selfie” laws.

Does a lawyer violate ethics rules by purchasing the names of competing lawyers or law firms as keywords that improve the purchasing lawyer’s own rank in Google search results?

In the three years since its launch, an app called Scholly, which matches students with a personalized list of scholarships, has been downloaded over a million times. Here’s some advice for other social entrepreneurs from the company’s 25-year-old founder and CEO.

Some researchers believe the likes, status updates and photos posted to social media platforms will someday be the source material for breakthroughs in the field of psychiatry.

A UK solicitor was fined by a professional conduct regulator for posting a series of “unprofessional and offensive” tweets bragging about his victory over vulnerable adversaries.


Deluged with an unprecedented amount of information available for analysis, companies in just about every industry are discovering increasingly sophisticated ways to make market observations, predictions and evaluations. Big Data can help companies make decisions ranging from which candidates to hire to which consumers should receive a special promotional offer. As a powerful tool for social good, Big Data can bring new opportunities for advancement to underserved populations, increase productivity and make markets more efficient.

But if it’s not handled with care, Big Data has the potential to turn into a big problem. Increasingly, regulators like the Federal Trade Commission (FTC) are cautioning that the use of Big Data might perpetuate and even amplify societal biases by screening out certain groups from opportunities for employment, credit or other forms of advancement. To achieve the full potential of Big Data, and mitigate the risks, it is important to address the potential for “disparate impact.”

Disparate impact is a well-established legal theory under which companies can be held liable for discrimination for what might seem like neutral business practices, such as methods of screening candidates or consumers. If these practices have a disproportionate adverse impact on individuals based on race, age, gender or other protected characteristics, a company may find itself liable for unlawful discrimination even if it had no idea that its practices were discriminatory. In cases involving disparate impact, plaintiffs do not have to show that a defendant company intended to discriminate—just that its policies or actions had the discriminatory effect of excluding protected classes of people from key opportunities.

As the era of Big Data progresses, companies could expose themselves to discrimination claims if they are not on high alert for Big Data’s potential pitfalls. More than ever, now is the time for companies to adopt a more rigorous and thoughtful approach to data.

Consider a simple hypothetical: Based on internal research showing that employees who live closer to work stay at the company longer, a company formulates a policy to screen potential employees by their zip code. If the effect of the policy disproportionately excludes classes of people based on, say, their race—and if there is not another means to achieve the same goal with a smaller disparate impact—that policy might trigger claims of discrimination.
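To make the hypothetical concrete, here is a minimal sketch, in Python and with entirely invented applicant data, of how a company might compare selection rates across groups under such a zip-code screen. The 80 percent (“four-fifths”) threshold used below is a rough benchmark drawn from employment-selection guidance, not a bright-line legal test, and the group labels and numbers are purely illustrative.

```python
# Hypothetical illustration: does a zip-code screen disproportionately
# exclude applicants in one group? All data below is invented.

from collections import defaultdict

# (group, zip_code, passed_screen) for a set of hypothetical applicants
applicants = [
    ("group_a", "10001", True), ("group_a", "10002", True),
    ("group_a", "10003", False), ("group_a", "10001", True),
    ("group_b", "11201", False), ("group_b", "11202", False),
    ("group_b", "11203", True), ("group_b", "11204", False),
]

totals = defaultdict(int)
passed = defaultdict(int)
for group, _zip, ok in applicants:
    totals[group] += 1
    if ok:
        passed[group] += 1

# Selection rate for each group under the screen
rates = {g: passed[g] / totals[g] for g in totals}
print("selection rates:", rates)

# Compare each group's rate to the highest rate. A ratio below 0.8
# (the "four-fifths rule") is a common rough signal that a practice
# may have a disparate impact and deserves a closer look.
best = max(rates.values())
for g, r in rates.items():
    print(f"{g}: ratio vs. highest rate = {r / best:.2f}")
```

A gap like the one in this toy example would not by itself establish liability, but it is the kind of signal that should prompt a company to ask whether a less exclusionary practice could achieve the same business goal.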

Making matters more complex, companies have to be increasingly aware of the implications of using data they buy from third parties. A company that buys data to verify the creditworthiness of consumers, for example, might be held liable if it uses the data in a way that has a disparate impact on protected classes of people.

Expanding Uses of Disparate Impact

For decades, disparate-impact theories have been used to challenge policies that excluded classes of people in high-stakes areas such as employment and credit. The Supreme Court embraced the theory for the first time in a 1971 employment case called Griggs v. Duke Power Co., which challenged the company’s requirement that workers pass intelligence tests and have high school diplomas. The court found that the requirement violated Title VII of the Civil Rights Act of 1964 because it effectively excluded African-Americans and there was not a genuine business need for it. In addition, courts have allowed the disparate-impact theory in cases brought under the Americans with Disabilities Act and the Age Discrimination in Employment Act.

The theory is actively litigated today and has been expanding into new areas. Last year, for example, the Supreme Court held that claims using the disparate-impact theory can be brought under the Fair Housing Act.

In recent years, the FTC has brought several actions under the disparate-impact theory to address inequities in the consumer-credit markets. In 2008, for example, the agency challenged the policies of a home-mortgage lender, Gateway Funding Diversified Mortgage Services, which gave its loan officers autonomy to charge applicants discretionary overages. The policy, according to the FTC, had a disparate impact on African-American and Hispanic applicants, who were charged higher overages than whites, in violation of the Federal Trade Commission Act and the Equal Credit Opportunity Act.

The Good and Bad Impact of Big Data

As the amount of data about individuals continues to increase exponentially, and companies continue to find new ways to use that data, regulators suggest that more claims of disparate impact could arise. In a report issued in January, the FTC expressed concerns about how data is collected and used. Specifically, it warned companies to consider the representativeness of their data and the hidden biases in their data sets and algorithms.
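As a hedged illustration of what checking “representativeness” might look like in practice, the short Python sketch below compares a data set’s group composition against the population it is meant to describe; the group names, shares and threshold are invented for the example.

```python
# Hypothetical illustration: is this data set representative of the
# population a company wants to draw conclusions about?
# Population shares and sample composition below are invented.

population_share = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}

# A hypothetical collected sample of 1,000 records
sample = ["group_a"] * 720 + ["group_b"] * 230 + ["group_c"] * 50

sample_share = {g: sample.count(g) / len(sample) for g in population_share}

for g in population_share:
    gap = sample_share[g] - population_share[g]
    # Flag groups that fall noticeably short of their population share
    flag = "  <- under-represented" if gap < -0.02 else ""
    print(f"{g}: population {population_share[g]:.0%}, "
          f"sample {sample_share[g]:.1%}{flag}")
```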

The White House has also shown concern about Big Data’s use. In a report issued last year on Big Data and its impact on differential pricing—the practice of selling the same product to different customers at different prices—President Barack Obama’s Council of Economic Advisers warned: “Big Data could lead to disparate impacts by providing sellers with more variables to choose from, some of which will be correlated with membership in a protected class.”

Meanwhile, the European Union’s Article 29 Data Protection Working Party has cautioned that Big Data practices raise important social, legal and ethical questions related to the protection of individual rights.

To be sure, government officials also acknowledge the benefits that Big Data can bring. The FTC in its report noted that companies have used data to bring more credit opportunities to low-income people, to make workforces more diverse and to provide specialized health care to underserved communities.

And in its report, the Council of Economic Advisers acknowledged that Big Data “provides new tools for detecting problems, both before and perhaps after a discriminatory algorithm is used on real consumers.”

Indeed, in the FTC’s action brought against the mortgage lending company Gateway Funding Diversified Mortgage Services, the agency said the company had failed to “review, monitor, examine or analyze the loan prices, including overages, charged to African-American and Hispanic applicants compared to non-Hispanic white applicants.” In other words, Big Data could have helped the company spot the problem.

Policy Balancing Act

The policy challenge of Big Data, as many see it, is to root out discriminatory effects without discouraging companies from innovating and finding new and better ways to provide services and make smarter decisions about their business.

Regulators will have to decide which Big Data practices they consider to be harmful. There will inevitably be some gray areas. In its report, the FTC suggested advertising by lenders could be one example. It noted that a credit offer targeted at a specific community that is open to all will not likely trigger violations of the law. But it also observed that advertising campaigns can affect lending patterns, and the Department of Justice in the past has cited a creditor’s advertising choices as evidence of discrimination. As a result, the FTC advised lenders to “proceed with caution.”

As the era of Big Data gets under way, that’s not bad advice for any company.

*    *    *

This post originally appeared as an op-ed piece in MarketWatch.

For more on potential legal issues raised by Big Data usage, please see our Socially Aware post, Big Data, Big Challenges: FTC Report Warns of Potential Discriminatory Effects of Big Data.

 

 

We’re trying something new here at Socially Aware: In addition to our usual social-media and tech-law analyses and updates, we’re going to end each work week with a list of links to interesting social media stories around the Web, primarily things that caught our eye during the week that we may or may not ultimately write about in a future blog post.

Here’s our first list – enjoy!

Should prisoners be allowed to have Facebook pages?

Why do older people love Facebook? A New York Times writer asked her 61-year-old dad.

Judge upholds ex-cop’s murder conviction despite defense’s claim that juror’s Facebook posts evidenced a dislike for police.

The CIA’s venture capital arm is investing in companies that develop artificial intelligence to sift through enormous numbers of social media postings and decipher patterns.

Another cringe-worthy social media marketing campaign gaffe, this time by KFC Australia.

Facebook will now allow businesses to deliver automated customer support through chatbots.

The European Commission (the “Commission”) and the U.S. Department of Commerce issued the draft legal texts for the much-anticipated EU-U.S. Privacy Shield (the “Shield”), set to replace the currently inoperative Safe Harbor program (“Safe Harbor”). The new agreement is aimed at restoring the trust of individuals in the transatlantic partnership and the digital economy, and at putting an end to months of compliance concerns for U.S. and EU companies alike. The draft will be discussed with EU data protection authorities (“DPAs”) and adopted by Member State representatives before it becomes binding.

The publication of the Shield documents, on February 29, 2016, came at a time of high expectations and a certain tension. Last October, the European Court of Justice (the “ECJ”) invalidated the Commission’s decision 2000/520/EC and effectively shut down the Safe Harbor framework, which until then had allowed thousands of European companies to send personal information to U.S. companies that had committed to protecting personal information. As a result, thousands of U.S. and EU companies were suddenly left in legal limbo. In response to the risk of enforcement against companies relying on Safe Harbor, and to address the concerns raised by EU DPAs, the Commission announced in early February that a new political agreement had been reached with the U.S. government. It also made good on its promise to make the details of the agreement public by month’s end.

At first glance, the Shield bears a strong resemblance to Safe Harbor, which has led some commentators to denounce it as a mere duplicate in disguise. However, the Shield introduces substantial changes for data protection, including additional rights for EU individuals, stricter compliance requirements for U.S. organizations, and further limitations on government access to personal data. From the perspective of U.S. companies, it appears that the Shield may actually signify a shift to heavily monitored compliance. In this sense, the question may no longer be “How good is the Privacy Shield for privacy?” but rather “How burdensome will it become for businesses?”

This alert takes a closer look at the Shield and highlights some of the key differences from the Safe Harbor and other available data transfer mechanisms.

Some of the key takeaways include:

  • Safeguards related to intelligence activities will extend to all data transferred to the U.S., regardless of the transfer mechanism used.
  • The Shield’s dispute resolution framework provides multiple avenues for individuals to lodge complaints, more than those available under the Safe Harbor and alternative transfer mechanisms such as Standard Contractual Clauses or Binding Corporate Rules.
  • An organization’s compliance with the Privacy Shield will be directly and indirectly monitored by a wider array of authorities in the U.S. and the EU, possibly increasing regulatory risks and compliance costs for participating organizations.
  • The Department of Commerce will significantly expand its role in monitoring and supervising compliance, including by carrying out ex officio compliance reviews and investigations of participating organizations.
  • Participating organizations will be subject to additional compliance and reporting obligations, some of which will continue even after they withdraw from the Privacy Shield.

Overview

The Commission made public all the documents that will constitute the new agreement, namely: a draft Adequacy Decision, FAQs, a Factsheet, Annexes detailing the principles and various compliance mechanisms, and a Commission Communication describing the current developments in the broader context of transatlantic discussions of the past few years.

In its press release, the Commission stated that the Shield “reflects the requirements” set by the ECJ in its ruling from October 6, 2015 (the “Schrems ruling”). As a reminder, key concerns of the Schrems ruling included: (1) the indiscriminate and excessive government access to EU citizens’ personal information, and (2) the lack of judicial redress mechanisms for EU citizens for privacy-related complaints.

According to the Commission, the Shield will provide for “strong obligations on US companies” as well as “robust enforcement” mechanisms to ensure that such obligations are complied with. It will lay down “clear safeguards and transparency obligations on US government access.” Third, it will ensure effective redress of EU citizens’ rights by means of “several redress possibilities.” Finally, an annual joint review mechanism will allow the Commission, the U.S. Department of Commerce, and the European DPAs to monitor how well the Shield functions.

In a new report, the Federal Trade Commission (FTC) declines to call for new laws but makes clear that it will continue to use its existing tools to aggressively police unfair, deceptive—or otherwise illegal—uses of big data. Businesses that conduct big data analytics, or that use the results of such analysis, should familiarize themselves with the report to help ensure that their practices do not raise legal concerns.

The Report, titled “Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues,” grew out of a 2014 FTC workshop that brought together stakeholders to discuss big data’s potential to both create opportunities for consumers and discriminate against them. The Report aims to educate businesses on key laws, and also outlines concrete steps that businesses can take to maximize the benefits of big data while avoiding potentially exclusionary or discriminatory outcomes.

What Is “Big Data”?

The Report explains that “big data” arises from a confluence of factors, including the nearly ubiquitous collection of consumer data from a variety of sources, the plummeting cost of data storage, and powerful new capabilities for drawing connections and making inferences and predictions from collected data. The Report describes the life cycle of big data as involving four phases:

  • Collection: Little bits of data are collected about individual consumers from a variety of sources, such as online shopping, cross-device tracking, online cookies or the Internet of Things (i.e., connected products or services).
  • Compilation and Consolidation: The “little” data is compiled and consolidated into “big” data, often by data brokers who build profiles about individual consumers.
  • Data Mining and Analytics: The “big” data is analyzed to uncover patterns of past consumer behavior or predict future consumer behavior.
  • Use: Once analyzed, big data is used by companies to enhance the development of new products, individualize their marketing, and target potential consumers.

The Report focuses on the final phase of the life cycle: the use of big data. It explores how consumers may be both helped and harmed by companies’ use of big data.

Benefits and Risks of Big Data

The Report emphasizes that, from a policy perspective, big data can provide significant opportunities for social improvement: it can help target educational, credit, health care, and employment opportunities to low-income and underserved communities. For instance, the Report notes that big data is already being used to benefit underserved communities, such as by providing access to credit using nontraditional methods to establish creditworthiness, tailoring health care to individual patients’ characteristics, and expanding access to employment by helping employers hire more diverse workforces.