• Buy local. Facebook has just announced that it’s going to provide hyper-local advertising services for merchants who want to reach consumers in very specific geographic areas. This new feature reportedly will allow a business to target just those consumers who are within a mile of the business’s physical location. Facebook is able to roll out this new service because so many of Facebook’s one billion plus mobile users permit Facebook to collect their location information, or otherwise provide Facebook with the data needed to enable hyper-local ads. The feature should launch in the United States in just a few weeks.
  • Psst – wanna know a secret? Secret is a hot new social network designed to permit people to share their secrets online in a completely anonymous setting, without letting anyone know who has made the post. But how secure is it actually? According to a Wired article, not very secure. “White hat” hackers – those who try to find the vulnerabilities of a network without doing harm – have repeatedly found out people’s supposed secrets by using basic hacking techniques. The best-known hack works only one way; the hacker can find a person’s secret if the hacker knows the person’s e-mail address, but can’t tie a posted secret to any particular individual. The Wired article raises an interesting question as to whether any app or platform can be truly social and truly secret at the same time.
  • Nyet. The U.S. Court of Appeals for the Second Circuit recently rejected an effort by prosecutors to use a profile page from a popular Russian social media platform, Vk.com, to link a defendant to the sending of an allegedly fake birth certificate from a particular e-mail address. The Vk.com profile page at issue included a photograph of the defendant and the name “azmadeuz,” which was part of the e-mail address in question. The trial court had admitted the page into evidence, but the Second Circuit reversed, finding that, although it doesn’t take much to authenticate evidence, the page at issue could not be authenticated. In particular, the Second Circuit found that there could be no “reasonable conclusion” that the page belonged to the defendant and wasn’t bogus in some way. The truly interesting question is whether there should be a higher standard for authenticating social media and other Internet-based evidence; the Second Circuit, however, declined to set such a standard, holding instead that the focus should remain on the specific facts surrounding the particular item of evidence to be authenticated.

Not to be outdone by Florida, California has yet again amended its data security breach law and again in groundbreaking (yet confusing) fashion. On September 30, 2014, California Governor Brown signed into law a bill (“AB 1710”) that appears to impose the country’s first requirement to provide free identity theft protection services to consumers in connection with certain data security breaches. The law also amends the state’s personal information safeguards law and Social Security number (“SSN”) law. The amendments will become effective on January 1, 2015.

Free Identity Theft Protection Services Required for Certain Breaches

Most significantly, AB 1710 appears to amend the California breach law to require that a company offer a California resident “appropriate identity theft prevention and mitigation” services, at no cost, if a breach involves that individual’s name and SSN, driver’s license number or California identification card number. Specifically, AB 1710 provides, in pertinent part, that if a company providing notice of such a breach was “the source of the breach”:

an offer to provide appropriate identity theft prevention and mitigation services, if any, shall be provided at no cost to the affected person for not less than 12 months, along with all information necessary to take advantage of the offer to any person whose information was or may have been breached.

The drafting of this requirement is far from clear and open to multiple readings. In particular, the phrase “if any” can be read in at least two ways. It can be read to modify “appropriate identity theft prevention and mitigation services”; under this reading, the law would impose an obligation to provide free identity theft protection services if any such services are appropriate. Alternatively, “if any” could be read to modify the “offer” itself; under this reading, the law would provide only that if a company chooses to offer identity theft protection services, those services must come at no cost to the consumer. Until the California Attorney General (“AG”) or the California courts interpret the provision, companies will have to make their own judgment as to which reading governs.

The requirement is unclear in other respects as well. For example, the statute does not specify what type of services would qualify as “appropriate identity theft prevention and mitigation services.” Would a credit monitoring product alone be sufficient to meet the requirement, or would the law require something in addition to credit monitoring, such as an identity theft insurance component?

That said, state AGs historically have encouraged companies to provide free credit monitoring to consumers following breaches, and even though not legally required, free credit monitoring has become a common practice, particularly for breaches involving SSNs and, increasingly, for high-profile breaches. California, however, appears to be the first state to legally require that companies offer some type of free identity theft protection service for certain breaches.

AB 1710 is particularly notable in its approach. First, the offer of free identity theft protection services will be required only for breaches involving SSNs, driver’s license numbers or California identification card numbers. It will not be required for breaches involving other types of covered personal information, such as payment card information or usernames and passwords. This approach endorses a position that many companies have long held—that credit monitoring is appropriate only when the breach creates an actual risk of new account identity theft (as opposed to fraud on existing accounts). In addition, the offer of free identity theft protection services will be required only for a period of one year (as opposed to, for example, two years). The length of the offer of free credit monitoring has always been an issue of debate, and California has now endorsed the position that a one-year offer is sufficient.

Continue Reading Breaking Old Ground: California Again Amends Data Security Breach Law

As the quality of visual recognition software continues to improve, privacy concerns have grown concomitantly. Because we now document our lives with so many pictures posted to social media—Facebook hosts over 250 billion photos, with 350 million new photos added every day—photographs are becoming hugely important to the big data movement. Indeed, some say Facebook stores over 4 percent of all the pictures ever taken in history. What truths may lurk behind all those images—and who wants to know?

Cutting-edge visual recognition software programs now make it possible not only to identify a person in a photo on Facebook or elsewhere, but also to determine what that person is doing in the photo.

There’s already image recognition software, used by the fashion industry, that lets a shopper take a picture on his or her smartphone of a piece of clothing and then match that piece by color, pattern, and shape to the offerings of 170 retailers that sell something similar. That’s a benign use of this technology. But more ominous applications are already emerging.

My sense is that this concern is helping to fuel the growth of ephemeral social media sites such as Snapchat, where—at least in theory—photos don’t sit there in perpetuity to be exploited by data miners; they last all of 10 seconds.

After all, imagine all your online photos being processed into a data profile by advertisers or law enforcement, showing where you live, where you’ve been, with whom you hang out and what activities you’ve participated in. If a single picture is worth a thousand words, what are 250 billion photos worth?

“Web scraping” or “web harvesting”—the practice of extracting large amounts of data from publicly available websites using automated “bots” or “spiders”—accounted for 18% of site visitors and 23% of all Internet traffic in 2013. Websites targeted by scrapers may incur damages resulting from, among other things, increased bandwidth usage, network crashes, the need to employ anti-spam and filtering technology, user complaints, reputational damage and costs of mitigation that may be incurred when scrapers spam users, or worse, steal their personal data.

Though sometimes difficult to combat, scraping is quite easy to perform. A simple online search will return a large number of scraping programs, both proprietary and open source, as well as D.I.Y. tutorials. Of course, scraping can be beneficial in some cases. Companies with limited resources may use scraping to access large amounts of data, spurring innovation and allowing such companies to identify and fill areas of consumer demand. For example, Mint.com reportedly used screen scraping to aggregate information from bank websites, which allowed users to track their spending and finances. Unfortunately, not all scrapers use their powers for good. In one case on which we previously reported, the operators of the website Jerk.com allegedly scraped personal information from Facebook to create profiles labeling people “Jerk” or “not a Jerk.” According to the Federal Trade Commission (FTC), over 73 million victims, including children, were falsely told they could revise their profiles by paying $30 to the website.
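To see just how easy scraping is, consider the following minimal sketch in Python. It is purely illustrative: the page content, CSS class names and profile fields are hypothetical, and it parses a static HTML snippet with the standard library’s `html.parser` rather than fetching a live site — but a real scraper would simply download pages (for example, with `urllib.request`) in a loop and feed them to the same kind of parser.

```python
from html.parser import HTMLParser

# Hypothetical page content standing in for a downloaded profile page.
SAMPLE_PAGE = """
<html><body>
  <div class="profile"><span class="name">Alice Example</span>
    <span class="email">alice@example.com</span></div>
  <div class="profile"><span class="name">Bob Example</span>
    <span class="email">bob@example.com</span></div>
</body></html>
"""

class ProfileScraper(HTMLParser):
    """Collects the text of every <span class="name"> or <span class="email">."""

    def __init__(self):
        super().__init__()
        self._field = None   # field whose <span> we are currently inside
        self.records = []    # list of (field, value) pairs scraped so far

    def handle_starttag(self, tag, attrs):
        if tag == "span":
            cls = dict(attrs).get("class")
            if cls in ("name", "email"):
                self._field = cls

    def handle_data(self, data):
        if self._field and data.strip():
            self.records.append((self._field, data.strip()))
            self._field = None

scraper = ProfileScraper()
scraper.feed(SAMPLE_PAGE)
print(scraper.records)
```

A few dozen lines like these, pointed at thousands of URLs, are all a “bot” needs to harvest names and e-mail addresses at scale — which is why scraping is so cheap to perform and so costly to defend against.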

Website operators have asserted various claims against scrapers, including copyright claims, trespass to chattels claims and contract claims based on allegations that scrapers violated the websites’ terms of use. This article, however, focuses on another tool that website operators have used to combat scraping: the federal Computer Fraud and Abuse Act (CFAA).

Continue Reading Data for the Taking: Using the Computer Fraud and Abuse Act to Combat Web Scraping

The European Court of Justice (ECJ) has issued a surprising decision against Google that has significant implications for global companies.

On May 13, 2014, the ECJ issued a ruling that did not follow the rationale or the conclusions of its Advocate General, but instead sided with the Spanish data protection authority (DPA) and found that:

  • Individuals have the right to request that a search engine provider make content that was legitimately published on third-party websites unsearchable by the individual’s name if the personal information published is inadequate, irrelevant or no longer relevant;
  • Google’s search function resulted in Google acting as a data controller within the meaning of the Data Protection Directive 95/46, despite the fact that Google did not control the data appearing on webpages of third party publishers;
  • Spanish law applied because Google Inc. processed data that was closely related to Google Spain’s selling of advertising space, even though Google Spain did not process any of the data. In doing so, the court departed from earlier decisions, reasoning that the services were targeted at the Spanish market and that such a broad application was required for the Directive to be effective.

The ruling will have significant implications for search engines, social media operators and businesses with operations in Europe generally. While the much debated “right to be forgotten” is strengthened, the decision may open the floodgates for people living in the EU’s 28 member states to demand that Google and other search engine operators remove links from search results. The problem is that the ECJ identifies a broad range of data that may have to be erased: not only incorrect or unlawful data, but also data that is “inadequate, irrelevant, or no longer relevant,” as well as data that is “excessive or not kept up to date” in relation to the purposes for which it was processed. It is left to companies to decide when data falls into these categories.

In that context, the ruling will likely create new costs for companies and could generate thousands of individual complaints. What is more, companies operating search engines for users in the EU will face the difficult task of assessing each complaint to determine whether the individual’s rights prevail over the public’s interest in access to the information. Internet search engines with operations in the EU will have to handle requests from individuals seeking the deletion of search results that link to pages containing their personal data.

That said, the scope of the ruling is limited to name searches. While search engines will have to de-activate the name search, the data will remain available through other keyword searches. The ECJ did not impose new requirements relating to the content of the webpages themselves, in an effort to preserve freedom of expression and, more particularly, press freedom. But the ruling will still leave a great deal of lawfully published information available only to a limited audience.

Below we set out the facts of the case and the most significant implications of the decision, and address its possible consequences for all companies operating search engines.

Continue Reading European Court of Justice Strengthens Right to Be Forgotten