The Law and Business of Social Media
February 10, 2020 - Contracts, Web Scraping, Privacy, Copyright, Litigation

Clearview AI and the Legal Challenges Facing Facial Recognition Databases

Social media users upload millions of images to their accounts every day; Facebook alone receives 350 million new photos daily. Many social media websites make users’ information and images available to anyone with a web browser. The wealth of public information available on social media is immensely valuable, and the practice of webscraping, in which third parties use bots to harvest public information from websites and monetize it, is increasingly common.
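
To illustrate the mechanics, the short Python sketch below shows the basic pattern a scraper follows: fetch a publicly accessible page and collect the image links it contains. It is a simplified, hypothetical example (the profile URL is a placeholder, and the requests and BeautifulSoup libraries are assumed to be installed), not a description of any particular company’s system.

```python
# Minimal illustration of webscraping: fetch a public page and collect image URLs.
# The profile URL below is a hypothetical placeholder, not any company's actual target.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def scrape_image_urls(page_url: str) -> list[str]:
    """Download a public web page and return the absolute URLs of its images."""
    response = requests.get(page_url, timeout=10)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")
    image_urls = []
    for img in soup.find_all("img"):
        src = img.get("src")
        if src:
            # Resolve relative paths (e.g., "/photos/1.jpg") against the page URL.
            image_urls.append(urljoin(page_url, src))
    return image_urls

if __name__ == "__main__":
    # Hypothetical public profile page used purely for illustration.
    for url in scrape_image_urls("https://example.com/public-profile"):
        print(url)
```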

The photographs on social media sites raise thorny issues because they feature individuals’ biometric data, a type of data that is essentially immutable and highly personal. Because of these heightened privacy concerns, collecting, analyzing, and selling biometric data was long considered taboo by tech companies, at least until Clearview AI launched its facial recognition software.

Clearview AI’s Facial Recognition Database

In 2016, a developer named Hoan Ton-That began creating a facial recognition algorithm. In 2017, after refining the algorithm, Ton-That and his business partner Richard Schwartz (a former advisor to Rudy Giuliani) founded Clearview AI and began marketing its facial recognition software to law enforcement agencies. Clearview AI reportedly populates its photo database with publicly available images scraped from social media sites, including Facebook, YouTube, Twitter, and Venmo, among many others. The New York Times reported that the database has amassed more than three billion images.

By analyzing the biometric information from the images in the database, Clearview AI allows a user to upload a photo of any person and immediately see all publicly available photos of that person, along with links to where those photos appear. According to the company, Clearview AI’s facial recognition software has been used by more than 600 law enforcement agencies, including the FBI and the Department of Homeland Security.
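
At a conceptual level, this kind of search works by converting each face into a numeric “embedding” and then returning the stored images whose embeddings are closest to the query. The Python sketch below illustrates that matching step with a simple cosine-similarity search; the stand-in embeddings are randomly generated for demonstration, and nothing here reflects Clearview AI’s proprietary algorithm.

```python
# Simplified sketch of the matching step behind a facial recognition search:
# each face is represented as a numeric embedding, and a query is compared
# against stored embeddings by cosine similarity. Illustrative only; a real
# system would compute embeddings with a trained neural network.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query: np.ndarray,
           database: dict[str, np.ndarray],
           threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return (source_url, similarity) pairs above the threshold, best match first."""
    scored = ((url, cosine_similarity(query, stored)) for url, stored in database.items())
    matches = [(url, score) for url, score in scored if score >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # Stand-in embeddings; a real system would compute these from scraped photos.
    rng = np.random.default_rng(0)
    database = {f"https://example.com/photo/{i}": rng.normal(size=128) for i in range(5)}
    # A query that closely resembles one stored face should surface that photo's URL.
    query = database["https://example.com/photo/3"] + rng.normal(scale=0.05, size=128)
    print(search(query, database))
```

In practice, a database of billions of images would rely on approximate nearest-neighbor indexing rather than the brute-force comparison shown here, but the underlying idea of matching embeddings is the same.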

While Clearview AI touts its software as groundbreaking, such an expansive facial recognition database raises serious privacy concerns. Several U.S. senators, along with digital rights groups such as the Electronic Frontier Foundation, have criticized Clearview AI for eroding individual privacy rights. Recently, Twitter sent a cease-and-desist letter accusing Clearview AI of violating Twitter’s policies and demanding that the company stop taking photos and other data from the site. Similar cease-and-desist letters from Google, Facebook, Venmo, and YouTube reportedly followed.

Litigation Against Clearview AI

Clearview AI has received additional scrutiny after being served with three class action complaints, two pending in the U.S. District Court for the Northern District of Illinois and one pending in the U.S. District Court for the Eastern District of Virginia. In each lawsuit, the plaintiffs are seeking an injunction that will (a) bar Clearview AI from collecting the class members’ photographs and (b) order Clearview AI to delete all class members’ data from the company’s database, potentially destroying Clearview AI’s business. The complaints in these cases provide examples of some of the legal claims available to individuals to combat unauthorized webscraping of their private information.

In the first lawsuit, the plaintiff, Illinois resident David Mutnick, seeks class certification on behalf of a nationwide Constitutional Rights Class as well as an Illinois Class. On behalf of those classes, Mutnick alleges a combination of constitutional and state law claims.

Mutnick asserts a variety of constitutional claims arising under the First, Fourth, and Fourteenth Amendments, along with a violation of the Contracts Clause. While Clearview AI is not a state actor and thus is not ordinarily subject to constitutional claims, Mutnick argues that Clearview AI conspired with state actors to violate his federally protected rights and is therefore liable as a private party under a “joint participation” theory.

In the second lawsuit, the plaintiff, Illinois resident Anthony Hall, asserts several state law claims on behalf of a class of Illinois residents. Hall alleges that Clearview AI’s unauthorized webscraping and sale of its software give rise to a common law conversion claim and violate the Illinois Consumer Fraud and Deceptive Business Practices Act.

Additionally, both Mutnick and Hall allege multiple violations of the Illinois Biometric Information Privacy Act. This Illinois law: (1) prohibits private companies from collecting an individual’s biometric data without first notifying the individual and obtaining their written consent; (2) limits the circumstances under which biometric data may be disclosed; and (3) prohibits private entities from profiting from the biometric information. Under the statute, individuals have a private right of action to enforce the law against companies that violate it and may recover up to $5,000 for each intentional or reckless violation.

The Illinois Biometric Information Privacy Act is currently the most stringent biometric privacy law in the United States and the only biometric privacy law that provides a private right of action, making it a powerful tool for plaintiffs.

In the third lawsuit, the plaintiff, Virginia resident Shelby Roberson, asserts several state law claims. Roberson alleges that by scraping the class members’ images, Clearview AI violated several provisions of the Virginia Computer Crimes Act. She also alleges that Clearview AI’s use of the images violates Va. Code § 8.01-40, a state law provision that gives individuals the right to control the commercial use of their names and images.

The outcome in each of these lawsuits remains to be seen, but given the privacy implications of Clearview AI’s software, it would be surprising if additional parties did not assert similar claims against Clearview AI.

Other Potential Claims Against Clearview AI for Unauthorized Webscraping

The various social media sites from which Clearview AI took the photos that populate its database also have tools at their disposal to combat unauthorized webscraping by firms like Clearview AI. And when it comes to copying personal photos, even individuals outside Illinois who are not protected by the Illinois Biometric Information Privacy Act may have independent misappropriation or breach of privacy claims. As other states consider enacting or expanding data privacy and biometric privacy laws, the number of options available to challenge unauthorized webscraping will likely continue to grow. In the meantime, Clearview AI and the controversy surrounding it highlight how important it is to understand artificial intelligence and how it intersects with the law.