
Foreign websites that use geotargeted advertising may be subject to personal jurisdiction in the United States, even if they have no physical presence in the country and do not specifically target their services to U.S. users, according to a new ruling from the Fourth Circuit Court of Appeals.

In UMG Recordings, Inc. v. Kurbanov, twelve record companies sued Tofig Kurbanov, who owns and operates the websites flvto.biz and 2conv.com. These websites enable visitors to rip audio tracks from videos on various platforms, like YouTube, and convert the audio tracks into downloadable files.

The record companies sued Kurbanov for copyright infringement and argued that a federal district court in Virginia had specific personal jurisdiction over Kurbanov because of his contacts with Virginia and with the United States more generally. Kurbanov moved to dismiss for lack of personal jurisdiction, and the district court granted his motion.
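Geotargeting of the kind at issue is conceptually simple: the ad network resolves a visitor's location (typically from the IP address) and serves the most location-specific campaign available. A minimal Python sketch follows; the campaign table and function names are hypothetical, and real ad brokers resolve location via a geolocation database rather than a hard-coded lookup.

```python
# Illustrative sketch of geotargeted ad selection. The campaign table is
# invented; a real ad network would derive country/region from the
# visitor's IP address using a geolocation database.

ADS_BY_REGION = {
    "US-VA": "Campaign targeted at Virginia visitors",
    "US": "Generic U.S. campaign",
}

DEFAULT_AD = "Worldwide fallback campaign"

def pick_ad(country, region=None):
    """Return the most specific campaign available for the visitor's location."""
    if region is not None:
        key = "{}-{}".format(country, region)
        if key in ADS_BY_REGION:
            return ADS_BY_REGION[key]
    return ADS_BY_REGION.get(country, DEFAULT_AD)

print(pick_ad("US", "VA"))  # Campaign targeted at Virginia visitors
print(pick_ad("FR"))        # Worldwide fallback campaign
```

The deliberate, location-keyed selection is what distinguishes geotargeting from a site that is merely accessible everywhere, a distinction at the heart of the jurisdictional analysis.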
Continue Reading Stretching the Bounds of Personal Jurisdiction, 4th Circuit Finds Geotargeted Advertising May Subject Foreign Website Owner to Personal Jurisdiction in the U.S.

Is scraping data from a publicly available website trade secret misappropriation? Based on a new opinion from the Eleventh Circuit, it might be.

In Compulife Software, Inc. v. Newman, Compulife Software, a life insurance quote database service, alleged that one of its competitors scraped millions of insurance quotes from its database and then sold the proprietary data as its own. Compulife brought a number of claims against its competitors, including misappropriation of trade secrets under Florida’s version of the Uniform Trade Secrets Act (FUTSA) and under the Federal Defend Trade Secrets Act (DTSA).

Following a bench trial, Magistrate Judge James Hopkins found that, while Compulife’s underlying database merits trade secret protection, the individual quotes generated through public Internet queries to the database do not. So using a bot to take those individual quotes one by one did not constitute a misappropriation of trade secrets. On appeal, however, the Eleventh Circuit disagreed, vacated, and remanded the case.

Facts of the Case

Compulife’s main product is its “Transformative Database,” which contains many different premium-rate tables that it receives from life insurance companies. While these rate tables are available to the public, Compulife often receives these tables before they are released for general use. In addition, Compulife applies a special formula to these rate tables to calculate its personalized life insurance quotes.
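At scale, taking "individual quotes one by one" amounts to enumerating every combination of quote parameters against the public interface. A hypothetical Python sketch follows; the parameters and the stand-in quote function are invented for illustration, since Compulife's actual formula is proprietary.

```python
# Hypothetical sketch of reconstructing a quote database by enumerating
# every combination of query parameters. fetch_quote stands in for an
# HTTP request to the public quote form; the pricing rule is made up.

from itertools import product

AGES = range(18, 21)           # tiny ranges for illustration
SEXES = ["M", "F"]
ZIP_CODES = ["10001", "90210"]

def fetch_quote(age, sex, zip_code):
    """Stand-in for one query to the public quote form."""
    return {"age": age, "sex": sex, "zip": zip_code, "premium": 100 + age}

# One query per parameter combination -- millions of queries at full scale.
quotes = [fetch_quote(a, s, z) for a, s, z in product(AGES, SEXES, ZIP_CODES)]
print(len(quotes))  # 3 ages x 2 sexes x 2 zips = 12 combinations
```

The Eleventh Circuit's concern was precisely this aggregation: each individual quote may be public, but systematically harvesting enough of them can effectively reproduce the protected database.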
Continue Reading Webscraping a Publicly Available Database May Constitute Trade Secret Misappropriation

Often lauded as the most important law for online speech, Section 230 of the Communications Decency Act (CDA) does not just protect popular websites like Facebook, YouTube and Google from defamation and other claims based on third-party content. It is also critically important to spyware and malware protection services that offer online filtration tools.

Section 230(c)(2) grants broad immunity to any interactive computer service that blocks content it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Under a plain reading of the statute, Section 230(c)(2) clearly offers broad protection. With respect to what the phrase “otherwise objectionable” was intended to capture, however, the protections are less clear.
Continue Reading Computer Service Providers Face Implied Limits on CDA Immunity

New York courts are increasingly ordering the production of social media posts in discovery, including personal messages and pictures, if they shed light on pending litigation. Nonetheless, courts remain cognizant of privacy concerns, requiring parties seeking social media discovery to avoid broad requests akin to fishing expeditions.

In early 2018, in Forman v. Henkin, the New York State Court of Appeals laid out a two-part test to determine if someone’s social media should be produced: “first consider the nature of the event giving rise to the litigation and the injuries claimed . . . to assess whether relevant material is likely to be found on the Facebook account. Second, balanc[e] the potential utility of the information sought against any specific ‘privacy’ or other concerns raised by the account holder.”

The Court of Appeals left it to lower New York courts to struggle over the level of protection social media should be afforded in discovery. Since this decision, New York courts have begun to flesh out how to apply the Forman test.

In Renaissance Equity Holdings LLC v. Webber, former Bad Girls Club cast member Mercedes Webber, or “Benze Lohan,” was embroiled in a succession suit. Ms. Webber wanted to continue to live in her mother’s rent-controlled apartment after the death of her mother. To prevail, Ms. Webber had to show that she had lived at the apartment for at least two years prior to her mother’s death.
Continue Reading Are Facebook Posts Discoverable? Application of the Forman Test in N.Y.

Every day, social media users upload millions of images to their accounts; 350 million photos are uploaded to Facebook alone. Many social media websites make users’ information and images available to anyone with a web browser. The wealth of public information available on social media is immensely valuable, and the practice of webscraping—third parties using bots to scrape public information from websites to monetize the information—is increasingly common.
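Mechanically, a scraping bot is just an HTTP client paired with an HTML parser. A minimal sketch using only Python's standard library follows; the page markup and the "public-info" class name are hypothetical, and a real bot would fetch each page with urllib.request before parsing.

```python
# Minimal sketch of the parsing half of a webscraper. The sample markup is
# invented; in practice the bot would download each page with
# urllib.request.urlopen before feeding it to the parser.

from html.parser import HTMLParser

class ProfileScraper(HTMLParser):
    """Collects the text of every element tagged class="public-info"."""

    def __init__(self):
        super().__init__()
        self._capture = False
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "public-info":
            self._capture = True

    def handle_data(self, data):
        if self._capture:
            self.fields.append(data.strip())
            self._capture = False

sample_page = ('<div class="public-info">Jane Doe</div>'
               '<div class="public-info">Data Analyst</div>')
scraper = ProfileScraper()
scraper.feed(sample_page)
print(scraper.fields)  # ['Jane Doe', 'Data Analyst']
```

Run against thousands of public profile pages, a loop over a parser like this is all it takes to turn scattered public posts into a structured, monetizable dataset.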

The photographs on social media sites raise thorny issues because they feature individuals’ biometric data—a type of data that is essentially immutable and highly personal. Because of the heightened privacy concerns, collecting, analyzing and selling biometric data was long considered taboo by tech companies — at least until Clearview AI launched its facial recognition software.

Clearview AI’s Facial Recognition Database

In 2016, a developer named Hoan Ton-That began creating a facial recognition algorithm. In 2017, after refining the algorithm, Ton-That, along with his business partner Richard Schwartz (former advisor to Rudy Giuliani), founded Clearview AI and began marketing its facial recognition software to law enforcement agencies. Clearview AI reportedly populates its photo database with publicly available images scraped from social media sites, including Facebook, YouTube, Twitter, Venmo, and many others. The New York Times reported that the database has amassed more than three billion images.
Continue Reading Clearview AI and the Legal Challenges Facing Facial Recognition Databases

A random Twitter account tags a Japanese company and badmouths it in a series of tweets. Because the tweets are tagged, a search of the company’s name on Twitter will display the tweets with the negative comments among the search results. Upset over the tweets, the Japanese company wants to sue the tweeter in Japan. But how can it? The tweeter has not used his real name.

This is where discovery under 28 U.S.C. § 1782 can help. Section 1782 provides a vehicle for companies or individuals seeking U.S. discovery in aid of foreign litigation—even if the litigation is merely contemplated and not yet commenced. Specifically, Section 1782 provides that a federal district court may grant an applicant the authority to issue subpoenas in the United States to obtain documents or testimony, including documents or testimony seeking to unmask an anonymous Internet poster to pursue defamation claims abroad.

To pursue Section 1782 discovery, an applicant needs to establish:

  • that the requested discovery is for use in an actual or contemplated proceeding in a foreign or international tribunal;
  • that the applicant is an “interested person” in that proceeding; and
  • that the person from whom the discovery is sought resides or is found in the district of the court where the applicant is making the application.


Continue Reading Foreign Companies Can Use 28 U.S.C. § 1782 to Unmask Anonymous Internet Posters

A recent decision from a federal court in New York highlights the limits of the protections social media users enjoy under Section 230 of the Communications Decency Act (CDA). The case involves Joy Reid, the popular host of MSNBC’s AM Joy who has more than two million Twitter and Instagram followers, and the interaction between a young Hispanic boy and a “Make America Great Again” (MAGA) hat-wearing woman named Roslyn La Liberte at a Simi Valley, California, City Council meeting.

The case centers on a single re-tweet by Reid and two of her Instagram posts.

Here is Reid’s re-tweet.

It says: “You are going to be the first deported” “dirty Mexican” Were some of the things they yelled at this 14 year old boy. He was defending immigrants at a rally and was shouted down.   

Spread this far and wide this woman needs to be put on blast.


Here is Reid’s first Instagram post from the same day.

It says: joyannreid He showed up to a rally to defend immigrants. … She showed up too, in her MAGA hat, and screamed, “You are going to be the first deported” … “dirty Mexican!” He is 14 years old. She is an adult. Make the picture black and white and it could be the 1950s and the desegregation of a school. Hate is real, y’all. It hasn’t even really gone away.
Continue Reading The Joys and Dangers of Tweeting: A CDA Immunity Update

For the last twenty years, the music industry has been in a pitched battle to combat unauthorized downloading of music. Initially, the industry focused on filing lawsuits to shut down services that offered peer-to-peer or similar platforms, such as Napster, Aimster and Grokster. For a time, the industry started filing claims against individual infringers to dissuade others from engaging in similar conduct. Recently, the industry has shifted gears and has begun to focus on Internet Service Providers (ISPs), which provide Internet connectivity to their users.

The industry’s opening salvo against ISPs was launched in 2014 when BMG sued Cox Communications, an ISP with over three million subscribers. BMG’s allegations were relatively straightforward. BMG alleged that Cox’s subscribers were engaged in rampant unauthorized copying of musical works using Cox’s internet service, and that Cox did not do enough to stop it. While the DMCA provides safe harbors if an ISP takes appropriate action against “repeat infringers,” BMG alleged that Cox could not avail itself of this safe harbor based on its failure to police its subscribers.
Continue Reading Will the Music Industry Continue To Win Its Copyright Battle Against ISPs?

A recent decision from the Ninth Circuit Court of Appeals in a dispute between LinkedIn and hiQ Labs has spotlighted the thorny legal issues involved in unauthorized web scraping of data from public websites. While some may interpret the LinkedIn decision as greenlighting such activity, this would be a mistake. On close review of the decision, and in light of other decisions that have held unauthorized web scrapers liable, the conduct remains vulnerable to legal challenge.

hiQ and LinkedIn

Founded in 2012, hiQ is a data analytics company that uses automated bots to scrape information from LinkedIn’s website. hiQ targets the information that users have made public for all to see in their LinkedIn profiles. hiQ pays nothing to LinkedIn for the data, which it uses, along with its own predictive algorithm, to yield “people analytics,” which it then sells to clients.

In May 2017, LinkedIn sent a cease-and-desist letter to hiQ demanding that it stop accessing and copying data from LinkedIn’s servers. LinkedIn also implemented technical measures to prevent hiQ from accessing the site, which hiQ circumvented.
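Beyond IP blocking, sites also publish crawling restrictions in a robots.txt file, which well-behaved bots consult before fetching pages. A sketch using Python's urllib.robotparser follows; the rules below are illustrative, not LinkedIn's actual robots.txt.

```python
# Sketch of how a crawler checks a site's robots.txt rules using the
# standard library. The rules here are invented for illustration.

from urllib import robotparser

rules = """
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)  # in practice, set_url() + read() fetches the live file

print(rp.can_fetch("my-bot", "https://example.com/profiles/jane"))  # True
print(rp.can_fetch("my-bot", "https://example.com/private/data"))   # False
```

robots.txt is advisory, not a technical barrier, which is part of why disputes like hiQ's turn on cease-and-desist letters and blocking measures rather than on the file itself.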

Shortly thereafter, with its entire business model under threat, hiQ filed suit in the United States District Court for the Northern District of California seeking injunctive relief and a declaration that LinkedIn had no right to prevent it from accessing public LinkedIn member profiles.
Continue Reading Ninth Circuit’s LinkedIn Decision Does Not Greenlight the Unauthorized Web Scraping of Public Websites

A recent Second Circuit decision makes clear that the safe harbor that social media and other Internet companies enjoy under Section 230 of the Communications Decency Act broadly applies to a wide variety of claims.

When you think about the Section 230 safe harbor, don’t just think defamation or other similar state law claims. Consider whether the claim—be it federal, state, local, or foreign—seeks to hold a party that publishes third-party content on the Internet responsible for publishing the content. If, after stripping it all down, this is the crux of the cause of action, the safe harbor should apply (absent a few statutory exclusions discussed below). The safe harbor should apply even if the party uses its discretion as a publisher in deciding how best to target its audience or to display the information provided by third parties.

In 2016, Facebook was sued by the estates of four U.S. citizens who died in terrorist attacks in Israel and one who narrowly survived but was grievously injured. The plaintiffs claimed that Facebook should be held liable under the federal Anti-Terrorism Act and the Justice Against Sponsors of Terror Act, which provide a private right of action against those who aid and abet acts of international terrorism, conspire in furtherance of acts of terrorism, or provide material support to terrorist groups. The plaintiffs also asserted claims arising under Israeli law.
Continue Reading CDA Section 230 Immunizes Platform From Liability for Friend and Content Suggestion Algorithms