New York courts are increasingly ordering the production of social media posts in discovery, including personal messages and pictures, if they shed light on pending litigation. Nonetheless, courts remain cognizant of privacy concerns, requiring parties seeking social media discovery to avoid broad requests akin to fishing expeditions.

In early 2018, in Forman v. Henkin, the New York Court of Appeals laid out a two-part test to determine whether a party’s social media should be produced: “first consider the nature of the event giving rise to the litigation and the injuries claimed . . . to assess whether relevant material is likely to be found on the Facebook account. Second, balanc[e] the potential utility of the information sought against any specific ‘privacy’ or other concerns raised by the account holder.”

The Court of Appeals left it to lower New York courts to struggle over the level of protection social media should be afforded in discovery. Since this decision, New York courts have begun to flesh out how to apply the Forman test.

In Renaissance Equity Holdings LLC v. Webber, former Bad Girls Club cast member Mercedes Webber, also known as “Benze Lohan,” was embroiled in a succession suit. Ms. Webber wanted to continue living in her mother’s rent-controlled apartment after the death of her mother. To prevail, Ms. Webber had to show that she had lived in the apartment for at least two years prior to her mother’s death.

Each day, social media users upload millions of images to their accounts; 350 million photos are uploaded to Facebook alone. Many social media websites make users’ information and images available to anyone with a web browser. The wealth of public information available on social media is immensely valuable, and the practice of web scraping—third parties using bots to harvest public information from websites and monetize it—is increasingly common.

The photographs on social media sites raise thorny issues because they feature individuals’ biometric data—a type of data that is essentially immutable and highly personal. Because of these heightened privacy concerns, collecting, analyzing, and selling biometric data was long considered taboo by tech companies—at least until Clearview AI launched its facial recognition software.

Clearview AI’s Facial Recognition Database

In 2016, a developer named Hoan Ton-That began creating a facial recognition algorithm. In 2017, after refining the algorithm, Ton-That and his business partner Richard Schwartz (a former advisor to Rudy Giuliani) founded Clearview AI and began marketing its facial recognition software to law enforcement agencies. Clearview AI reportedly populates its photo database with publicly available images scraped from social media sites, including Facebook, YouTube, Twitter, Venmo, and many others. The New York Times reported that the database has amassed more than three billion images.

A random Twitter account tags a Japanese company and badmouths it in a series of tweets. Because the tweets are tagged, a search of the company’s name on Twitter will display the tweets with the negative comments among the search results. Upset over the tweets, the Japanese company wants to sue the tweeter in Japan. But how can it? The tweeter has not used his real name.

This is where discovery under 28 U.S.C. § 1782 can help. Section 1782 provides a vehicle for companies or individuals seeking U.S. discovery in aid of foreign litigation—even if the litigation is merely contemplated and not yet commenced. Specifically, Section 1782 provides that a federal district court may grant an applicant the authority to issue subpoenas in the United States to obtain documents or testimony, including documents or testimony seeking to unmask an anonymous Internet poster to pursue defamation claims abroad.

To pursue Section 1782 discovery, an applicant needs to establish:

  • that the requested discovery is for use in an actual or contemplated proceeding in a foreign or international tribunal;
  • that the applicant is an “interested person” in that proceeding; and
  • that the person from whom the discovery is sought resides or is found in the district of the court where the applicant is making the application.



A recent decision from a federal court in New York highlights the limits of the protection social media users enjoy under Section 230 of the Communications Decency Act (CDA). The case involves Joy Reid, the popular host of MSNBC’s AM Joy who has more than two million Twitter and Instagram followers, and an interaction at a Simi Valley, California, City Council meeting between a young Hispanic boy and Roslyn La Liberte, a woman wearing a “Make America Great Again” (MAGA) hat.

The case centers on a single re-tweet by Reid and two of her Instagram posts.

Here is Reid’s re-tweet.

It says: “You are going to be the first deported” “dirty Mexican” Were some of the things they yelled at this 14 year old boy. He was defending immigrants at a rally and was shouted down.   

Spread this far and wide this woman needs to be put on blast.

Here is Reid’s first Instagram post from the same day.

It says: joyannreid He showed up to a rally to defend immigrants. … She showed up too, in her MAGA hat, and screamed, “You are going to be the first deported” … “dirty Mexican!” He is 14 years old. She is an adult. Make the picture black and white and it could be the 1950s and the desegregation of a school. Hate is real, y’all. It hasn’t even really gone away.

For the last twenty years, the music industry has been in a pitched battle to combat unauthorized downloading of music. Initially, the industry focused on filing lawsuits to shut down services that offered peer-to-peer or similar platforms, such as Napster, Aimster, and Grokster. For a time, the industry also filed claims against individual infringers to dissuade others from engaging in similar conduct. Recently, the industry has shifted gears and begun to focus on Internet Service Providers (ISPs), which provide Internet connectivity to their users.

The industry’s opening salvo against ISPs was launched in 2014, when BMG sued Cox Communications, an ISP with over three million subscribers. BMG’s allegations were relatively straightforward: Cox’s subscribers were engaged in rampant unauthorized copying of musical works using Cox’s Internet service, and Cox did not do enough to stop it. While the Digital Millennium Copyright Act (DMCA) provides safe harbors for ISPs that take appropriate action against “repeat infringers,” BMG alleged that Cox could not avail itself of this safe harbor because it failed to police its subscribers.

Courts continue to grapple with the enforceability of online agreements. While courts generally enforce clickwrap agreements—online agreements where users affirmatively show their acceptance after being presented with the terms, usually by clicking “I agree”—browsewrap agreements have stood on shakier ground. Browsewrap agreements are online terms that, unlike clickwrap agreements, do not require any affirmative indication of consent. Indeed, users can often continue using a website without ever viewing the terms of a browsewrap agreement, or possibly even knowing they exist. As the Northern District of California’s decision in Alejandro Gutierrez v. FriendFinder Networks Inc. demonstrates, browsewrap agreements are not always unenforceable, but reaching such a determination can be a highly fact-specific inquiry requiring significant discovery—including discovery of offline activities, such as phone calls between the user and the online service provider.

AdultFriendFinder.com (AFF) is an online dating website. The website is generally free, although users can pay for particular upgrades and services. Users must register to use the site, and AFF collects users’ personal information as part of the registration process. Use of AFF is governed by the site’s Terms of Use (the Terms). Users don’t have to explicitly agree to the Terms in order to register or use AFF, but the Terms are readily available on the site, and they state that continued use of AFF constitutes acceptance. The Terms also include an arbitration provision.

As regular readers of Socially Aware already know, there are many potential traps for companies that use photographs or other content without authorization from the copyright owners. For example, companies have faced copyright infringement claims based on use of photos pulled from Twitter. Claims have even arisen from the common practice of embedding tweets on blogs and websites, and we have seen a flurry of stories recently about photographers suing celebrities for posting photos of themselves.

Now there is another potential source of liability: the appearance of murals in the background of photographs used in advertisements. In at least two recent cases, automotive companies have faced claims of copyright infringement from the creators of murals painted on buildings that appear in the backgrounds of ads.

Most recently, in the U.S. District Court for the Eastern District of Michigan, Mercedes-Benz sought a declaratory judgment that its photographs, taken in Detroit (with permits from the city) and later posted on Instagram, did not infringe the copyrights of three defendants whose murals appeared in the backgrounds of those photographs.

A recent Second Circuit decision makes clear that the safe harbor that social media and other Internet companies enjoy under Section 230 of the Communications Decency Act broadly applies to a wide variety of claims.

When you think about the Section 230 safe harbor, don’t just think defamation or other similar state law claims. Consider whether the claim—be it federal, state, local, or foreign—seeks to hold a party that publishes third-party content on the Internet responsible for publishing the content. If, after stripping it all down, this is the crux of the cause of action, the safe harbor should apply (absent a few statutory exclusions discussed below). The safe harbor should apply even if the party uses its discretion as a publisher in deciding how best to target its audience or to display the information provided by third parties.

In 2016, Facebook was sued by the estates of four U.S. citizens who died in terrorist attacks in Israel and by a fifth victim who narrowly survived but was grievously injured. The plaintiffs claimed that Facebook should be held liable under the federal Anti-Terrorism Act and the Justice Against Sponsors of Terrorism Act, which provide a private right of action against those who aid and abet acts of international terrorism, conspire in furtherance of acts of terrorism, or provide material support to terrorist groups. The plaintiffs also asserted claims arising under Israeli law.

A federal district court dismissed a case against supermodel Gigi Hadid for posting to Instagram a photo of herself that was taken by a paparazzo. The reason for the court’s decision was simple: The party claiming copyright ownership of the photo failed to register it with the U.S. Copyright Office, a prerequisite to filing a copyright infringement suit.