In a purported attempt to safeguard free speech, President Trump has issued an executive order, “Preventing Online Censorship,” that would eliminate the protections afforded by one of our favorite topics here at Socially Aware: Section 230 of the Communications Decency Act, which generally protects online platforms from liability for content posted by third parties.

Online service providers typically seek to mitigate risk by including arbitration clauses in their user agreements. For such agreements to be effective, however, they must be implemented properly. Babcock v. Neutron Holdings, Inc., a recent Southern District of Florida case involving a plaintiff who was injured while riding one of the defendant’s Lime e-scooters, illustrates that courts will closely scrutinize the details of how an online contract is presented to users in determining whether it is enforceable.
Continue Reading Sweating the Details: Court Analyzes User Interface to Uphold Online Arbitration Clause

A federal district court in New York held that a photographer failed to state a claim against digital-media website Mashable for copyright infringement of a photo that Mashable embedded on its website by using Instagram’s application programming interface (API). The decision turned on Instagram’s terms of use.

Mashable initially sought a license from the plaintiff, a professional photographer named Stephanie Sinclair, to display a photograph in connection with an article the company planned to post on its website, mashable.com. The plaintiff refused Mashable’s offer, but Mashable nevertheless embedded the photograph on its website through the use of Instagram’s API.

Instagram’s terms of use state that users grant Instagram a sublicensable license to the content posted on Instagram, subject to Instagram’s privacy policy. Instagram’s privacy policy expressly states that content posted to “public” Instagram accounts is searchable by the public and available for others to use through the Instagram API.
Continue Reading S.D.N.Y.: Public Display of Embedded Instagram Photo Does Not Infringe Copyright

Often lauded as the most important law for online speech, Section 230 of the Communications Decency Act (CDA) does not just protect popular websites like Facebook, YouTube and Google from defamation and other claims based on third-party content. It is also critically important to spyware and malware protection services that offer online filtration tools.

Section 230(c)(2) grants broad immunity to any interactive computer service that blocks content it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Under a plain reading of the statute, Section 230(c)(2) offers sweeping protection. What the phrase “otherwise objectionable” was intended to capture, however, is far less clear.
Continue Reading Computer Service Providers Face Implied Limits on CDA Immunity

New York courts are increasingly ordering the production of social media posts in discovery, including personal messages and pictures, if they shed light on pending litigation. Nonetheless, courts remain cognizant of privacy concerns, requiring parties seeking social media discovery to avoid broad requests akin to fishing expeditions.

In early 2018, in Forman v. Henkin, the New York State Court of Appeals laid out a two-part test to determine if someone’s social media should be produced: “first consider the nature of the event giving rise to the litigation and the injuries claimed . . . to assess whether relevant material is likely to be found on the Facebook account. Second, balanc[e] the potential utility of the information sought against any specific ‘privacy’ or other concerns raised by the account holder.”

The Court of Appeals left it to lower New York courts to struggle over the level of protection social media should be afforded in discovery. Since this decision, New York courts have begun to flesh out how to apply the Forman test.

In Renaissance Equity Holdings LLC v. Webber, former Bad Girls Club cast member Mercedes Webber, or “Benze Lohan,” was embroiled in a succession suit. Ms. Webber wanted to continue to live in her mother’s rent-controlled apartment after her mother’s death. To prevail, Ms. Webber had to show that she had lived at the apartment for at least two years prior to her mother’s death.
Continue Reading Are Facebook Posts Discoverable? Application of the Forman Test in N.Y.

Every day, social media users upload millions of images to their accounts; 350 million photos are uploaded to Facebook alone each day. Many social media websites make users’ information and images available to anyone with a web browser. The wealth of public information available on social media is immensely valuable, and the practice of web scraping, in which third parties use bots to harvest public information from websites and monetize it, is increasingly common.

The photographs on social media sites raise thorny issues because they feature individuals’ biometric data, a type of data that is essentially immutable and highly personal. Because of these heightened privacy concerns, collecting, analyzing and selling biometric data was long considered taboo by tech companies, at least until Clearview AI launched its facial recognition software.

Clearview AI’s Facial Recognition Database

In 2016, a developer named Hoan Ton-That began creating a facial recognition algorithm. In 2017, after refining the algorithm, Ton-That, along with his business partner Richard Schwartz (a former advisor to Rudy Giuliani), founded Clearview AI and began marketing its facial recognition software to law enforcement agencies. Clearview AI reportedly populates its photo database with publicly available images scraped from social media sites, including Facebook, YouTube, Twitter, Venmo, and many others. The New York Times reported that the database has amassed more than three billion images.
Continue Reading Clearview AI and the Legal Challenges Facing Facial Recognition Databases

A random Twitter account tags a Japanese company and badmouths it in a series of tweets. Because the tweets are tagged, a search of the company’s name on Twitter will display the tweets with the negative comments among the search results. Upset over the tweets, the Japanese company wants to sue the tweeter in Japan. But how can it? The tweeter has not used his real name.

This is where discovery under 28 U.S.C. § 1782 can help. Section 1782 provides a vehicle for companies or individuals seeking U.S. discovery in aid of foreign litigation—even if the litigation is merely contemplated and not yet commenced. Specifically, Section 1782 provides that a federal district court may grant an applicant the authority to issue subpoenas in the United States to obtain documents or testimony, including documents or testimony seeking to unmask an anonymous Internet poster to pursue defamation claims abroad.

To pursue Section 1782 discovery, an applicant needs to establish:

  • that the requested discovery is for use in an actual or contemplated proceeding in a foreign or international tribunal;
  • that the applicant is an “interested person” in that proceeding; and
  • that the person from whom the discovery is sought resides or is found in the district of the court where the applicant is making the application.


Continue Reading Foreign Companies Can Use 28 U.S.C. § 1782 to Unmask Anonymous Internet Posters

A recent decision from a federal court in New York highlights the limits of the protections social media users enjoy under Section 230 of the Communications Decency Act (CDA). The case involves Joy Reid, the popular host of MSNBC’s AM Joy who has more than two million Twitter and Instagram followers, and the interaction between a young Hispanic boy and a “Make America Great Again” (MAGA)–hat wearing woman named Roslyn La Liberte at a Simi Valley, California, City Council meeting.

The case centers on a single re-tweet by Reid and two of her Instagram posts.

Here is Reid’s re-tweet.

It says: “You are going to be the first deported” “dirty Mexican” Were some of the things they yelled at this 14 year old boy. He was defending immigrants at a rally and was shouted down.

Spread this far and wide this woman needs to be put on blast.

Here is Reid’s first Instagram post from the same day.

It says: joyannreid He showed up to a rally to defend immigrants. … She showed up too, in her MAGA hat, and screamed, “You are going to be the first deported” … “dirty Mexican!” He is 14 years old. She is an adult. Make the picture black and white and it could be the 1950s and the desegregation of a school. Hate is real, y’all. It hasn’t even really gone away.
Continue Reading The Joys and Dangers of Tweeting: A CDA Immunity Update

For the last twenty years, the music industry has been in a pitched battle to combat unauthorized downloading of music. Initially, the industry focused on filing lawsuits to shut down services that offered peer-to-peer or similar platforms, such as Napster, Aimster and Grokster. For a time, the industry also filed claims against individual infringers to dissuade others from engaging in similar conduct. Recently, the industry has shifted gears and begun to focus on Internet Service Providers (ISPs), which provide Internet connectivity to their users.

The industry’s opening salvo against ISPs was launched in 2014, when BMG sued Cox Communications, an ISP with over three million subscribers. BMG’s allegations were relatively straightforward: Cox’s subscribers were engaged in rampant unauthorized copying of musical works using Cox’s internet service, and Cox did not do enough to stop it. While the DMCA provides safe harbors for an ISP that takes appropriate action against “repeat infringers,” BMG alleged that Cox could not avail itself of this safe harbor because it had failed to police its subscribers.
Continue Reading Will the Music Industry Continue To Win Its Copyright Battle Against ISPs?