In Ganske v. Mensch, a defamation suit stemming from a “battle by Tweet,” a federal district court in New York held that the allegedly defamatory statements in the defendant’s Tweet were nonactionable statements of opinion and dismissed the case. The case illustrates that courts in such “Twibel” (Twitter + libel) cases may view Tweets and similar statements on social media as informal and “freewheeling” in nature, and therefore likely to be understood by reasonable readers as expressions of opinion rather than statements of fact.

Charles Ganske, a former Associated Press (AP) journalist, sued Louise Mensch, a blogger and former member of the British Parliament, for defamation and tortious interference. Ganske argued that Mensch defamed him and interfered with his employment at AP based on a single Tweet that she posted on July 27, 2018, by which she “interjected herself” into a Twitter thread between Ganske and another Twitter user with the handle @Conspirator0.

Mensch’s Tweet from her @patribotics Twitter account stated: “To this xenophobic tweet of yours, sir, I fear we must tell @APCentral ‘citation needed’. You clearly personally spread Russian bots on your own site; and @Conspirator0 work on it has sent you into a frenzy of tweeting and trying to discredit him.”

Ganske claimed that Mensch’s Tweet contained false and defamatory statements about him because neither he nor his Tweets were xenophobic and he never spread Russian bots on any website. He also alleged that Mensch deliberately tagged his employer, AP, and published the Tweet to @APCentral in order to interfere with his employment. Ganske’s employment with AP was later terminated, and Ganske argued that this was the result of Mensch’s Tweet.
Continue Reading S.D.N.Y. Dismisses Defamation Case Arising Out of “Battle by Tweet”

In Elliott v. Donegan, a federal district court in New York held that Section 230 of the Communications Decency Act does not warrant the dismissal of a defamation claim where the plaintiff’s complaint did not “foreclose[] the possibility that Defendant created or developed the allegedly unlawful content.” The content at issue was a “publicly accessible, shared Google spreadsheet” titled “Shitty Media Men” that the defendant Moira Donegan had started containing a list of men in the media business who had allegedly committed some form of sexual violence.

Donegan, a media professional, circulated the list via email and other electronic means to women in her industry in an effort to share information about “people to avoid.” The list included the headings “NAME, AFFILIATION, ALLEGED MISCONDUCT, and NOTES.” It also included a disclaimer at the top of the spreadsheet stating, “This document is only a collection of misconduct allegations and rumors. Take everything with a grain of salt.”

After Donegan first published the list, “Shitty Media Men” took on a life of its own. The spreadsheet went viral, and a flurry of allegations was swiftly submitted. Soon, 70 men were named and major media outlets were preparing to publish related stories. As a result, Donegan took the spreadsheet offline about 12 hours after she posted it.
Continue Reading EDNY Refuses to Dismiss on § 230 Grounds in “Shitty Media Men” Defamation Case

It is another win for social media platforms under Section 230 of the Communications Decency Act. In a case of first impression within the Third Circuit, the Eastern District of Pennsylvania ruled in Hepp v. Facebook that social media platforms are immune under the Communications Decency Act from state law right of publicity claims based on content posted by users of such platforms.

Karen Hepp, a television news anchor for FOX 29 News, filed a complaint against several social media platforms, including Facebook, Imgur, Reddit, and Giphy (collectively, “social media defendants”), alleging that the social media defendants violated Pennsylvania’s right of publicity statute and Hepp’s common law right of publicity, based on such defendants’ “unlawful use of her image.”

Two years before filing her complaint, Hepp discovered that a photograph of her was taken without her consent by a security camera in a New York City convenience store. The photograph was subsequently used in online advertisements for erectile dysfunction and dating websites. For example, Hepp’s photograph was featured: (a) on Imgur under the heading “milf,” and (b) on a Reddit post titled “Amazing” in the subgroup r/obsf (“older but still $#^able”). Hepp alleged that, as a public figure, she suffered harm from the unauthorized publication of her image on the platforms hosted by the social media defendants, but she did not allege that such defendants created, authored, or directly published the photograph at issue.

Continue Reading District Court in 3rd Circuit Sides with 9th Circuit: §230 Protects Social Platforms from State Law Intellectual Property Claims

Foreign websites that use geotargeted advertising may be subject to personal jurisdiction in the United States, even if they have no physical presence in the United States and do not specifically target their services to the United States, according to a new ruling from the Fourth Circuit Court of Appeals.

In UMG Recordings, Inc. v. Kurbanov, twelve record companies sued Tofig Kurbanov, who owns and operates the websites flvto.biz and 2conv.com. These websites enable visitors to rip audio tracks from videos on various platforms, like YouTube, and convert the audio tracks into downloadable files.

The record companies sued Kurbanov for copyright infringement and argued that a federal district court in Virginia had specific personal jurisdiction over Kurbanov because of his contacts with Virginia and with the United States more generally. Kurbanov moved to dismiss for lack of personal jurisdiction, and the district court granted his motion.
Continue Reading Stretching the Bounds of Personal Jurisdiction, 4th Circuit Finds Geotargeted Advertising May Subject Foreign Website Owner to Personal Jurisdiction in the U.S.

In a purported attempt to safeguard free speech, President Trump has issued an executive order, “Preventing Online Censorship,” that would eliminate the protections afforded by one of our favorite topics here at Socially Aware: Section 230 of the Communications Decency Act, which generally protects online platforms from liability for content posted by third parties.

Online service providers typically seek to mitigate risk by including arbitration clauses in their user agreements. In order for such agreements to be effective, however, they must be implemented properly. Babcock v. Neutron Holdings, Inc., a recent Southern District of Florida case involving a plaintiff who was injured while riding one of the defendant’s Lime e-scooters, illustrates that courts will closely scrutinize the details of how an online contract is presented to users to determine whether it is enforceable.
Continue Reading Sweating the Details: Court Analyzes User Interface to Uphold Online Arbitration Clause

A federal district court in New York held that a photographer failed to state a claim against digital-media website Mashable for copyright infringement of a photo that Mashable embedded on its website by using Instagram’s application programming interface (API). The decision turned on Instagram’s terms of use.

Mashable initially sought a license from the plaintiff, a professional photographer named Stephanie Sinclair, to display a photograph in connection with an article the company planned to post on its website, mashable.com. The plaintiff refused Mashable’s offer, but Mashable nevertheless embedded the photograph on its website through the use of Instagram’s API.

Instagram’s terms of use state that users grant Instagram a sublicensable license to the content posted on Instagram, subject to Instagram’s privacy policy. Instagram’s privacy policy expressly states that content posted to “public” Instagram accounts is searchable by the public and available for others to use through the Instagram API.
Continue Reading S.D.N.Y.: Public Display of Embedded Instagram Photo Does Not Infringe Copyright

Often lauded as the most important law for online speech, Section 230 of the Communications Decency Act (CDA) does not just protect popular websites like Facebook, YouTube and Google from defamation and other claims based on third-party content. It is also critically important to spyware and malware protection services that offer online filtration tools.

Section 230(c)(2) grants broad immunity to any interactive computer service that blocks content it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” On a plain reading, the statute sweeps broadly; what the phrase “otherwise objectionable” was intended to capture, however, is less clear.
Continue Reading Computer Service Providers Face Implied Limits on CDA Immunity

New York courts are increasingly ordering the production of social media posts in discovery, including personal messages and pictures, if they shed light on pending litigation. Nonetheless, courts remain cognizant of privacy concerns, requiring parties seeking social media discovery to avoid broad requests akin to fishing expeditions.

In early 2018, in Forman v. Henkin, the New York Court of Appeals laid out a two-part test to determine if someone’s social media should be produced: “first consider the nature of the event giving rise to the litigation and the injuries claimed . . . to assess whether relevant material is likely to be found on the Facebook account. Second, balanc[e] the potential utility of the information sought against any specific ‘privacy’ or other concerns raised by the account holder.”

The Court of Appeals left it to lower New York courts to struggle over the level of protection social media should be afforded in discovery. Since this decision, New York courts have begun to flesh out how to apply the Forman test.

In Renaissance Equity Holdings LLC v. Webber, former Bad Girls Club cast member Mercedes Webber, or “Benze Lohan,” was embroiled in a succession suit. Ms. Webber wanted to continue to live in her mother’s rent-controlled apartment after the death of her mother. To prevail, Ms. Webber had to show that she had lived at the apartment for at least two years prior to her mother’s death.
Continue Reading Are Facebook Posts Discoverable? Application of the Forman Test in N.Y.

Every day, social media users upload millions of images to their accounts; 350 million photos are uploaded to Facebook alone. Many social media websites make users’ information and images available to anyone with a web browser. The wealth of public information available on social media is immensely valuable, and the practice of webscraping—third parties using bots to scrape public information from websites to monetize the information—is increasingly common.
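For readers unfamiliar with the mechanics, a scraper of this kind is conceptually simple: a bot fetches a page’s HTML and extracts the data it wants, such as image URLs. The minimal sketch below, written in Python against a hardcoded sample page (the example.com URLs and markup are hypothetical), illustrates the extraction step; a real scraper would first retrieve the HTML over HTTP, typically at scale across many profiles.

```python
from html.parser import HTMLParser


class ImageScraper(HTMLParser):
    """Collects the src attribute of every <img> tag encountered on a page."""

    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes.
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.image_urls.append(value)


# Hypothetical HTML standing in for a public profile page; a real bot
# would fetch this over HTTP (e.g., via urllib or a headless browser).
sample_html = """
<html><body>
  <img src="https://example.com/photos/profile1.jpg" alt="user photo">
  <p>Posted by @someuser</p>
  <img src="https://example.com/photos/profile2.jpg" alt="vacation">
</body></html>
"""

scraper = ImageScraper()
scraper.feed(sample_html)
print(scraper.image_urls)
```

Run against the sample page, the scraper collects the two image URLs. It is exactly this ease of bulk collection from publicly viewable pages that makes the legal questions around scraped biometric data, discussed below, so consequential.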

The photographs on social media sites raise thorny issues because they feature individuals’ biometric data—a type of data that is essentially immutable and highly personal. Because of the heightened privacy concerns, collecting, analyzing and selling biometric data was long considered taboo by tech companies — at least until Clearview AI launched its facial recognition software.

Clearview AI’s Facial Recognition Database

In 2016, a developer named Hoan Ton-That began creating a facial recognition algorithm. In 2017, after refining the algorithm, Ton-That, along with his business partner Richard Schwartz (former advisor to Rudy Giuliani), founded Clearview AI and began marketing its facial recognition software to law enforcement agencies. Clearview AI reportedly populates its photo database with publicly available images scraped from social media sites, including Facebook, YouTube, Twitter, and Venmo, among many others. The New York Times reported that the database has amassed more than three billion images.
Continue Reading Clearview AI and the Legal Challenges Facing Facial Recognition Databases