China’s “internet police,” who coordinate online censorship, have become especially busy since the coronavirus outbreak.

Prompted by homicides precipitated by social media posts in which one group of teenagers sought to incite another, a Florida bill would allow law enforcement to charge juveniles with a misdemeanor for posting photos of themselves with firearms online.

In an effort to control the proliferation of “a broad range of online harms”—from cyberbullying to child exploitation—the UK government has chosen the communications watchdog Ofcom to enforce its plan requiring platforms to take “reasonable” measures to protect their users from those harms.

Two-and-a-half years after the EU introduced its voluntary code of conduct on online hate speech, the percentage of flagged content that the participating platforms review within 24 hours has risen considerably.

Unlike the rest of the European Union, which relies on a voluntary code of conduct to address online hate speech, Germany is proposing legislation that would impose hefty fines on social media platforms that fail to report illegal content, such as posts related to terrorism or racial incitement. Read how much they risk having to pay.

As the demand for “aspirational” influencers gives way to a desire for “authenticity,” influencers who chronicled their COVID-19 coping efforts drew ire for privileged behaviors, including decamping to vacation towns to sit out the quarantine, where they risked spreading the virus.

A federal district court in New York held that a photographer failed to state a claim against digital-media website Mashable for copyright infringement of a photo that Mashable embedded on its website by using Instagram’s application programming interface (API). The decision turned on Instagram’s terms of use.

Mashable initially sought a license from the plaintiff, a professional photographer named Stephanie Sinclair, to display a photograph in connection with an article the company planned to post on its website, mashable.com. The plaintiff refused Mashable’s offer, but Mashable nevertheless embedded the photograph on its website using Instagram’s API.

Instagram’s terms of use state that users grant Instagram a sublicensable license to the content posted on Instagram, subject to Instagram’s privacy policy. Instagram’s privacy policy expressly states that content posted to “public” Instagram accounts is searchable by the public and available for others to use through the Instagram API. Continue Reading S.D.N.Y.: Public Display of Embedded Instagram Photo Does Not Infringe Copyright
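For readers curious about the mechanics, below is a minimal sketch of what “embedding a photo through Instagram’s API” involves, assuming the legacy public oEmbed endpoint and a placeholder post URL (both are illustrative assumptions, not details from the court’s opinion): the publisher requests ready-made embed markup from Instagram and drops it into its own page, while the photo itself continues to be hosted and served by Instagram.

```python
import requests

# Placeholder post URL; the unauthenticated oEmbed endpoint below reflects
# Instagram's legacy public API and is shown here purely for illustration.
POST_URL = "https://www.instagram.com/p/EXAMPLE/"
OEMBED_ENDPOINT = "https://api.instagram.com/oembed"

# Ask Instagram's API for ready-made embed markup for the post.
response = requests.get(OEMBED_ENDPOINT, params={"url": POST_URL}, timeout=10)
response.raise_for_status()
embed = response.json()

# The "html" field is a snippet the publisher pastes into its own page;
# the underlying photo remains hosted on Instagram's servers.
print(embed["html"])
```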

Often lauded as the most important law for online speech, Section 230 of the Communications Decency Act (CDA) does not just protect popular websites like Facebook, YouTube and Google from defamation and other claims based on third-party content. It is also critically important to spyware and malware protection services that offer online filtration tools.

Section 230(c)(2) grants broad immunity to any interactive computer service that blocks content it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Under a plain reading of the statute, Section 230(c)(2) clearly offers broad protection. With respect to what the phrase “otherwise objectionable” was intended to capture, however, the protections are less clear. Continue Reading Computer Service Providers Face Implied Limits on CDA Immunity

A federal district court in Illinois allowed claims for vicarious and direct copyright infringement to proceed against an employee of the Chicago Cubs Baseball Club for retweeting a third-party tweet containing the plaintiff’s copyrighted material. Read the opinion.

Thinking of backing Biden in November? Would his unequivocal opinion on Section 230 of the Communications Decency Act affect your decision?

In an opinion important to platforms that monetize user-generated content, the U.S. Court of Appeals for the Ninth Circuit held the safe harbor provisions in §512(c) of the Digital Millennium Copyright Act (DMCA) did not exempt Zazzle—a company that creates and sells items such as t-shirts and mugs bearing images uploaded by users—from copyright liability for willfully infringing 35 separate copyrighted works. In one of this blog’s most popular posts ever, I explain how platforms that commercialize their user-generated content can reduce their risk.

The advertising industry and the U.S. Chamber of Commerce are encouraging California Attorney General Xavier Becerra to postpone the anticipated July enforcement of the California Consumer Privacy Act, citing the law’s complexity.

The estate of the late musician Prince successfully brought a copyright infringement claim against an individual who unofficially recorded and uploaded videos containing performances of copyrighted songs. According to a federal district court in Massachusetts, the videos do not qualify for the fair use exception to copyright infringement because the uploader/defendant “did not imbue Prince’s musical compositions with new meaning or add any of his own expression to the underlying works.” Read more of the court’s reasoning.

In a controversy as old as the Internet itself, Germans are debating whether social media users should be permitted to remain anonymous.

Actor Steven Seagal will pay more than $300,000 to resolve U.S. Securities and Exchange Commission claims that he failed to tell Twitter and Facebook followers he was being paid to promote an initial coin offering.

Twitter has a special process for reviewing tweets by public figures—including President Trump—that have been flagged for potentially violating the platform’s rules. This profile of Twitter’s top lawyer, Vijaya Gadde, describes it.

The Federal Trade Commission (FTC) appears to be using its ongoing review of current rules and guides to revisit its approach to driving home the message that the relationship between a social media “influencer” and the brand he or she is endorsing must be disclosed. As we have described previously, the FTC has interpreted its Guides Concerning the Use of Endorsements and Testimonials in Advertising (the “Endorsement Guides”) to require that online advertisements — like all other advertising — clearly and conspicuously disclose material connections between endorsers (i.e., influencers) and the brands they promote because such connections may affect the credibility of the endorsement. And, in recent years, the FTC has — through enforcement actions, press releases, guidance, closing letters, and letters sent directly to endorsers (including prominent public figures) — made clear its belief that: (1) appropriate disclosures by influencers are essential to protecting consumers; and (2) in too many instances, such disclosures are absent from celebrity or other influencer endorsements.

Now, in connection with a request for comments on the Endorsement Guides, FTC Commissioner Rohit Chopra has issued a scathing statement calling on the FTC to “take bold steps to safeguard our digital economy from lies, distortions, and disinformation.” In this regard, Commissioner Chopra suggests that the FTC’s efforts to date have not been effective in “deterring misconduct in the marketplace” relating to inauthentic and fake reviews, and that, in particular, elements of the Endorsement Guides should be codified as formal rules so that violators can be liable for civil penalties and damages under the FTC Act.

Also of note is that Commissioner Chopra has asserted that the FTC should refocus its efforts on advertisers themselves, and not the influencers that promote their brands.  According to the Commissioner, “when companies launder advertising by paying someone for a seemingly authentic endorsement or review, this is illegal payola,” and “companies paying for undisclosed influencer endorsements and reviews are not [being] held fully accountable for this illegal activity.” Seeking to aggressively penalize advertisers themselves would be a shift in emphasis for the FTC, as its recent efforts to combat inadequate disclosures in influencer advertising have focused on influencers. For example, the FTC recently produced a brochure detailing the responsibility of influencers “to make [required] disclosures, to be familiar with the Endorsement Guides, and to comply with laws against deceptive ads.” The FTC also brought an enforcement action against influencers, and foreshadowed that more enforcement will happen in the future.

Continue Reading Fake News & Paid Reviews: FTC Seeks Comments on its Endorsement Guides

New York courts are increasingly ordering the production of social media posts in discovery, including personal messages and pictures, if they shed light on pending litigation. Nonetheless, courts remain cognizant of privacy concerns, requiring parties seeking social media discovery to avoid broad requests akin to fishing expeditions.

In early 2018, in Forman v. Henkin, the New York Court of Appeals laid out a two-part test to determine whether a party’s social media posts should be produced: “first consider the nature of the event giving rise to the litigation and the injuries claimed . . . to assess whether relevant material is likely to be found on the Facebook account. Second, balanc[e] the potential utility of the information sought against any specific ‘privacy’ or other concerns raised by the account holder.”

The Court of Appeals left it to lower New York courts to struggle over the level of protection social media should be afforded in discovery. Since this decision, New York courts have begun to flesh out how to apply the Forman test.

In Renaissance Equity Holdings LLC v. Webber, former Bad Girls Club cast member Mercedes Webber, or “Benze Lohan,” was embroiled in a succession suit. Ms. Webber wanted to continue to live in her mother’s rent-controlled apartment after the death of her mother. To prevail, Ms. Webber had to show that she had lived at the apartment for at least two years prior to her mother’s death. Continue Reading Are Facebook Posts Discoverable? Application of the Forman Test in N.Y.

Every day, social media users upload millions of images to their accounts; 350 million photos are uploaded to Facebook alone each day. Many social media websites make users’ information and images available to anyone with a web browser. The wealth of public information available on social media is immensely valuable, and the practice of webscraping—the use of bots by third parties to scrape public information from websites and monetize it—is increasingly common.
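To make the practice concrete, here is a minimal sketch of what a webscraping bot does, using a placeholder URL rather than any real social media site: it fetches a publicly accessible page the way a browser would and collects the image addresses it finds. Real scrapers must also contend with JavaScript-rendered pages, rate limits and the target site’s terms of service.

```python
import requests
from bs4 import BeautifulSoup

# Placeholder standing in for any publicly accessible profile page.
PAGE_URL = "https://example.com/public-profile"

# Fetch the page as a browser would; no login is needed for public content.
html = requests.get(PAGE_URL, timeout=10).text

# Parse the HTML and collect the address of every image on the page.
soup = BeautifulSoup(html, "html.parser")
image_urls = [img["src"] for img in soup.find_all("img") if img.get("src")]

print(f"Found {len(image_urls)} publicly available images")
```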

The photographs on social media sites raise thorny issues because they contain individuals’ biometric data—a type of data that is essentially immutable and highly personal. Because of the heightened privacy concerns, collecting, analyzing and selling biometric data was long considered taboo by tech companies—at least until Clearview AI launched its facial recognition software.

Clearview AI’s Facial Recognition Database

In 2016, a developer named Hoan Ton-That began creating a facial recognition algorithm. In 2017, after refining the algorithm, Ton-That and his business partner Richard Schwartz (a former advisor to Rudy Giuliani) founded Clearview AI and began marketing its facial recognition software to law enforcement agencies. Clearview AI reportedly populates its photo database with publicly available images scraped from social media sites, including Facebook, YouTube, Twitter and Venmo, among many others. The New York Times reported that the database has amassed more than three billion images. Continue Reading Clearview AI and the Legal Challenges Facing Facial Recognition Databases
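At a high level, a facial recognition database of the kind described above turns each scraped photo into a numerical “faceprint” and then ranks those stored vectors by similarity to a probe image. The sketch below illustrates only that matching step; the embed_face function is a hypothetical stand-in for a trained face-embedding model and bears no relation to Clearview AI’s actual system.

```python
import numpy as np

def embed_face(image_bytes: bytes) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model that maps a photo to a
    fixed-length vector (a "faceprint"). A real system would run a trained
    neural network here; this stub just returns a repeatable dummy vector."""
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)

# "Database" of faceprints built from scraped, publicly available photos
# (file names are placeholders).
database = {
    f"scraped_photo_{i}.jpg": embed_face(f"scraped_photo_{i}".encode())
    for i in range(1000)
}

def best_matches(probe_bytes: bytes, top_k: int = 5):
    """Rank the stored photos by cosine similarity to the probe image's faceprint."""
    probe = embed_face(probe_bytes)
    scores = {name: float(np.dot(vec, probe)) for name, vec in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(best_matches(b"uploaded probe image"))
```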

Socially Aware contributors Alex Lawrence and Kristina Ehle authored an article for the Computer Law Review International that discusses the impact of the hiQ Labs v. LinkedIn decision from the U.S. Court of Appeals for the Ninth Circuit, which holds that automated scraping of publicly accessible data does not violate the Computer Fraud and Abuse Act.

“While some may interpret the LinkedIn decision as greenlighting [unauthorized webscraping], this would be a mistake,” the authors wrote. “On close review of the decision, and in light of other decisions that have held unauthorized webscrapers liable, the conduct remains vulnerable to legal challenge in the United States.”

The authors added that the court “expressed concern that LinkedIn sent the cease-and-desist letter because it planned to create a new product that competed with hiQ’s services, which the court held could raise concerns under California’s unfair competition laws,” and noted that, to avoid such claims under U.S. law, “unauthorized webscrapers should be addressed promptly before they free ride for years and build a business off your data.”

Read the full article.

On December 19, 2019, the Staff of the U.S. Securities and Exchange Commission’s Division of Corporation Finance issued guidance outlining the Staff’s views about disclosure obligations that companies should consider with respect to technology, data and intellectual property risks that could arise when operations take place outside the United States. Companies should consider this guidance when preparing risk factor and other disclosures included in upcoming periodic reports and registration statements.

Background

The Staff notes that the SEC’s principles-based disclosure regime recognizes that new risks may arise over time, affecting different companies in different ways. For those companies that conduct business operations outside the United States, risks can arise for technology and intellectual property, particularly when operations take place in jurisdictions that do not provide protection comparable to that of the United States. The Staff observes that companies may be exposed to material risks of “theft of proprietary technology and other intellectual property, including technical data, business processes, data sets or other sensitive information.” Exposure to such risks can be heightened when companies conduct business in some foreign jurisdictions, house technology, data and intellectual property abroad, or license technology to joint ventures with foreign partners. Continue Reading SEC Staff Issues Guidance on Technology, Data & IP Risks in International Operations

In a move that may be part of a settlement YouTube has entered into with the Federal Trade Commission, the video-sharing site said it will ban “targeted” advertisements on videos likely to be watched by children. Because targeted ads rely on information collected about the platform’s users, displaying such ads to children younger than 13 without parental permission violates the Children’s Online Privacy Protection Act (COPPA). Until now, YouTube has avoided banning targeted ads on its primary site, arguing that children should only be using YouTube Kids, a site that is free of targeted ads.

Twitter announced plans to change several aspects of its platform. One of the new features the company is researching would allow users to control who—if anyone—may respond to their tweets. The feature, which grew out of Twitter’s desire to give users control over how far their tweets spread, should be available later this year. Read about other plans that the platform has in store.

In anticipation of the 2020 election, Facebook said it will remove deepfakes, heavily altered content likely to mislead its users, from its platform.

Supporters of Section 230 of the Communications Decency Act, which protects online platforms from liability for user-generated content, are playing defense again, this time against House Speaker Nancy Pelosi, who wants the statute’s language sheltering web companies from liability stripped from the United States’ trade pact with Mexico and Canada. Find out why.

New York State Governor Andrew Cuomo proposed legislation that would make it a crime for convicted sex offenders to misrepresent themselves online. It also would require sex offenders to disclose to the Division of Criminal Justice Services the screen names they use for each of their social media accounts, dating apps and gaming apps.

Tesla chief executive Elon Musk’s tweet using the phrase “pedo guy” to refer to a man who had insulted Musk during a television interview did not amount to defamation, a California federal court jury found. Learn the basis of the jury’s decision.

As we reported late last year, in an effort to protect users’ mental health, the social media platform Instagram is phasing out popularity metrics such as “likes.” With such popularity metrics invisible to users, follower engagement—which brands use to determine an influencer’s value—will be demonstrated mostly in the form of comments. Because comments on Instagram are largely driven by captions, one columnist argues, the quality of captions will be a major factor in determining which influencers continue to succeed on that platform, even though it is primarily visual.

Speaking of influencers, eight-year-old Ryan of Ryan’s World makes earning a living as an influencer look easy, having raked in $26 million in 2019 by posting videos like the one of him running around his garden to scoop up plastic eggs with toys inside them. But, the BBC reports, Ryan is something of an “outlier,” and “96.5% of YouTubers don’t make enough from advertising revenue alone to break the US poverty line.” Find out the names of the other top ten highest-earning influencers.