A new report from the U.S. Copyright Office suggests that Congress fine-tune the Digital Millennium Copyright Act (DMCA) by, among other things, altering the notice-and-takedown system that online platforms must follow to remain eligible for the safe harbor the DMCA affords them when third parties post infringing content. Read about the Copyright Office’s issues with the current takedown system.

Malwarebytes, an online filtering company, has asked the U.S. Supreme Court to grant certiorari in a case brought by one of its competitors, Enigma Software, which alleges that Malwarebytes committed a deceptive business practice, among other things, by flagging one of Enigma’s most popular offerings as a potential threat. The Ninth Circuit refused to dismiss the case, holding that Section 230(c)(2) of the Communications Decency Act did not insulate Malwarebytes from liability.

A new law in France would impose fines of up to $1.36 million on technology platforms that fail to remove terrorist or child pornography content within one hour of its being flagged, or that fail to remove hateful comments concerning topics such as gender or disability within 24 hours of being flagged.

Members of the Florida Bar who deliver targeted ads through social media must comply with that state bar’s more restrictive direct solicitation rules rather than its general advertising rules.

A law firm fired one of its Dallas-based employees after learning he had posted to his personal social media account a rant about businesses asking him to wear a mask to limit the spread of COVID-19. The rant included a threat to “show [his] Glock 21” handgun shooting range results to “the lame security guard outside of a ghetto store.”

The New Jersey Supreme Court’s Disciplinary Review Board (DRB) decided that John Robertelli, a Rivkin Radler lawyer, violated the state’s Rules of Professional Conduct when, in order to gather evidence while acting as counsel for the defendants in a personal injury case, he surreptitiously accessed the private Facebook account of the plaintiff, who Robertelli knew was represented by opposing counsel. The DRB also recommended that the New Jersey Supreme Court adopt a policy on using social media for discovery purposes. Read the guidelines the DRB suggested.

Big changes are afoot at Facebook, which has recently introduced Shops, allowing users to purchase products directly from businesses’ Facebook pages, and announced the addition of new features to Workplace, the company’s “enterprise-focused chat and video platform.”

China’s “internet police,” who coordinate online censorship, have become especially busy since the coronavirus outbreak.

Prompted by homicides precipitated by social media posts in which one group of teenagers sought to incite another, a Florida bill would allow law enforcement to charge juveniles with a misdemeanor for posting photos of themselves with firearms online.

In an effort to control the proliferation of “a broad range of online harms”—from cyberbullying to child exploitation—the UK government has chosen the communications watchdog Ofcom as its first pick for enforcing its plan requiring platforms to take “reasonable” measures to protect their users from those harms.

Two and a half years after the EU initiated an optional code of conduct on online hate speech, the percentage of flagged content that the participating platforms review within 24 hours has risen considerably.

Unlike the rest of the European Union, which has adopted an opt-in code of conduct to address the online hate-speech problem, Germany is proposing legislation that would impose hefty fines on social media platforms that fail to report illegal content such as posts that are related to terrorism or qualify as racial incitement. Read how much they risk having to pay.

As the demand for “aspirational” influencers gives way to a desire for “authenticity,” influencers who chronicled their COVID-19 coping efforts drew ire for privileged behavior, including decamping to vacation towns to sit out the quarantine, where they risked spreading the virus.

A federal district court in New York held that a photographer failed to state a claim against digital-media website Mashable for copyright infringement of a photo that Mashable embedded on its website by using Instagram’s application programming interface (API). The decision turned on Instagram’s terms of use.

Mashable initially sought a license from the plaintiff, a professional photographer named Stephanie Sinclair, to display a photograph in connection with an article the company planned to post on its website, mashable.com. The plaintiff refused Mashable’s offer, but Mashable nevertheless embedded the photograph on its website through the use of Instagram’s API.

Instagram’s terms of use state that users grant Instagram a sublicensable license to the content posted on Instagram, subject to Instagram’s privacy policy. Instagram’s privacy policy expressly states that content posted to “public” Instagram accounts is searchable by the public and available for others to use through the Instagram API. Continue Reading S.D.N.Y.: Public Display of Embedded Instagram Photo Does Not Infringe Copyright
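The mechanics of “embedding” matter to the court’s analysis: the publisher never hosts a copy of the photo, only markup that points at Instagram’s servers. The sketch below is a purely hypothetical illustration of that distinction (the function name and post URL are invented, and this is not Instagram’s or Mashable’s actual embed code):

```python
# Illustrative sketch only: "embedding" means the publisher's page carries
# markup that references the post on Instagram's servers; the image file
# itself is never copied to the publisher's own server.

def build_embed_html(post_url: str) -> str:
    """Return Instagram-style embed markup for a public post URL.

    The markup merely references the post; the image bytes are delivered
    by Instagram when a reader's browser renders the page.
    """
    return (
        f'<blockquote class="instagram-media" data-instgrm-permalink="{post_url}">'
        f'<a href="{post_url}">View this post on Instagram</a>'
        "</blockquote>"
        '<script async src="https://www.instagram.com/embed.js"></script>'
    )

# The publisher's page contains only a reference to the post, not the photo.
embed_markup = build_embed_html("https://www.instagram.com/p/EXAMPLE/")
```

Because the image is served by Instagram rather than the publisher, the sublicense that Instagram’s terms of use grant for public accounts becomes the pivotal question.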

Often lauded as the most important law for online speech, Section 230 of the Communications Decency Act (CDA) does not just protect popular websites like Facebook, YouTube and Google from defamation and other claims based on third-party content. It is also critically important to spyware and malware protection services that offer online filtration tools.

Section 230(c)(2) grants broad immunity to any interactive computer service that blocks content it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Under a plain reading of the statute, Section 230(c)(2) clearly offers broad protection. With respect to what the phrase “otherwise objectionable” was intended to capture, however, the protections are less clear. Continue Reading Computer Service Providers Face Implied Limits on CDA Immunity

A federal district court in Illinois allowed claims for vicarious and direct copyright infringement to proceed against an employee of the Chicago Cubs Baseball Club for retweeting a third-party tweet containing the plaintiff’s copyrighted material. Read the opinion.

Thinking of backing Biden in November? Would his unequivocal opinion on Section 230 of the Communications Decency Act affect your decision?

In an opinion important to platforms that monetize user-generated content, the U.S. Court of Appeals for the Ninth Circuit held the safe harbor provisions in §512(c) of the Digital Millennium Copyright Act (DMCA) did not exempt Zazzle—a company that creates and sells items such as t-shirts and mugs bearing images uploaded by users—from copyright liability for willfully infringing 35 separate copyrighted works. In one of this blog’s most popular posts ever, I explain how platforms that commercialize their user-generated content can reduce their risk.

The advertising industry and the U.S. Chamber of Commerce are encouraging California Attorney General Xavier Becerra to postpone the anticipated July enforcement of the California Consumer Privacy Act, citing the law’s complexity.

The estate of the late musician Prince successfully brought a copyright infringement claim against an individual who unofficially recorded and uploaded videos containing performances of copyrighted songs. According to a federal district court in Massachusetts, the videos do not qualify for the fair use exception to copyright infringement because the uploader/defendant “did not imbue Prince’s musical compositions with new meaning or add any of his own expression to the underlying works.” Read more of the court’s reasoning.

In a controversy as old as the Internet itself, Germans are debating whether social media users should be permitted to remain anonymous.

Actor Steven Seagal will pay more than $300,000 to resolve U.S. Securities and Exchange Commission claims that he failed to tell Twitter and Facebook followers he was being paid to promote an initial coin offering.

Twitter has a special process for reviewing tweets by public figures—including President Trump—that have been flagged for potentially violating the platform’s rules. This profile of Twitter’s top lawyer, Vijaya Gadde, describes it.

The Federal Trade Commission (FTC) appears to be using its ongoing review of current rules and guides to revisit its approach to driving home the message that the relationship between a social media “influencer” and the brand he or she is endorsing must be disclosed. As we have described previously, the FTC has interpreted its Guides Concerning the Use of Endorsements and Testimonials in Advertising (the “Endorsement Guides”) to require that online advertisements — like all other advertising — clearly and conspicuously disclose material connections between endorsers (i.e., influencers) and the brands they promote because such connections may affect the credibility of the endorsement. And, in recent years, the FTC has — through enforcement actions, press releases, guidance, closing letters, and letters sent directly to endorsers (including prominent public figures) — made clear its belief that: (1) appropriate disclosures by influencers are essential to protecting consumers; and (2) in too many instances, such disclosures are absent from celebrity or other influencer endorsements.

Now, in connection with a request for comments on the Endorsement Guides, FTC Commissioner Rohit Chopra has issued a scathing statement calling on the FTC to “take bold steps to safeguard our digital economy from lies, distortions, and disinformation.” In this regard, Commissioner Chopra suggests that the FTC’s efforts to date have not been effective in “deterring misconduct in the marketplace” relating to inauthentic and fake reviews, and that, in particular, elements of the Endorsement Guides should be codified as formal rules so that violators can be liable for civil penalties and damages under the FTC Act.

Commissioner Chopra has also asserted that the FTC should refocus its efforts on advertisers themselves, and not the influencers who promote their brands. According to the Commissioner, “when companies launder advertising by paying someone for a seemingly authentic endorsement or review, this is illegal payola,” and “companies paying for undisclosed influencer endorsements and reviews are not [being] held fully accountable for this illegal activity.” Seeking to aggressively penalize advertisers themselves would be a shift in emphasis for the FTC, as its recent efforts to combat inadequate disclosures in influencer advertising have focused on influencers. For example, the FTC recently produced a brochure detailing the responsibility of influencers “to make [required] disclosures, to be familiar with the Endorsement Guides, and to comply with laws against deceptive ads.” The FTC also brought an enforcement action against influencers, and foreshadowed that more enforcement will happen in the future.

Continue Reading Fake News & Paid Reviews: FTC Seeks Comments on its Endorsement Guides

New York courts are increasingly ordering the production of social media posts in discovery, including personal messages and pictures, if they shed light on pending litigation. Nonetheless, courts remain cognizant of privacy concerns, requiring parties seeking social media discovery to avoid broad requests akin to fishing expeditions.

In early 2018, in Forman v. Henkin, the New York State Court of Appeals laid out a two-part test to determine if someone’s social media should be produced: “first consider the nature of the event giving rise to the litigation and the injuries claimed . . . to assess whether relevant material is likely to be found on the Facebook account. Second, balanc[e] the potential utility of the information sought against any specific ‘privacy’ or other concerns raised by the account holder.”

The Court of Appeals left it to lower New York courts to struggle over the level of protection social media should be afforded in discovery. Since this decision, New York courts have begun to flesh out how to apply the Forman test.

In Renaissance Equity Holdings LLC v. Webber, former Bad Girls Club cast member Mercedes Webber, or “Benze Lohan,” was embroiled in a succession suit. Ms. Webber wanted to continue to live in her mother’s rent-controlled apartment after the death of her mother. To prevail, Ms. Webber had to show that she had lived at the apartment for at least two years prior to her mother’s death. Continue Reading Are Facebook Posts Discoverable? Application of the Forman Test in N.Y.

Every day, social media users upload millions of images to their accounts; 350 million photos are uploaded to Facebook alone each day. Many social media websites make users’ information and images available to anyone with a web browser. This wealth of public information is immensely valuable, and the practice of webscraping, in which third parties use bots to harvest public information from websites and monetize it, is increasingly common.
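As a rough illustration of what webscraping involves, the sketch below (a hypothetical example, not any actual scraper’s code) parses a page’s HTML and collects image URLs using only Python’s standard library; a real scraper would also fetch pages over HTTP, crawl links, throttle requests, and store results, and a site’s terms of use may prohibit the practice entirely:

```python
from html.parser import HTMLParser

class ImageScraper(HTMLParser):
    """Collects the src attribute of every <img> tag encountered in a page."""

    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs for each opening tag.
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

# In practice the HTML would come from an HTTP response; a literal stands in here.
page = '<html><body><img src="https://example.com/a.jpg" alt="photo"></body></html>'
scraper = ImageScraper()
scraper.feed(page)
# scraper.image_urls is now ['https://example.com/a.jpg']
```

Run at scale against profile pages, a loop like this is how a scraper can accumulate millions of publicly posted photos.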

The photographs on social media sites raise thorny issues because they feature individuals’ biometric data, a type of data that is essentially immutable and highly personal. Because of the heightened privacy concerns, collecting, analyzing and selling biometric data was long considered taboo by tech companies, at least until Clearview AI launched its facial recognition software.

Clearview AI’s Facial Recognition Database

In 2016, a developer named Hoan Ton-That began creating a facial recognition algorithm. In 2017, after refining the algorithm, Ton-That, along with his business partner Richard Schwartz (a former advisor to Rudy Giuliani), founded Clearview AI and began marketing its facial recognition software to law enforcement agencies. Clearview AI reportedly populates its photo database with publicly available images scraped from social media sites, including Facebook, YouTube, Twitter, and Venmo, among many others. The New York Times reported that the database has amassed more than three billion images. Continue Reading Clearview AI and the Legal Challenges Facing Facial Recognition Databases

Socially Aware contributors Alex Lawrence and Kristina Ehle authored an article for the Computer Law Review International discussing the impact of the hiQ Labs v. LinkedIn decision from the U.S. Court of Appeals for the Ninth Circuit, which held that automated scraping of publicly accessible data does not violate the Computer Fraud and Abuse Act.

“While some may interpret the LinkedIn decision as greenlighting [unauthorized webscraping], this would be a mistake,” the authors wrote. “On close review of the decision, and in light of other decisions that have held unauthorized webscrapers liable, the conduct remains vulnerable to legal challenge in the United States.”

The authors added that the court “expressed concern that LinkedIn sent the cease-and-desist letter because it planned to create a new product that competed with hiQ’s services, which the court held could raise concerns under California’s unfair competition laws,” and noted that, to avoid such claims under U.S. law, “unauthorized webscrapers should be addressed promptly before they free ride for years and build a business off your data.”

Read the full article.

On December 19, 2019, the Staff of the U.S. Securities and Exchange Commission’s Division of Corporation Finance issued guidance outlining the Staff’s views about disclosure obligations that companies should consider with respect to technology, data and intellectual property risks that could arise when operations take place outside the United States. Companies should consider this guidance when preparing risk factor and other disclosures included in upcoming periodic reports and registration statements.

Background

The Staff notes that the SEC’s principles-based disclosure regime recognizes that new risks may arise over time, affecting different companies in different ways. For companies that conduct business operations outside the United States, risks can arise for technology and intellectual property, particularly when operations take place in jurisdictions that do not provide protection comparable to that available in the United States. The Staff observes that companies may be exposed to material risks of “theft of proprietary technology and other intellectual property, including technical data, business processes, data sets or other sensitive information.” Exposure to such risks can be heightened when companies conduct business in some foreign jurisdictions, house technology, data and intellectual property abroad, or license technology to joint ventures with foreign partners. Continue Reading SEC Staff Issues Guidance on Technology, Data & IP Risks in International Operations