In late October, Facebook announced that it would change its name to Meta, signaling a shift of the social media giant’s focus toward the metaverse, a virtual space where social media, gaming, augmented reality, virtual reality, and cryptocurrencies converge and allow people to interact virtually with one another. It’s a relatively new concept in technology with many different definitions and interpretations. Facebook CEO Mark Zuckerberg made the announcement at the company’s annual Connect conference that brings together virtual and augmented reality developers, content creators, marketers, and other technologists to explore the industry’s growth and expansion.

In addition to the name change, Facebook also announced in early November that it would be shutting down its face recognition system, which close to a third of all Facebook users use. This change in technology policy represents a seismic shift for the company and for the larger social media landscape, reflecting ongoing conversations in the industry about data privacy and access to personal information.

Just in time for the holiday shopping season: Pinterest launches its own shopping network

In late October, image sharing social media giant Pinterest announced the launch of Pinterest TV, a digital destination where users and subscribers can interact with Pinterest creators selling their goods and creations. The channel features original episodes from “Pinners” in livestream sessions that are recorded and can be accessed after broadcast in an on-demand archive.

COVID-19 fueled a meteoric increase in both online and live streaming shopping activities delivered from traditional television channels such as QVC and Home Shopping Network as well as internet companies such as YouTube, TikTok, Snapchat, and Amazon. Axios reported in September that the live video shopping format is poised for exponential growth, while Coresight Research, a retail and tech analysis firm, predicted that livestream e-commerce could see growth to $25 billion in sales within the next two years.

News anchor’s right of publicity claim survives under the Communications Decency Act’s intellectual property exception; Third Circuit limits internet publisher immunity

Karen Hepp, a well-known Philadelphia television journalist, was photographed in 2017 by a New York City convenience store camera without her knowledge or consent. That photograph later found its way into a variety of online advertisements ranging from dating services to erectile dysfunction products.

Hepp sued Facebook, Reddit, and Imgur in the United States District Court for the Eastern District of Pennsylvania, claiming that each platform violated Pennsylvania’s statutory and common law rights of publicity. The district court dismissed the case. On appeal, however, the United States Court of Appeals for the Third Circuit held that Hepp’s statutory right of publicity claim against Facebook may proceed, reasoning that claims under state intellectual property laws, including the right of publicity, fall within the intellectual property exception to publisher immunity under Section 230 of the Communications Decency Act of 1996 (CDA). Read the full opinion in Hepp v. Facebook, No. 20-2725 (3d Cir. 2021).

Algorithms under fire: proposed bill could alter Section 230 of the Communications Decency Act

House Energy and Commerce Committee Chairman Frank Pallone, Democratic Representative from New Jersey, is sponsoring the Justice Against Malicious Algorithms Act (JAMA), a bill that would modify Section 230 of the Communications Decency Act of 1996 (CDA), which protects websites from liability over content posted by their users. The bill seeks to strip immunity from an online platform if it “knowingly or recklessly uses a personalized algorithm to recommend content that materially contributes to physical or severe emotional injury,” according to Axios. The bill does not apply to platforms with fewer than five million unique monthly visitors or users.

Some analysts don’t think the bill will get past the House of Representatives, but even its introduction sends a strong message to social media platforms to adopt more stringent content policies rather than rely solely on self-regulation.

LinkedIn’s presence in China to change amid censorship concerns

LinkedIn, the professional networking giant, recently announced that it will retire its Chinese version and replace it with a new job board called “InJobs” to connect China-based professionals with employers. LinkedIn acknowledges that operating in China presents challenges in compliance with the country’s requirements.

LinkedIn recently came under fire for blocking U.S. journalists’ profiles in China. One such journalist, Bethany Allen-Ebrahimian, pressed the platform about its censorship activities in China.

This unfolding story raises ongoing questions about Western technology companies and the roles they play in China, especially in their interplay with the Chinese government and its strict censorship laws and regulations.

Over the past several years, Section 230 of the Communications Decency Act, the federal law that provides social media platforms with immunity from liability for user content and was once hailed as “the law that gave us the modern Internet,” has gone from relative obscurity (at least outside of tech circles) to being a household name and politicians’ favorite punching bag. Interestingly, the objections to Section 230 come from advocates on both sides of the aisle. Those on both the left and the right see the law as permitting platforms to maintain content moderation policies that result in significant social ills—they just tend to disagree about what those social ills are, and thus what those content moderation policies ought to be.

Generally speaking, advocates on the left see platforms as being too permissive in allowing misinformation to run rampant and blame that spread of misinformation for everything from Trump’s presidential win to, earlier in President Biden’s term, low COVID vaccination rates. Many on the left also argue that Twitter and Facebook should have banned then-President Trump from their platforms long before they did so in the wake of the January 6 insurrection. By contrast, those on the right view platforms as too restrictive when it comes to content moderation. As they see it, “Big Tech” is a puppet of Democrats and acts on their orders, systematically “censoring” conservative speakers and content as part of a larger liberal conspiracy. These claims from Republicans only increased after Trump finally was banned after January 6.

In short, no one has been happy with how social media platforms self-regulate and both sides believe that, if only Section 230 were amended or even eliminated entirely, their preferred policies would be put in place and the content they (dis)favor would finally be dealt with correctly.

Yet while dozens of pieces of legislation altering Section 230 have been introduced in both houses over the past few years, only one has become law—FOSTA-SESTA. FOSTA was a direct response to online personal ads appearing on Backpage.com and the belief that Section 230 insulated the site from liability for facilitating sex trafficking. It turns out that belief was false; Backpage was brought down without FOSTA and with Section 230 alive and well, but that’s a different story for a different day. The point for present purposes is that while there has been much hand wringing about Section 230 at the federal level, there has been much less action.

There has been much more legislative action at the state level, however, particularly in conservative states. The first to enact an anti-“Big Tech,” anti-Section 230 law was Florida, with Senate Bill 7072. A federal district court enjoined most of that law back in June, a day before it was set to go into effect, holding that it was preempted in large part by Section 230 and was an unconstitutional restriction on platforms’ speech. Florida appealed the court’s ruling and we will be watching closely to see how the Eleventh Circuit rules in the appeal (Plaintiff’s response was submitted on November 8).

Our focus in this post is on what came next: Texas House Bill 20 (HB 20), which was enacted in September and which Texas Governor Abbott described as targeting “a dangerous movement by social media companies to silence conservative viewpoints and ideas.” The law was set to go into effect on December 2, but on November 29, a federal district court in Austin heard oral arguments on its (un)constitutionality. On December 1, the court enjoined every part of it that plaintiffs NetChoice and the Computer & Communications Industry Association challenged.

HB 20

The Short Version

HB 20 (1) prevents social media companies with more than 50 million monthly active users from “censoring” users on the basis of viewpoint; (2) prohibits email service providers from “impeding the transmission” of emails based on content (including at least some spam); (3) requires covered platforms to have a complaint procedure for users who disagree with the platforms’ removal of users’ content based on a determination that the content violated the platforms’ acceptable use policies; and (4) requires covered platforms to make a number of disclosures.

The Long Version

  • (1) Must Carry

One of the most controversial provisions of HB 20 prohibits covered platforms (i.e., those with over 50 million monthly active users) from “censoring” on the basis of user viewpoint, user expression, or the ability of a user to receive the expression of others. (It also bans “censorship” on the basis of a user’s geographic location in Texas.) “Censor” is defined as to “block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression.” In essence, content moderation is censorship. There are, though, four exceptions. Platforms may “censor” content that:

  • (a) the platform is “specifically authorized to censor by federal law”;
  • (b) an organization with the purpose of preventing the sexual exploitation of children or protecting survivors of sexual abuse from ongoing harassment requests the removal of (note that these platforms cannot remove the content if the actual subject of that harassment makes the request);
  • (c) directly incites criminal activity or consists of specific threats of violence targeted against a person or group on the basis of race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge; or
  • (d) is unlawful.

Given this, HB 20 would prohibit a social media platform from demoting the content of white supremacists unless it was effectively illegal as incitement or a true threat. Hate speech, which is generally protected by the First Amendment, could not be removed.

  • (2) Bans email service providers from “impeding the transmission” of another party’s emails and creates a private right of action for those affected

For reasons that are frankly hard to understand, HB 20 prohibits email service providers from “impeding the transmission” of another person’s emails based on the content of the messages unless the provider has a good faith, reasonable belief that the message contains malicious computer code, obscene material, material depicting sexual conduct, or material that violates other laws, or unless the provider is authorized to block the transmission under other applicable state or federal law. The law disincentivizes email service providers from blocking spam on the basis of these exceptions, though, by providing a private right of action to those “injured by a violation of this [provision] arising from the transmission of an unsolicited or commercial electronic mail message” and entitling them to statutory damages of up to $25,000 for each day their message is unlawfully impeded.

HB 20 defines neither “impeding” nor “transmission,” but it is hard to imagine an interpretation that doesn’t conflict with the protections granted to these service providers by Section 230(c)(2)(A) and 230(c)(2)(B). Section 230(c)(2) prevents the imposition of civil liability on internet service providers on account of (A) “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected” or (B) “any action taken to enable or make available to [users] the technical means to restrict access to” those same materials. None of us wants our inboxes filled with spam, and, mercifully, Section 230(c)(2) encourages email service providers to help prevent that from happening. Unfortunately, Plaintiffs did not challenge this provision, so the court did not address whether it would be preempted by Section 230.

  • (3) and (4) Disclosures and a User Complaint Procedure

HB 20 imposes three transparency obligations on covered platforms. They must: (a) disclose a laundry list of information about their business practices, including specific and detailed information about their content moderation policies; (b) have and make accessible an acceptable use policy that includes a number of elements, including a complaint intake mechanism (an earlier draft required platforms to provide a toll-free phone number users could call to complain); and (c) release biannual transparency reports.

As for the complaint intake mechanism, covered platforms are required to abide by a potentially burdensome process for removing user content that violates their acceptable use policies. For starters, when a platform removes content (so long as the user can be located and the content is not part of an ongoing legal investigation), the company must: (a) notify the user and explain the reason for removal; (b) allow the user to appeal the decision; and (c) provide written notice to the user regarding the determination of the user’s appeal. If a user appeals the platform’s determination, the platform has 14 days (excluding weekends) to review the content again and notify the user of its determination. It is unclear how specific the explanation that the platform provides needs to be, but in any event this process seems likely to impose significant administrative burdens on platforms. This, in turn, could significantly discourage platforms from removing user content. While the conservative champions of the law may see that as precisely the goal, it would likely cause platforms to become less pleasant and welcoming to users. And, of course, Section 230 was enacted at least partly to encourage “computer service providers to establish standards of decency without risking liability for doing so,” as the Second Circuit noted in Domen v. Vimeo. Put another way, HB 20 appears to have been enacted to discourage exactly what Section 230 sought to encourage.

The Court’s Order

Somewhat surprisingly, the district court did not reach the question of whether HB 20 is preempted, in whole or in part, by Section 230. Unlike in the Florida case mentioned above, here the court enjoined the law entirely on the basis of the First Amendment.

Platforms Exercise Constitutionally Protected Editorial Discretion

Step One: If an entity exercises editorial discretion in choosing what to publish, the government cannot compel it to publish other content

The district court cited the Supreme Court’s three big compelled speech cases—Tornillo, Hurley, and PG&E—finding them to jointly “stand for the general proposition that private companies that use editorial judgment to choose whether to publish content—and, if they do publish content, use editorial judgment to choose what they want to publish—cannot be compelled by the government to publish other content.”

(As a quick reminder, in Miami Herald Pub. Co. v. Tornillo the Court struck down a “right of reply” law that required newspapers to publish a candidate’s reply if the paper criticized her. In Hurley v. Irish-Am. Gay, Lesbian & Bisexual Group of Boston, the Court held that a private organization had the right to exclude a gay rights group from its parade even in the face of a state antidiscrimination law that would have required the organization to permit the gay rights group’s participation. In Pacific Gas & Electric v. Public Utilities Commission of California, the Court found unconstitutional California’s requirement that a public utility company include a third-party newsletter in addition to its own newsletter in the envelopes in which it sent bills to its customers.)

Step Two: The covered platforms exercise editorial discretion

The court then moved on to conclude that the covered platforms engage in protected editorial discretion.

The parties disputed whether the platforms were more like newspapers and therefore entitled to a higher level of protection for their speech (Plaintiffs’ position) or common carriers, which, in their capacity as mere conduits of the speech of others, have been required to provide access on nondiscriminatory terms without raising First Amendment concerns (Defendant’s position). To support its position, Texas pointed to HB 20’s own pronouncement that the covered platforms were common carriers. The district court, though, was unmoved, and made clear that wishing does not make it so. “This Court starts from the premise that social media platforms are not common carriers.” At oral arguments, Texas took the position that, “the common carriage doctrine is essential to the First Amendment challenge . . . It dictates the rest of this suit in terms of the First Amendment inquiry.” Given the state’s position, the court acknowledged that it could have stopped there but decided not to, “in order to make a determination about whether social media platforms exercise editorial discretion or occupy a purgatory between common carrier and editor.”

The court found it straightforward that platforms exercise editorial discretion. They “curate both users and content to convey a message about the type of community the platform seeks to foster and, as such, exercise editorial discretion over their platform’s content.” The court pointed out that HB 20 itself was premised on the position that the covered platforms exercise just such discretion when they “silence conservative viewpoints and ideas.” In reaching its determination, the court also dealt with a claim made by both the state and a number of commentators: namely, that the platforms cannot be speaking when they curate their content because they use algorithms to do so. The court (rightly) called this a “distraction.”

HB 20 Violates Platforms’ First Amendment Rights

With the above analysis in place, the court went on to systematically address each of HB 20’s challenged provisions (i.e., every provision but the one prohibiting email service providers from “impeding the transmission” of third-party content).

The Must Carry Provision

  • The court returned to Tornillo, Hurley, and PG&E to determine that HB 20 impermissibly compels platforms’ speech. The statute targets the platforms’ editorial judgments, compelling them to “balance” their own judgments by including speech they otherwise would not, which was “precisely” what Tornillo, Hurley, and PG&E found unconstitutional.
  • The court also held that HB 20 requires platforms to change their speech. HB 20 prohibits “censorship” and thereby prohibits platforms from removing or including the content and users that they want. Citing Hurley, the court found this to “require[] platforms to ‘alter the expressive content of their [message].’” In a footnote, the court distinguished the social media platforms’ situation from those present in two cases where the Court rejected appellants’ attempts to invoke Tornillo: Pruneyard Shopping Center v. Robins (in which the Supreme Court upheld against a First Amendment challenge a California constitutional provision that compelled a private shopping center to allow a group onto its property to gather signatures for a petition, even though the shopping center’s policy forbade such activity) and Rumsfeld v. Forum for Academic & Institutional Rights (in which the Supreme Court held that a law requiring law schools to give military recruiters the same access as nonmilitary recruiters or lose certain federal funds did not compel the schools to speak, even though the schools disagreed with the military’s “Don’t Ask, Don’t Tell” policy.)
  • The statute also impermissibly burdens the platforms’ speech by specifying how they may arrange user content and potentially prohibits them from appending disclaimers onto posts.
  • The threat of lawsuits for violating the must carry provision also chills platforms’ speech rights.

Disclosure and User Complaint Procedure Provisions

  • The court found both the disclosure and user-complaint procedure requirements “inordinately burdensome given the unfathomably large number of posts on these sites and apps.” The court cited some staggering numbers. In three months in 2021, Facebook removed 8.8 million pieces of “bullying and harassment content,” 9.8 million pieces of organized “hate content,” and 25.2 million pieces of “hate speech content.” In three months in 2021, YouTube removed 1.16 billion comments. The court recognized that it would simply be impossible for these companies to operate if every removal were appealable.
  • In addition, the disclosure requirements compel speech.
  • The requirements also chill protected speech, as do the consequences of noncompliance.

HB 20 Discriminates on the Basis of Content and Speaker

The court was not yet finished with HB 20, also finding that the law discriminates unconstitutionally on the basis of content and speaker.

  • HB 20 permits platforms to enforce their own content moderation policies for content that “directly incites criminal activity or consists of specific threats of violence targeted against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge.” Citing R.A.V. v. City of St. Paul, Minnesota, the court agreed with Plaintiffs that the State has “‘no legitimate reason to allow the platforms to enforce their policies over threats based only on . . . favored criteria but not’ other criteria like sexual orientation, military service, or union membership.”
  • Only platforms with over 50 million monthly active users are covered under HB 20. The court made it clear that it thought that number was no accident when it pointed out that one state senator unsuccessfully proposed lowering the threshold to 25 million monthly users in an effort to include Parler and Gab, which are both popular with conservatives. “The record in this case confirms that the Legislature intended to target large social media platforms perceived as being biased against conservative views and the State’s disagreement with the social media platforms’ editorial discretion over their platforms. The evidence thus suggests that the State discriminated between social media platforms (or speakers) for reasons that do not stand up to scrutiny.”

HB 20 Does Not Even Survive Intermediate Scrutiny (Let Alone the Strict Scrutiny It Triggers)

  • The court held that the state failed to articulate a legitimate interest served by HB 20. The State offered two interests served by the law: (1) the “Free and unobstructed use of public forums and the information conduits provided by common carriers” and (2) “providing individual citizens effective protection against discriminatory practices, including discriminatory practices by common carriers.” The court thought (1) failed for several reasons, the most obvious being that platforms are not common carriers and “[e]ven if they were, the State provides no convincing support for recognizing a governmental interest in the free and unobstructed use of common carriers’ information conduits.” Additionally, the court pointed out that the Supreme Court rejected that same government interest in Tornillo. As for (2), the court cited Hurley once again. “Even given a state’s general interest in anti-discrimination laws, ‘forbidding acts of discrimination’ is ‘a decidedly fatal objective’ for the First Amendment’s ‘free speech commands.’” (Obviously, the state can and does prohibit certain acts of discrimination—even when those acts are engaged in by private associations, as the Court recognized in Roberts v. United States Jaycees—so as written, this is not quite true.)
  • The court also held that HB 20 was not narrowly tailored. Here, the court referenced the Northern District of Florida’s language when enjoining a similar law. “[T]he [Florida district court] colorfully described [the statute] as ‘an instance of burning the house to roast a pig.’ This Court could not do better in describing HB 20.” The court noted that, instead of creating its own unmoderated platform, which Texas of course could do, the state chose to target the large platforms it believed to be “West Coast oligarchs” who were “silenc[ing] conservative viewpoints and ideas.”

As in Florida, the court found that nothing could be severed and survive.

Concluding Thoughts

The court’s opinion is a complete win for platforms (and, frankly, users). As both the preliminary injunction in this case as well as in the Florida case make clear, the First Amendment and Section 230 will continue to pose obstacles to the viability of state laws like HB 20.

Social Links is our ongoing series here at Socially Aware that rounds up current developments at the intersection of social media, policy, research, and the law.

Embedding social media posts can be considered copyright infringement…but is it?

A Manhattan federal judge ruled in August 2021 that the practice of embedding social media posts on third-party websites, without permission from the content owner, could violate the owner’s copyright.

The case centered on a 2017 video of a starving polar bear that nature photographer Paul Nicklen took and posted on his Instagram and Facebook accounts to highlight the effects of global warming. When the video went viral, Sinclair Broadcasting Group published an article about it and embedded Nicklen’s post without obtaining his permission.

In reaching his decision, U.S. District Judge Jed Rakoff rejected the “server test” from the 9th Circuit Court of Appeals, which generally holds that embedding content from a third party’s social media account only violates the content owner’s copyright if a copy is stored on the defendant’s servers.

A recent decision on this very topic, however, reveals a different perspective on embedding practices and the “server test.” Reuters reports that a San Francisco federal court rejected a group of photographers’ claims that Instagram’s embedding tool infringed their copyrights. Judge Charles Breyer of the U.S. District Court for the Northern District of California ruled that “the Instagram feature doesn’t violate the photographers’ exclusive right to display the pictures publicly because third-party websites that embed the images don’t store their own copies of them.” Unlike the New York ruling, this decision applies the Ninth Circuit’s “server test” discussed above.

Photographers Alexis Hunley and Matthew Brauer filed the class action complaint in May, and Instagram moved to dismiss in July. According to Reuters, Judge Breyer noted that “Because they do not store the images and videos, they do not ‘fix’ the copyrighted work in any ‘tangible medium of expression.’ . . . Therefore, when they embed the images and videos, they do not display ‘copies’ of the copyrighted work.”

These decisions will most likely end in appeals. We will continue to monitor the developments as they unfold.

Harnessing the wisdom – and skills – of the crowd to combat social media’s trust issues

Elections, COVID-19 vaccinations, Gabrielle Petito and Brian Laundrie’s disappearance, conspiracy theories about other missing persons: each represents a category that has placed social media platforms under intense scrutiny over their handling of the information – or misinformation – published across the social media landscape.

While social media platforms employ fact checkers to validate the veracity of posts, the staggering volume of posts on any given day makes it impossible for them to employ enough people to confirm or reject every entry on every channel globally, 24/7. Scale, not to mention cost, remains the central issue.

A recent Wired article suggests that relying on the “wisdom of the crowd” – groups of lay people – often matches or surpasses the accuracy that professional fact checkers provide. In addition to cost savings, this model also offers something that professional fact-checking programs do not: scalability.

These issues around trust and misinformation on social media platforms will continue to dominate headlines across technological, legal, and public opinion forums.

Social media algorithms result in surprising and unexpected moments for those grieving lost loved ones and friends

Many of us have experienced moments when social media serves up posts, memories, or birthday reminders of those who have passed. A recent Wired article explores the emotional and psychological effects of those on the receiving end of such social media notices. Particularly in the COVID-19 era, when many have experienced sudden losses of loved ones, friends, and family members, social media platforms have provided much-needed forums for those wanting to share news, information, tributes, memorials, and memories of those who have passed. Many of the major social media outlets have mechanisms and settings to help family members and friends manage their deceased loved ones’ accounts in respectful ways. Facebook offers both memorialized accounts and legacy contact settings; Instagram provides similar memorialization settings; and Twitter has processes in place  to work with an authorized estate representative to deactivate an account. Google provides a similarly robust program, allowing individuals to designate “digital beneficiaries” who will act on their behalf to manage their accounts once they’ve passed.

While none of us wants to think about end-of-life directives, taking into consideration the digital footprint that we leave behind on social media as part of our legacy is another factor in the technological landscape where we live today.

Young people, social media, and emerging from COVID-19

Much has been written in the last 18 months about the effects that COVID-19 has had on all of us, but in particular, on the youth in the United States. Common Sense, Hopelab, and the California Healthcare Foundation published Coping with COVID-19: How Young People Use Digital Media to Manage Their Mental Health earlier this year. It examined how youth used social media and other online tools to cope with the separation and isolation from friends and other social structures that are vital to their intellectual, social, and emotional development. 

While depression is on the rise among young people (as this infographic explains), there’s good news on the horizon. Recent analyses have noticed positive trends among young people with decreased levels of depression and cited two major factors that occurred during the pandemic: teens are getting more sleep, and they’re spending more time with family.

In addition, GLAAD (formerly known as the Gay and Lesbian Alliance Against Defamation) recently published its first-ever Social Media Safety Index report that examined social media platforms and LGBTQ+ youths’ involvement and usage. While the report cited an increase in hate speech and other hostile forums on social media, other analysts provided a more nuanced interpretation, citing that social media platforms provide a much-needed lifeline for LGBTQ+ youth as they seek information and support.

As part of the new Fair Consumer Contracts Act (Gesetz für Faire Verbraucherverträge; published in the Federal Gazette (Part I) no. 53/2021, p. 3433 et seq.; full text publicly available in German), Germany will soon require specific cancellation/termination mechanisms for consumer subscriptions. These mechanisms come on top of the updated EU-wide consumer contract rules under the EU Directives on Contracts for Digital Services and Content and on Contracts for the Consumer Sale of Goods and will take effect on July 1, 2022. Significant implementation effort is expected for affected providers.

Businesses offering subscriptions to German consumers will have to provide a specifically labeled button in their online retail channels that leads to a contact form through which consumers can cancel existing subscriptions with the click of only one further button. If the consumer enters sufficient data to identify the subscription to be cancelled, submitting the form will itself constitute a valid cancellation, and its effect cannot be made subject to further steps such as logins or second-factor (e.g., email, app) confirmations.
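To make the mechanics concrete, here is a minimal sketch of what a compliant one-click cancellation handler might look like. It is only an illustration under the assumptions noted in the comments (an invented CancellationRequest shape and confirmation scheme); the actual button labeling, data fields, and confirmation duties must follow the statute and guidance from counsel.

```typescript
// Hypothetical sketch of a one-click cancellation handler, assuming an
// invented CancellationRequest shape. The statutory details (button labeling,
// required data fields, confirmation duties) are simplified; this is not a
// statement of what the Act literally requires.

interface CancellationRequest {
  customerEmail: string;    // data the consumer enters to identify the subscription
  subscriptionId?: string;  // optional additional identifier
  effectiveDate?: string;   // ISO date; omitted means "at the earliest possible date"
}

interface CancellationResult {
  accepted: boolean;
  confirmationId?: string;
  reason?: string;
}

// Submitting the form is itself the cancellation: no login, no second-factor
// confirmation, and no further step may be required for it to take effect.
function submitCancellation(req: CancellationRequest): CancellationResult {
  if (!req.customerEmail.trim()) {
    return { accepted: false, reason: "Insufficient data to identify the subscription." };
  }
  // A real system would persist the request and send the consumer a durable
  // confirmation (e.g., by email) immediately after submission.
  return { accepted: true, confirmationId: `cancel-${Date.now()}` };
}

// Example: the single "confirm cancellation" button calls this directly.
console.log(submitCancellation({ customerEmail: "kunde@example.de" }));
```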

Failing to implement this “cancellation button” or “termination button” correctly will invalidate any minimum subscription terms or termination notice periods in contracts for which such a button should have been provided, and it will constitute a breach of consumer protection law that exposes providers to cease-and-desist claims.

Read the full client alert.

Clubhouse, the former invitation-only social media darling that has captured the attention of investors, social media early adopters, and competitors since its introduction in April 2020, now faces significant challenges as it strives to remain relevant and attract new and engaged users.

Since our previous report on Clubhouse in March 2021, the social media app has released some significant changes and upgrades, most notably launching an Android version and dropping its invitation-only requirement.

Based on trends from other social media apps and platforms, both changes would seem to predict a significant uptick in downloads, new users, and activity.

While Clubhouse added more than 1 million Android users within weeks of its launch in May 2021, more recent numbers indicate that the app’s popularity may be declining. Wired, citing a report from analytics group SensorTower, reported that Clubhouse had only 484,000 new installs globally during the five-day period of July 21-25, 2021, compared to the 10 million iOS downloads it received in February 2021. (To date, Clubhouse has 30.2 million installations, with 18.7 million of those on the iOS platform, also according to SensorTower as reported by TechCrunch.)

Yet, this slowdown in installs and usage may simply be an indication of the rapid maturation and leveling off of an app that is competing for attention against a plethora of other offerings that provide similar experiences, such as Spotify. ScreenRant, the online review publication, speculated that the significant decrease in downloads could be attributed to “the early novelty of the app wearing off, competitors offering their own takes on the curated audio rooms concept, and maybe even people leaving their homes a little more as COVID-19 restrictions are lifted.”

Even with those concerns, Clubhouse continues to capture the interest of investors and users. It enjoys some key advantages over larger competitors: early mover status, a smaller size, and a more nimble model that will enable it to introduce additional features and functions, such as its recent introduction of payments (for iOS users) and Backchannel, its direct messaging feature.

These upgrades will remain crucial to its continued growth and success. Even with pressure from tech giant competitors that are considering functionality similar to Clubhouse’s, many predict that Clubhouse will remain a strong competitor.

Some analysts, such as Bloomberg’s Alex Webb, raised a critical question about Clubhouse and its content monetization strategy.

Webb described one model akin to the subscription-based SiriusXM digital radio service, where users would pay for content either through a broad-based plan or for individual channels. Clubhouse recently rolled out its payments feature, which lets users send money directly to other club members (primarily tips for conversation and room hosts), but only for iOS users. Clubhouse will not take any percentage of those payments, raising additional questions about its future monetization strategy. The move does, however, give popular and influential content hosts an incentive to join Clubhouse and contribute to the app’s rising popularity.

Brands, too, are catching on quickly to the potential for marketing products on the app. The targeted focus that Clubhouse provides (with its moderated rooms) could help Clubhouse capitalize on brands’ desire to reach early-adopter influencers.

Noted author and technology analyst Nir Eyal unpacked Clubhouse’s appeal and how it follows his hook model, described as “a way of describing a user’s interactions with a product as they pass through four phases: a trigger to begin using the product, an action to satisfy the trigger, a variable reward for the action, and some type of investment that, ultimately, makes the product more valuable to the user. As [they go] through these phases, [they build] habits in the process.” Eyal explains how Clubhouse follows this four-step cycle, citing key aspects of the model that Clubhouse has clearly mastered in its early days, including internal triggers, variable rewards, scarcity, and rewards of the tribe.

Clubhouse will continue to be a social media phenomenon, and one to watch as it moves to its next level of adoption, innovation, and investment. We’ll keep a close eye on the app’s evolution, how it continues to push boundaries, and where it may be headed in light of its competitors’ developments.

With a judgment dated April 27 and published on June 4, 2021, the German Federal Court (Bundesgerichtshof – the “Court”) declared unfair and therefore illegal and unenforceable a common way to make changes to terms and conditions (“T&Cs”) used vis-à-vis consumers in Germany.

For more information, read the full client alert.

While Section 230 of the Communications Decency Act continues to face significant calls for reform or even elimination, the recent Coffee v. Google case illustrates that Section 230 continues to provide broad protection to online service providers.

In Coffee, the Northern District of California invoked Section 230 to dismiss a putative class action against Google alleging various claims premised on the theory that video games in the Google Play store with a gaming feature called “loot boxes” constituted illegal “slot machines or devices” under California state law.

To obtain these loot boxes, players must purchase virtual in-game currency through Google Play’s payment system. Players can then exchange the virtual currency for loot boxes, which give them a chance to obtain rare virtual items. Google charges a 30% commission on purchases of such virtual currency.

The plaintiffs asserted that these loot boxes “entice[d] consumers, including children, to engage in gambling and similar addictive conduct.” Because Google profited from the loot boxes through commission it charged on sales of virtual currency, the plaintiffs argued that Google should be held liable under a variety of state law claims.

In response, Google moved to dismiss the plaintiffs’ claims, arguing that it was immune under Section 230, which provides a safe harbor from claims that treat an online intermediary as the publisher or speaker of any information provided by another party.

The court evaluated Google’s Section 230 defense using the standard three-prong test as enunciated by the Ninth Circuit in Barnes v. Yahoo!, Inc.: Immunity exists for “(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.”

On the first prong—whether Google is a provider of an interactive computer service—the court determined that Google was a provider of such a service because it maintains a virtual online store where consumers can download various software applications that are generally created by other developers.

On the second prong—whether the plaintiffs seek to treat Google as a publisher or speaker under a state law cause of action—the court cited Fair Hous. Council of San Fernando Valley v. Roommates.Com for the proposition that publication includes “any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online.” Because the plaintiffs apparently sought an order requiring Google to screen apps offered through its Google Play store for those containing the loot boxes, the court reasoned that the plaintiffs’ claims did treat Google as the publisher of the video game apps at issue.

The plaintiffs argued that Section 230 only protects publishers of “speech” rather than publishers of other content such as software. But the court rejected that argument, citing Evans v. Hewlett-Packard Co., which held that the defendant enjoyed immunity under Section 230 in connection with the operation of a web-based store that distributed an app developed by a third party.

The plaintiffs also argued that Section 230 did not apply because their claims did not treat Google as a publisher of another’s content but rather sought to “hold Google accountable for permitting and facilitating illegal gambling.” The plaintiffs cited Barnes for the proposition that Section 230 does not insulate interactive computer service providers from liability for their own wrongful conduct that goes beyond merely publishing another’s content.

Unconvinced, the court noted that Barnes denied Section 230 immunity to Yahoo! with respect to a promise that Yahoo! had made to the plaintiff to remove certain third-party content subsequent to, and separately from, the initial publication of the content. Google had made no such promise to the plaintiff in the instant case.

Finally, on the third prong—whether the information was provided by another information content provider—the plaintiffs noted the holding in Roommates that “[a] website operator is immune [under Section 230] only with respect to content created entirely by third parties.” Specifically, a provider that materially contributes to the illegality of the content at issue is not entitled to immunity under Section 230. However, the court held that the plaintiffs failed to allege any conduct by Google that would constitute a material contribution to the video games on its app store.

Accordingly, the court held that Google met all three prongs of the test and was entitled to immunity under Section 230 with respect to the apps in the Google Play store. The court therefore dismissed the case with leave to amend.

Companies contracting with consumers have to take care to ensure their agreement terms are enforceable. In one of the first post-Brexit decisions on issues in an online consumer contract, a UK court recently showed that principles of fairness and transparency remain vital in the terms and conditions of consumer digital contracts.

In Europe, drafting digital consumer contracts requires extra care and thought to be given to incorporation, meanings, and additional regulations in comparison to B2B contracts. This is equally true in a post-Brexit world as it was back in 2012, when we reported on something similar. Plus ça change, plus c’est la même chose (“the more things change, the more they stay the same”) – as no Brexiteer would ever say.

Any consumer-facing company doing business online must step back and ask the question: Would this hold up in court? We’ve outlined some key takeaways from this case for organizations to consider when drafting digital consumer contracts that apply to UK-based customers.

For more information, read the full client alert.

Partner Christiane Stuetzle, senior associate Patricia Ernst, and research assistant Susan Bischoff authored an article for Law360 covering how online content service providers must act to mitigate risks and avoid liability under the European Union’s Copyright Directive, created in an effort to strengthen the rights of copyright holders by making certain platforms that host user-uploaded content (UUC) liable for copyright infringements.

This article was first published on Law360 on May 14, 2021. It is also available as a download on our website. (Please note that Law360 may require a subscription to access this article.)

===

Until now, throughout the European Union, platforms hosting user-uploaded content have profited from the safe harbor privilege under the EU E-Commerce Directive, which has been shielding platforms from liability for copyright-infringing user uploads for more than 20 years.[1]

This safe harbor privilege means that online content-sharing service providers, or OCSSPs, need only remove copyright-infringing content upon notice to avoid liability for copyright infringement.[2]

In an effort to strengthen the rights of copyright holders, the EU legislator recently decided, however, that certain platform providers will be on the hook for copyright infringements pertaining to user-uploaded content.[3] The directive’s rationale is to close the so-called value gap, a term describing the remuneration that rights holders miss out on when their works are uploaded and shared online by users.

While rights holders have a remuneration claim against such users — though it is difficult to enforce and rarely valuable commercially — they did not have a claim against OCSSPs until now. The directive’s new liability regime overhauls this substantially: OCSSPs will be considered to commit copyright infringement by making illegal user-uploaded content available.

Liability means, among other things, that OCSSPs can be subject to substantial remuneration and damage claims. Companies need to be aware that this applies even if the OCSSP has properly instructed its users, within its standard terms and conditions, that only non-infringing content may be uploaded.

Not all EU member states are supportive of the overhaul. In fact, the Polish government has even challenged the directive before the European Court of Justice.[4] While there is a lot of debate on the details of Article 17, the new liability regime is just around the corner.

The directive requires implementation into national law by June 7. National implementations vary, both in timing (some member states will not meet the deadline) and in the scope of the liability exemptions. If an EU member state fails to transpose the directive within the deadline, the directive will apply directly.

Therefore, it is high time companies had a plan of action to mitigate risks and avoid liability.

Who Is Affected and What Is the Exposure?

The new liability scheme is relevant to OCSSPs only, i.e., service providers whose main purpose, or at least one of whose main purposes, is to store and give the public access to a large amount of copyright-protected works or other subject matter uploaded by their users, which the provider organizes and promotes for profit-making purposes.

For example, categorizing the content and using targeted promotion within the service would qualify as such organizing. This definition includes certain social media platforms.

For international companies, it is important to keep in mind that these new obligations will apply to any OCSSP hosting copyright-infringing user-uploaded content targeted at the European market regardless of whether its seat is situated within or outside the EU. An OCSSP headquartered in the U.S. but also doing business in Europe will therefore be subject to the new liability scheme.

Sanctions for the copyright infringement committed by an OCSSP will take the form of legal remedies and claims determined by the respective EU member state. In Germany, for instance, such sanctions include not only cease-and-desist orders and a penalty in case of recurring infringement, but also information claims and claims for damages in the amount of a hypothetical license fee for the infringing use.

How to Avoid Liability

Under the new liability regime, OCSSPs will have to observe proactive obligations in order to avoid a direct liability for copyright-infringing user-uploaded content.

First and foremost, an OCSSP can avoid direct liability by obtaining a license for the third-party content uploaded by its user.

If no license has been secured, the OCSSP must be able to prove that it complied with the following three obligations:

  • The OCSSP must make “best efforts” to obtain a license. The directive’s wording is less precise than some of the national draft implementations. Still, on this basis alone, it is likely that an OCSSP will be required to actively seek licenses from collective rights societies and large rights holders. License offers do not have to be accepted at any price, but a rejection means that the OCSSP’s liability for infringement of the relevant works as part of user-uploaded content remains — subject to the conditions described below.
  • The OCSSP must make best efforts to block copyright-infringing content for which the rights holder has provided the relevant information. Rights holders can provide the OCSSP with information on works that they do not wish to be included in user-uploaded content. In practice, OCSSPs will likely have to implement some sort of reference database fed with that information. Content uploaded by the OCSSP’s users will then have to be checked against the database (a minimal sketch of this matching step follows this list). It is to be expected that OCSSPs will only be able to comply with this obligation by employing advanced filter technologies.
  • Finally, the OCSSP must act expeditiously to disable access to or to remove content upon receiving a corresponding request from the rights holder. Subsequently, the OCSSP must block future attempts to upload the removed content (the “stay-down” obligation) by suitable technical means. While the directive does not call such means “automated upload filters,” most stakeholders characterize this obligation as precisely that. This underlines the economic dimension of Article 17: there is no “one size fits all” filter technology, and the directive leaves open how much companies have to invest in filter technologies.
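For illustration only, the sketch below shows the general shape of such a reference-database check, assuming rights holders supply exact content fingerprints (simplified here to SHA-256 hashes) and using invented function names. Production “upload filter” systems rely on far more sophisticated audio and video fingerprinting, and nothing here reflects any particular vendor’s technology.

```typescript
// Minimal, non-authoritative sketch of checking uploads against a reference
// database fed with rights holders' blocking requests. Exact hashing is a
// stand-in for real content fingerprinting.

import { createHash } from "node:crypto";

// Reference database: fingerprint -> rights holder who requested blocking.
const blockedFingerprints = new Map<string, { rightsHolder: string }>();

function fingerprint(content: Buffer): string {
  return createHash("sha256").update(content).digest("hex");
}

// Called when a rights holder provides the "relevant and necessary information".
function registerBlockingRequest(work: Buffer, rightsHolder: string): void {
  blockedFingerprints.set(fingerprint(work), { rightsHolder });
}

// Called on every user upload before the content goes live.
function checkUpload(upload: Buffer): { allowed: boolean; rightsHolder?: string } {
  const match = blockedFingerprints.get(fingerprint(upload));
  return match ? { allowed: false, rightsHolder: match.rightsHolder } : { allowed: true };
}

// Example
registerBlockingRequest(Buffer.from("protected work"), "Example Music GmbH");
console.log(checkUpload(Buffer.from("protected work")));        // { allowed: false, ... }
console.log(checkUpload(Buffer.from("original user content"))); // { allowed: true }
```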

Adherence to these obligations by the OCSSP shall be determined in accordance with high industry standards of professional diligence and the principle of proportionality. This broad approach leaves EU member states with significant leeway when implementing the directive.

27 EU Member States, 27 Different Sets of Rules

As some of the member states’ implementations contain detailed specifications of the obligations imposed on OCSSPs, providers must consider the national approaches.

For companies operating internationally, it goes without saying that adhering to 27 different national compliance concepts will not be feasible. Instead, they will likely require a compliance concept that aligns with the strictest national set of rules or with their main market.

Because Germany is one of the most economically relevant markets within the EU, the German implementation draft of the directive, the German Draft Act,[5] is one of the implementations that OCSSPs should closely monitor when developing their risk strategy.

The German Draft Act at a Glance

With regard to licensing best efforts, the German Draft Act adds the requirement that OCSSPs must accept licenses available through a collective management organization or a dependent collecting body established in Germany, as well as licenses offered individually by rights holders, provided that the licenses:

  • Concern content that the OCSSP typically makes available in more than minor quantities, e.g., audio-visual content, music;
  • Cover a considerable repertoire of works and rights holders as well as the German territory; and
  • Allow for use under reasonable terms and conditions, including a reasonable remuneration.

In addition, the OCSSP has to proactively seek licenses from rights holders known to the OCSSP from prior business relationships or other circumstances. The German Draft Act does not stop here though. It also includes a direct compensation claim of authors and performers against the OCSSP to be asserted by collecting societies only.

This even applies where the OCSSP obtains the license from a rights holder such as a record label or a publisher. Even though the OCSSP has not entered into a contract with the author in that case, the author can claim appropriate remuneration from the OCSSP via its collecting society. Business insiders expect that the validity of such double payments will be among the first questions to be presented to the courts.

As regards the obligation to make best efforts to block pre-notified content, the German Draft Act provides for a nuanced procedure.

The user must be given the opportunity to flag the content as statutorily (parody, quote, etc.) or contractually permitted use. In addition, the German Draft Act introduces a new statutory copyright exemption for minor uses — up to 15 seconds of video, 160 characters of text or 125 kilobytes of graphic — against a statutory license fee that the OCSSP has to pay.

This approach is remarkable for a number of reasons. At present, fundamental copyright exemptions such as the right of parody, under German law, do not require extra payment. That principle stays valid — except for uses of parody on OCSSPs. Further, the minor use exemption is a provision by the German legislator without basis in the Copyright Directive and the Information Society Directive.

The European Court of Justice had only recently determined in its “Metal on Metal” decision that copyright exemptions were to be conclusively determined by the Information Society Directive.[6] It remains to be seen if the German legislator’s current inconsistent and much criticized approach will make it into the final implementation.

Another German specificity is the “red buzzer” — a term that until recently was more likely to be associated with game shows than with the law. If content is flagged by the user or qualifies as minor use, the OCSSP must upload the content and inform the rights holder.

The rights holder may file a complaint with the OCSSP, starting a maximum one-week-long decision process. In such a case, trustworthy rights holders can make use of the red buzzer procedure, with the German Draft Act lacking a definition for “trustworthy.”

Once the red buzzer is pushed, the OCSSP is then required to immediately block the content until the conclusion of the complaints procedure. In theory, this may sound appealing to rights holders — especially for live broadcasts or content premieres since a few days of illegal exploitation can have a huge commercial impact.

In practice, the hurdle of the red buzzer is that the decision to use it needs to be made by a natural person, not an algorithm. That means rights holders need to staff up in order to benefit from the red buzzer.

If no red buzzer procedure applies, content will stay online until the complaint procedure’s conclusion.

Where content neither is flagged nor qualifies as minor use, the OCSSP must block the content and inform the user, which in turn may file a complaint with the OCSSP.

If works in user-uploaded content do not match any blocking requests, the content will be uploaded.
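Pulling the preceding paragraphs together, the decision flow under the German Draft Act can be summarized roughly as follows. The sketch below is a simplification with invented type and function names; deadlines, notification duties, and the “trustworthy rights holder” status are reduced to simple flags and should not be read as the statutory text.

```typescript
// Non-authoritative sketch of the upload decision flow described above.

interface Upload {
  matchesBlockingRequest: boolean;   // hit against the rights holders' reference database
  flaggedAsPermittedByUser: boolean; // user flags parody, quotation, license, etc.
  qualifiesAsMinorUse: boolean;      // within the 15-second / 160-character / 125 KB thresholds
}

type Disposition =
  | { status: "published" }
  | { status: "published-pending-complaint"; notify: "rights holder" }
  | { status: "blocked-pending-complaint"; notify: "user" };

function decideUpload(u: Upload): Disposition {
  // No match against any blocking request: the content simply goes live.
  if (!u.matchesBlockingRequest) {
    return { status: "published" };
  }
  // Flagged or minor use: publish and inform the rights holder, who may file a
  // complaint (and, if trustworthy, trigger interim blocking via the red buzzer).
  if (u.flaggedAsPermittedByUser || u.qualifiesAsMinorUse) {
    return { status: "published-pending-complaint", notify: "rights holder" };
  }
  // Otherwise: block the content and inform the user, who may file a complaint.
  return { status: "blocked-pending-complaint", notify: "user" };
}

// A trustworthy rights holder pressing the "red buzzer" during a pending complaint:
// the content must be blocked until the complaint procedure concludes, and the
// decision to press it must be made by a natural person, not an algorithm.
function pressRedBuzzer(current: Disposition, rightsHolderIsTrustworthy: boolean): Disposition {
  if (current.status === "published-pending-complaint" && rightsHolderIsTrustworthy) {
    return { status: "blocked-pending-complaint", notify: "user" };
  }
  return current;
}

// Example: a blocked-work match that the user flags as parody stays online pending the complaint.
console.log(decideUpload({ matchesBlockingRequest: true, flaggedAsPermittedByUser: true, qualifiesAsMinorUse: false }));
```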

Practical Considerations

The EU Commission announced that it would publish guidelines on the interpretation and implementation of Article 17.[7] Yet, with less than a month to go before the deadline, these guidelines are still in the making.

Since the directive will become directly applicable upon expiry of the implementation deadline, OCSSPs are well advised to have a strategy in place.

Regardless of national nuances, it seems very likely that the use of advanced filter technologies to meet the stay-down requirements will be a common requirement across the EU. Depending on the business model of the respective OCSSP, in an ideal scenario such filter technology already exists and can be licensed from a vendor; some companies already offer filter solutions for music, for instance. In other cases, there may be a need to develop tailor-made filter technology.

With regard to licensing best efforts, it makes sense for OCSSPs to prioritize the most relevant and most frequently used content categories or even rights catalogs and to actively approach these rights holders first, so as to secure licenses or, at least, to evidence best efforts.

Footnotes

[1] Article 14, Directive 2000/31/EC of the European Parliament and of the Council of June 8, 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market.

[2] Under the nuanced and well-established case-law of the European Court of Justice (“ECJ”), the Safe Harbor privilege of the E-Commerce-Directive (see footnote 1) implies a neutral position of the host provider. This requires that the platform confines itself to providing its service neutrally and passively by a merely technical and automatic processing of the data provided by its customers (Case C-324/09, Judgment July 12, 2011, point 113; Joint Cases C-236/08 to C-238/08, Judgment March 23, 2010, point 114). Where a service provider plays an active role, e.g., by optimizing or promoting user content (Case C-324/09, point 116), it has presumed knowledge of or control over unlawful content stored and does thus not profit from the Safe Harbor protection.

[3] Article 17 of the EU Copyright Directive, Directive (EU) 2019/790 of the European Parliament and of the Council of April 17, 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC.

[4] Action brought on May 24, 2019, Case C-401/19. The opinion of the Advocate General that precedes each decision of the ECJ is scheduled for July 15, 2021.

[5] An English language version of the latest draft “Act on the Copyright Liability of Online Sharing Content Service Providers” can be found here. The draft is currently discussed in the relevant committees of the Parliament.

[6] Case C-476/17, Judgment July 29, 2019.

[7] The directive requires the European Commission to issue a guidance on the understanding of the directive’s provisions, including best practices for its implementation by the member states. The guidance shall be based on a stakeholder dialogue process. Accordingly, the Commission held several stakeholder dialogue meetings between October 2019 and February 2020 (further information on the meetings can be found here) and called for written statements of the stakeholders.

After the presentation of a general “European Approach to Artificial Intelligence” by the EU Commission in March 2021, a detailed draft regulation aimed at safeguarding fundamental EU rights and user safety was published today (“Draft Regulation”). The Draft Regulation’s main provisions are the following:

  • A binding regulation for AI Systems (defined below) that directly applies to Providers and Users (both defined below), importers, and distributors of AI Systems in the EU, regardless of their seat.
  • A blacklist of certain AI practices.
  • Fines of up to EUR 30 million or up to 6% of annual turnover, whichever is higher.
  • Transparency obligations and, for High-Risk AI Systems (defined below), registration and extensive compliance obligations.

For more information, read the full client alert.