As regular readers of Socially Aware already know, there are many potential traps for companies that use photographs or other content without authorization from the copyright owners. For example, companies have faced copyright infringement claims based on use of photos pulled from Twitter. Claims have even arisen from the common practice of embedding tweets on blogs and websites, and we have seen a flurry of stories recently about photographers suing celebrities for posting photos of themselves.

Now there is another potential source of liability: the appearance of murals in the background of photographs used in advertisements. In at least two recent cases, automotive companies have faced claims of copyright infringement from the creators of murals painted on buildings that appear in the backgrounds of ads.

Most recently, in a federal district court in the Eastern District of Michigan, Mercedes-Benz sought a declaratory judgment that its photographs, taken in Detroit (with permits from the city) and later posted on Instagram, did not infringe the copyrights of three defendants whose murals appeared in the backgrounds of those photographs.

Singapore has enacted a law granting government ministers the power to require social media platforms to completely remove or place warnings alongside posts the authorities designate as false.

Unlike the compensation earned by child stars who perform on television, in films, or on other traditional media in California, the income generated by children who

It is likely no surprise to regular readers of Socially Aware that posting content to social media can, in some cases, generate significant income. But those who make their living on social media may find their livelihood threatened if they fail to comply with the law and with the relevant platform’s terms of use.

For example, we often see trouble arise when social media users fail to follow the Federal Trade Commission’s disclosure rules in connection with receiving compensation in exchange for a promotional post, or when users purchase followers—a practice that violates most social media platforms’ terms of use and might be illegal. As we have noted previously, the social media platform, not the user, sets the rules. If your business model is built on a social media platform, you have to play by the platform’s rules.

Earning an honest living is what Instagram user “Ben” (the pseudonym assigned to him by MarketWatch) claims to have been doing when he was taking in approximately $4,000 per month by operating and curating several accounts containing memes originally created by third parties. (For those who have somehow managed to avoid this ubiquitous Internet phenomenon, Wikipedia describes a meme as a “piece of media that spreads, often . . . for humorous purposes, from person to person via the Internet.”)

A federal district court dismissed a case against supermodel Gigi Hadid for posting to Instagram a photo of herself that was taken by a paparazzo. The reason for the court’s decision was simple: The party claiming copyright ownership of the photo had failed to register it with the U.S. Copyright Office, a prerequisite to filing

A federal district court in Illinois recently held in Anand v. Heath that a digital marketing company could not force a user to arbitrate because a “Continue” button on its website did not provide clear notice that clicking the button constituted assent to the hyperlinked terms and conditions that contained the arbitration provision.

As we have noted previously, website operators who wish to enforce their online terms against users will have a higher likelihood of success if they do two things. First, the website should display the terms to users in a conspicuous fashion. Second, and applicable here, the website should affirmatively and unambiguously require users to assent to the terms. Anand demonstrates that online agreements risk unenforceability when the terms are presented in a manner that does not make clear to users that they are agreeing to be bound.

The website www.retailproductzone.com offers users free gift cards in exchange for their responses to surveys and for their consent to be contacted for marketing purposes. Reward Zone USA LLC, a subsidiary of Fluent Inc., maintains the website. In June 2017, plaintiff Narantuya Anand registered on www.retailproductzone.com and completed a survey to receive a free gift card. According to Anand, she then received several unwanted telemarketing voicemails and text messages. 

As we noted in our recent post on the Second Circuit case Herrick v. Grindr, LLC, Section 230 of the Communications Decency Act (CDA) continues to provide immunity to online intermediaries from liability for user content, despite pressure from courts and legislatures seeking to chip away at this safe harbor. The D.C. Circuit case Marshall’s Locksmith Service Inc. v. Google, LLC serves as another example of Section 230’s resiliency.

In Marshall’s Locksmith, the D.C. Circuit affirmed the dismissal of claims brought by 14 locksmith companies against search engine operators Google, Microsoft and Yahoo! for allegedly conspiring to allow “scam locksmiths” to inundate the online search results page in order to extract additional advertising revenue.

The scam locksmiths at issue published websites targeting heavily populated locations around the country, providing either a fictitious address or no address at all, to trick potential customers into believing that they were local businesses. The plaintiffs asserted various federal and state law claims against the search engine operators relating to false advertising, conspiracy and fraud based on their activities in connection with the scam locksmiths’ websites.

A California Superior Court’s recent ruling in Murphy v. Twitter held that Section 230 of the Communications Decency Act shielded Twitter from liability for suspending and banning a user’s account for violating the platform’s policies. As we have previously noted, Section 230 has come under pressure in recent years from both courts and legislatures. But we have also examined other cases demonstrating Section 230’s staying power. The ruling in Murphy again shows that, despite the challenges facing Section 230, the statute continues to serve its broader purpose of protecting social media platforms from the actions of their users while allowing those platforms to monitor and moderate their services.

From January to mid-October 2018, Meghan Murphy posted a number of tweets that misgendered and criticized transgender Twitter users. After first temporarily suspending her account, Twitter ultimately banned her from the platform for violating its Hateful Conduct Policy. Twitter had amended this policy in late October 2018 to specifically include targeted abuse and misgendering of transgender people.

As we have frequently noted on Socially Aware, Section 230 of the Communications Decency Act protects social media sites and other online platforms from liability for user-generated content. Sometimes referred to as “the law that gave us the modern Internet,” Section 230 has provided robust immunity for website operators since it was enacted in 1996. As we have also written previously, however, the historically broad Section 230 immunity has come under pressure in recent years, with both courts and legislatures chipping away at this important safe harbor.

Now, some lawmakers are proposing legislation to narrow the protections that Section 230 affords to website owners. They assert that changes to the section are necessary to protect Internet users from dangers such as sex trafficking and the doctored videos known as “deep fakes.”

The House Intelligence Committee Hearing

Recently, a low-tech fraudulent video that made House Speaker Nancy Pelosi’s speech appear slurred was widely shared on social media, inspiring Hany Farid, a computer-science professor and digital-forensics expert at the University of California, Berkeley, to tell The Washington Post, “This type of low-tech fake shows that there is a larger threat of misinformation campaigns—too many of us are willing to believe the worst in people that we disagree with.”

A federal district court in California has added to the small body of case law addressing whether it’s permissible for one party to use another party’s trademark as a hashtag. The court held that, for several reasons, the 9th Circuit’s nominative fair use analysis did not cover one company’s use of another company’s trademarks as

Often hailed as the law that gave us the modern Internet, Section 230 of the Communications Decency Act generally protects online platforms from liability for content posted by third parties. Many commentators, including us here at Socially Aware, have noted that Section 230 has faced significant challenges in recent years. But Section 230 has proven resilient (as we previously noted here and here), and that resiliency was again demonstrated by the Second Circuit’s recent opinion in Herrick v. Grindr, LLC.

As we noted in our prior post following the district court’s order dismissing plaintiff Herrick’s claims on Section 230 grounds, the case arose from fake Grindr profiles allegedly set up by Herrick’s ex-boyfriend. According to Herrick, these fake profiles resulted in Herrick facing harassment from over 1,000 strangers who showed up at his door over the course of several months seeking violent sexual encounters.