An advertising executive who lost his job after being named on an anonymous Instagram account is suing the now-defunct account for defamation. The suit names as defendants not only the account—Diet Madison Avenue, which was intended to root out harassment and discrimination at ad agencies—but also (as “Jane Doe 1,” “Jane Doe 2,” et cetera) several of the anonymous people who ran it. Whether Instagram will ultimately have to turn over the identities of the users behind the account depends on a couple of key legal issues.

A bill recently passed by the New York State Senate makes it a crime for “a caretaker to post a vulnerable elderly person on social media without their consent.” At least one tech columnist thinks the legislation is so broadly worded that it violates the U.S. Constitution. That may be so, but, in light of several news reports about this unfortunate form of elder abuse over the last few years, the columnist’s suggestion that the bill was likely passed in response to a one-time incident may not be correct.

A new law in Egypt that categorizes social media accounts and blogs with more than 5,000 followers as media outlets allows the government in that country to block those accounts and blogs for publishing fake news. Some critics aren’t buying the government’s explanation for the law’s implementation, however, and are suggesting it was inspired by a very different motivation.

Critics of the most recent version of the European Copyright Directive’s Article 13, which the European Parliament rejected in early July, brought home their message by arguing that it would have prevented social media users from uploading and sharing their favorite memes.

In a criminal trial, social media posts may be used by both the prosecution and the defense to impeach a witness, but—as with all impeachment evidence—the posts’ use and scope are entirely within the discretion of the trial court. The New York Law Journal’s cybercrime columnist explains.

To thwart rampant cheating by high school children, one country shut down the Internet nationwide during certain hours and had social media platforms go dark for the whole exam period.

Snapchat now allows users to unsend messages. Here’s how.

Employees of Burger King’s Russian division recently had to eat crow for a tasteless social media campaign that offered women a lifetime supply of Whoppers as well as three million Russian rubles ($47,000) in exchange for accomplishing a really crass feat.

We’ve all heard of drivers experiencing road rage, but how about members of the public experiencing robot rage? According to a company that supplies cooler-sized food-delivery robots, it’s a thing.

Based on copyright infringement, emotional distress and other claims, a federal district court in California awarded $6.4 million to a victim of revenge porn, the posting of explicit material without the subject’s consent. The judgment is believed to be one of the largest awards relating to revenge porn. A Socially Aware post that we wrote back in 2014 explains the difficulties of using causes of action like copyright infringement—and state laws—as vehicles for fighting revenge porn.

The highest court in New York State held that whether a personal injury plaintiff’s Facebook photos are discoverable does not depend on whether the photos were set to “private,” but rather on “the nature of the event giving rise to the litigation and the injuries claimed, as well as any other information specific to the case.”

A federal district court held that Kentucky’s governor did not violate the free speech rights of two Kentucky citizens when he blocked them from commenting on his Facebook page and Twitter account. The opinion underscores differences among courts as to the First Amendment’s application to government officials’ social media accounts; for example, a Virginia federal district court’s 2017 holding reached the opposite conclusion in a case involving similar facts.

Having witnessed social media’s potential to escalate gang disputes, judges in Illinois have imposed limitations on some juvenile defendants’ use of the popular platforms, a move that some defense attorneys argue violates the young defendants’ First Amendment rights.

A bill proposed by California State Sen. Bob Hertzberg would require social media platforms to identify bots—automated accounts that appear to be owned by real people but are actually computer programs capable of simulating human dialog. Bots can spread a message across social media faster and more widely than would be humanly possible, and have been used in efforts to manipulate public opinion.

This CIO article lists the new strategies, job titles and processes that will be popular this year among businesses transforming into data-driven enterprises.

A solo law practitioner in Chicago filed a complaint claiming defamation and false light against a former client who, she alleges, posted a Yelp review calling her a “con artist” and a “legal predator” after she billed $9,000 to his credit card, allegedly pursuant to the terms of his retainer, for a significant amount of legal work.

Carnival Cruise Line put up signs all over the hometown of the 15-year-old owner of the Snapchat handle @CarnivalCruise in order to locate him and offer him and his family a luxurious free vacation in exchange for the transfer of his Snapchat handle—and the unusual but innovative strategy paid off. Who knew that old-school billboards could be so effectively used for one-on-one marketing?

In a decision that has generated considerable controversy, a federal court in New York has held that the popular practice of embedding tweets into websites and blogs can result in copyright infringement. Plaintiff Justin Goldman had taken a photo of NFL quarterback Tom Brady, which Goldman posted to Snapchat. Snapchat users “screengrabbed” the image for use in tweets on Twitter. The defendants—nine news outlets—embedded tweets featuring the Goldman photo into online articles so that the photo itself was never hosted on the news outlets’ servers; rather, it was hosted on Twitter’s servers (a process known as “framing” or “inline linking”). The court found that, even absent any copying of the image onto their own servers, the news outlets’ actions had resulted in a violation of Goldman’s exclusive right to authorize the public display of his photo.
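For readers unfamiliar with the mechanics at issue, a minimal sketch may help (the markup and image URL below are hypothetical, not the actual Goldman tweet): a page that embeds a tweet stores only markup referencing the image at its Twitter-hosted URL, so the reader’s browser fetches the photo directly from Twitter’s servers and no copy of the file ever resides on the publisher’s server.

```python
# Hypothetical sketch of "inline linking"/"framing": the publisher's page
# contains only markup pointing at a third-party URL; the image bytes are
# never copied to the publisher's own server. The URL is illustrative.

embedded_article = """
<article>
  <p>Story discussing the photo...</p>
  <!-- src points at Twitter's servers, not the news outlet's -->
  <img src="https://pbs.twimg.com/media/EXAMPLE.jpg" alt="embedded photo">
</article>
"""

# All the publisher hosts is this text; the reader's browser requests the
# image directly from the third-party host when the page is rendered.
assert "pbs.twimg.com" in embedded_article
assert "<img" in embedded_article
```

The Goldman court’s point was that this architecture alone does not settle the question: even though the image bytes never touch the embedder’s servers, the display right can still be infringed.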

If legislation recently introduced in California passes, businesses with apps or websites requiring passwords and enabling Golden State residents younger than 18 to share content could be prohibited from asking those minors to agree to the site’s or the app’s terms and conditions of use.

After a lawyer was unable to serve process by delivering court documents to a defendant’s physical and email addresses, the Ontario Superior Court granted the lawyer permission to serve process by mailing a statement of claim to the defendant’s last known address and by sending the statement of claim through private messages to the defendant’s Instagram and LinkedIn accounts. This is reportedly the first time an Ontario court has permitted service of process through social media. The first instance that we at Socially Aware heard of a U.S. court permitting a plaintiff to serve process on a domestic, U.S.-based defendant through a social media account happened back in 2014.

Videos that impose celebrities’ and non-famous people’s faces onto porn performers’ bodies to produce believable videos have surfaced on the Internet, and are on the verge of proliferating. Unlike the non-consensual dissemination of explicit photos that haven’t been manipulated—sometimes referred to as “revenge porn”—this fake porn is technically not a privacy issue, and making it illegal could raise First Amendment issues.

By mining datasets and social media to recover millions of dollars lost to tax fraud and errors, the IRS may be violating common law and the Electronic Communications Privacy Act, according to an op-ed piece in The Hill.

A woman is suing her ex-husband, a sheriff’s deputy in Georgia, for having her and her friend arrested and briefly jailed for posting on Facebook about his alleged refusal to drop off medication for his sick children on his way to work. The women had been charged with “criminal defamation of character” but the case was ultimately dropped after a state court judge ruled there was no basis for the arrest.

During a hearing in a Manhattan federal court over a suit brought by seven Twitter users who say President Trump blocked them on Twitter for having responded to his tweets, the plaintiffs’ lawyer compared Twitter to a “virtual town hall” where “blocking is a state action and violates the First Amendment.” A government attorney, on the other hand, analogized the social media platform to a convention where the presiding official can decide whether or not to engage with someone. The district court judge who heard the arguments declined to decide the case on the spot and encouraged the parties to settle out of court.

Have your social media connections been posting headshots of themselves alongside historical portraits of people who look just like them? Those posts are the product of a Google app that matches the photo of a person’s face to a famous work of art, and the results can be fun. But not for people who live in Illinois or Texas, where access to the app isn’t available. Experts believe it’s because laws in those states restrict how companies can use biometric data.

The stock market is apparently keeping up with the Kardashians. A day after Kim Kardashian’s half-sister Kylie Jenner tweeted her frustration with Snapchat’s recent redesign, the company’s market value decreased by $1.3 billion.

As we have noted previously, YouTube users sometimes object when the online video giant removes their videos based on terms-of-use violations, such as artificially inflated view counts. In a recent California case, Bartholomew v. YouTube, LLC, the court rejected a user’s claim that the statement YouTube posted after it removed her video, which allegedly gave the impression that the video contained offensive content, was defamatory.

Joyce Bartholomew is a musician who creates what she calls “original Christian ministry music.” Ms. Bartholomew produced a video for the song “What Was Your Name” and posted the video on YouTube in January 2014. YouTube assigned a URL to the video, which Ms. Bartholomew began sharing with her listeners and viewers. She claims that, by April 2014, the video had amassed over 30,000 views.

Shortly afterwards, however, YouTube removed the video and replaced it with the image of a “distressed face” and the following removal statement: “This video has been removed because its content violated YouTube’s Terms of Service.” The removal statement also provided a hyperlink to YouTube’s “Community Guideline Tips,” which identifies 10 categories of prohibited content: “Sex and Nudity,” “Hate Speech,” “Shocking and Disgusting,” “Dangerous Illegal Acts,” “Children,” “Copyright,” “Privacy,” “Harassment,” “Impersonation” and “Threats.”

A defamation suit brought by one reality television star against another—and naming Discovery Communications as a defendant—could determine to what extent (if any) media companies may be held responsible for what their talent posts on social media.

In a move characterized as setting legal precedent, UK lawyers served an injunction against “persons unknown” via an email account linked to someone who was posting allegedly defamatory “fake news” stories on social media.

European regulators fined Google $2.7 billion for violating antitrust law by allegedly tailoring algorithms for product-related queries to promote its own comparison shopping service. If the search company doesn’t change how its search engine works in the EU in the next few months, it risks fines of up to 5% of its parent company Alphabet Inc.’s daily revenue.

A newly formed trade group, called the Influencer Marketing Council, is representing social influencers in discussions with regulators and Internet platforms, and is leading an effort to outline best practices for complying with the FTC’s endorsement guidelines.

Pinterest’s commercial progress has reportedly been hampered by several factors, including the format of its advertisements, which must mimic user posts—something that requires brands to design content specifically for the platform.

Members of law enforcement have expressed concerns regarding the safety risks posed by a Snapchat update that lets users see the exact location of their Snapchat “friends.” An article on The Verge has some useful tips on how to use the function, which is called Snap Map, and how to turn it off.

Because the First Amendment limits the U.S. government’s ability to regulate the policies and guidelines of search companies and social media platforms, companies like Google and Twitter may eventually find themselves regulated de facto, even within the United States, by foreign nations whose governments are entitled under their own laws to regulate what happens on the Internet in order to protect their citizens.

Several A-list musicians have stepped away from social media at least partly because their incredible popularity has made them an attractive target for trolls.

Here are tips on how to limit the information that online service providers collect about you when you use social media and surf the web.


A recent California court decision involving Section 230 of the Communications Decency Act (CDA) is creating considerable concern among social media companies and other website operators.

As we’ve discussed in past blog posts, CDA Section 230 has played an essential role in the growth of the Internet by shielding website operators from defamation and other claims arising from content posted to their websites by others.

Under Section 230, a website operator is not “treated as the publisher or speaker of any information provided” by a user of that website; as a result, online businesses such as Facebook, Twitter and YouTube have been able to thrive despite hosting user-generated content on their platforms that may be false, deceptive or malicious and that, absent Section 230, might subject these and other Internet companies to crippling lawsuits.

Recently, however, the California Court of Appeal affirmed a lower court opinion that could significantly narrow the contours of Section 230 protection. After a law firm sued a former client for posting defamatory reviews on Yelp.com, the court not only ordered the former client to remove the reviews, but also demanded that Yelp (which was not a party to the dispute) remove them.

The case, Hassell v. Bird, began in 2013 when attorney Dawn Hassell sued former client Ava Bird regarding three negative reviews that Hassell claimed Bird had published on Yelp.com under different usernames. Hassell alleged that Bird had defamed her, and, after Bird failed to appear, the California trial court issued an order granting Hassell’s requested damages and injunctive relief.

In particular, the court ordered Bird to remove the offending posts, but Hassell further requested that the court require Yelp to remove the posts because Bird had not appeared in the case herself. The court agreed, entering a default judgment and ordering Yelp to remove the offending posts. (The trial court also ordered that any subsequent comments associated with Bird’s alleged usernames be removed, which the Court of Appeal struck down as an impermissible prior restraint.) Yelp challenged the order on a variety of grounds, including under Section 230.

The Court of Appeal held that the Section 230 safe harbor did not apply, and that Yelp could be forced to comply with the order. The court reasoned that the order requiring Yelp to remove the reviews did not impose any liability on Yelp; Yelp was not itself sued for defamation and had no damages exposure, so Yelp did not face liability as a speaker or publisher of third-party speech. Rather, citing California law that authorized a court to prevent the repetition of “statements that have been adjudged to be defamatory,” the court characterized the injunction as “simply” controlling “the perpetuation of judicially declared defamatory statements.” The court acknowledged that Yelp could face liability for failing to comply with the injunction, but that would be liability under the court’s contempt power, not liability as a speaker or publisher.

The Hassell case represents a significant setback for social media companies, bloggers and other website operators who rely on the Section 230 safe harbor to shield themselves from the misconduct of their users. While courts have previously held that a website operator may be liable for “contribut[ing] materially to the alleged illegality of the conduct”—such as StubHub.com allegedly suggesting and encouraging illegally high ticket resale prices—here, in contrast, there is no claim that Yelp contributed to or aided in the creation or publication of the defamatory reviews beyond merely providing the platform on which such reviews were hosted.

Of particular concern for online businesses is that Hassell appears to create an end-run around Section 230 for plaintiffs who seek to have allegedly defamatory or false user-generated content removed from a website—sue the suspected posting party and, if that party fails to appear, obtain a default judgment; with a default judgment in hand, seek a court order requiring the hosting website to remove the objectionable post, as the plaintiff was able to do in the Hassell case.

Commentators have observed that Hassell is one of a growing number of recent decisions seeking to curtail the scope of Section 230. After two decades of expansive applications of Section 230, are we now on the verge of a judicial backlash against the law that has helped to fuel the remarkable success of the U.S. Internet industry?

 

*          *          *

For other Socially Aware blog posts regarding the CDA Section 230 safe harbor, please see the following: How to Protect Your Company’s Social Media Currency; Google AdWords Decision Highlights Contours of the CDA Section 230 Safe Harbor; and A Dirty Job: TheDirty.com Cases Show the Limits of CDA Section 230.

 

The Newspaper Association of America has filed a first-of-its-kind complaint with the FTC over certain ad blocking technologies.

Is it “Internet” or “internet”? The Associated Press is about to change the capitalization rule.

Lots of people criticized Instagram’s new logo, but, according to a design-analysis app, it’s much better than the old logo at doing this.

Twitter has finally realized that people don’t use it to buy things.

Facebook wants to help sell every ad on the web.

A Russian law enforcement agency is investigating controversial groups alleged to have encouraged more than 100 teenage suicides on social media.

A self-proclaimed “badass lawyer” lost a defamation suit against a Twitter account that parodied him.

The Internet of every single thing must be stopped.

*        *       *

To stay abreast of social media-related legal and business developments, please subscribe to our free newsletter.

 

Positive I.D. The tech world recently took a giant step forward in the quest to create computers that accurately mimic human sensory and thought processes, thanks to Fei-Fei Li and Andrej Karpathy of the Stanford Artificial Intelligence Laboratory. The pair developed a program that identifies not just the subjects of a photo, but the action taking place in the image. Called NeuralTalk, the software captioned a picture of a man in a black shirt playing guitar, for example, as “man in black shirt is playing guitar,” according to The Verge. The program isn’t perfect, the publication reports, but it’s often correct and is sometimes “unnervingly accurate.” Potential applications for artificial “neural networks” like Li’s obviously include giving users the ability to search, using natural language, through image repositories both public and private (think “photo of Bobby getting his diploma at Yale”). But the technology could also be used in potentially life-saving ways, such as in cars that can warn drivers of potential hazards like potholes. And, of course, such neural networks would be incredibly valuable to marketers, allowing them to identify potential consumers of, say, sports equipment by searching through photos posted to social media for people using products in that category. As we discussed in a recent blog post, the explosive growth of the Internet of Things, wearables, big data analytics and other hot new technologies is being fueled at least in part by marketing uses—are artificial neural networks the next big thing to be embraced by marketers?

Cruel intentions. Laws seeking to regulate speech on the Internet must be narrowly drafted to avoid running afoul of the First Amendment, and limiting such a law’s applicability to intentional attempts to cause damage usually improves the law’s odds of meeting that requirement. Illustrating the importance of intent in free speech cases, an anti-revenge-porn law in Arizona was recently scrapped, in part because it applied to people who posted nude photos to the Internet irrespective of the poster’s intent. Now, a North Carolina Court of Appeals has held that an anti-cyberbullying law is constitutional because it, among other things, only prohibits posts to online networks that are made with “the intent to intimidate or torment a minor.” The court issued the holding in a lawsuit brought by a 19-year-old who was placed on 48 months’ probation and ordered to stay off social media websites for a year for having contributed to abusive social media posts that targeted one of his classmates. The teen’s suit alleged that the law he was convicted of violating, N.C. Gen. Stat. §14-458.1, is overbroad and unconstitutional. Upholding his conviction, the North Carolina Court of Appeals held, “It was not the content of Defendant’s Facebook comments that led to his conviction of cyberbullying. Rather, his specific intent to use those comments and the Internet as instrumentalities to intimidate or torment (a student) resulted in a jury finding him guilty under the Cyberbullying Statute.”

A dish best served cold. Restaurants and other service providers are often without effective legal recourse against Yelp and other “user review” websites when they’re faced with negative—even defamatory—online reviews because Section 230 of the Communications Decency Act (CDA), 47 U.S.C. § 230, insulates website operators from liability for content created by users (though there are, of course, exceptions). That didn’t stop the owner of KC’s Rib Shack in Manchester, New Hampshire, from exacting revenge, however, when an attendee of a 20-person birthday celebration at his restaurant wrote a scathing review on Yelp and Facebook admonishing the owner for approaching the party’s table “and very RUDELY [telling the diners] to keep quiet [since] others were trying to eat.” The review included “#boycott” and some expletives. In response, the restaurant’s owner, Kevin Cornish, replied to the self-identified disgruntled diner’s rant with his own review—of her singing. Cornish reminded the review writer that his establishment is “a family restaurant, not a bar,” and wrote, “I realize you felt as though everybody in the entire restaurant was rejoicing in the painful rendition of Bohemian Rhapsody you and your self-entitled friends were performing, yet that was not the case.” He encouraged her to continue her “social media crusade,” including the hashtag #IDon’tNeedInconsiderateCustomers. Cornish’s retort has so far garnered close to 4,000 Facebook likes and has been shared on Facebook more than 400 times.

 

Virginia’s highest court recently held that Yelp could not be forced to turn over the identities of anonymous online reviewers that the owner of a Virginia carpet-cleaning business claimed had tarnished his company.

In the summer of 2012, Joseph Hadeed, owner of Hadeed Carpet Cleaning, sued seven anonymous Yelp reviewers after receiving a series of critical reviews. Hadeed alleged that the reviewers were competitors masking themselves as Hadeed’s customers and that his sales tanked after the reviews were posted. Hadeed sued the reviewers as John Doe defendants for defamation and then subpoenaed Yelp, demanding that it reveal the reviewers’ identities.

Yelp argued that, without any proof that the reviewers were not Hadeed’s customers, the reviewers had a First Amendment right to post anonymously.

A Virginia trial court and the Court of Appeals sided with Hadeed, ordering Yelp to turn over the reviewers’ identities and holding it in contempt when it did not. But in April 2015, the Virginia Supreme Court vacated the lower court decisions on procedural grounds. Because Virginia’s legislature did not give Virginia’s state courts subpoena power over non-resident non-parties, the Supreme Court concluded, the Virginia trial court could not order the California-headquartered Yelp to produce documents located in California for Hadeed’s defamation action in Virginia.

Although the decision was a victory for Yelp, it was a narrow one, resting on procedural grounds. The Virginia Supreme Court did not address the broader First Amendment argument about anonymous posting and noted that it wouldn’t quash the subpoena because Hadeed could still try to enforce it under California law.

After the ruling, Yelp’s senior director of litigation, Aaron Schur, posted a statement on the company’s blog stating that, if Hadeed pursued the subpoena in California, Yelp would “continue to fight for the rights of these reviewers under the reasonable standards that California courts, and the First Amendment, require (standards we pushed the Virginia courts to adopt).” Schur added, “Fortunately the right to speak under a pseudonym is constitutionally protected and has long been recognized for the important information it allows individuals to contribute to public discourse.”

In 2009, a California law took effect, allowing anonymous Internet speakers whose identity is sought under a subpoena in California in connection with a lawsuit filed in another state to challenge the subpoena and recover attorneys’ fees if they are successful. In his Yelp post, Schur added that Hadeed’s case “highlights the need for stronger online free speech protection in Virginia and across the country.”

Had Hadeed sought to enforce the subpoena in California, the result may have been the same but possibly on different grounds. In California, where Yelp and many other social media companies are headquartered, the company would have been subject to a court’s subpoena power. Still, Yelp may have been protected from having to disclose its users’ identities. California courts have offered protections for anonymous speech under the First Amendment to the U.S. Constitution and the state constitutional right of privacy.

Nevertheless, there is no uniform rule as to whether companies must reveal identifying information of their anonymous users. In 2013, in Chevron Corp. v. Donziger, federal Magistrate Judge Nathanael M. Cousins of the Northern District of California concluded that Chevron’s subpoenas seeking identifying information of non-party Gmail and Yahoo Mail users were enforceable against Google and Yahoo, respectively, because the subpoenas did not seek expressive activity and because there is no privacy interest in subscriber and user information associated with email addresses.

On the other hand, in March 2015, Magistrate Judge Laurel Beeler of the same court held, in Music Group Macao Commercial Offshore Ltd. v. Does, that the plaintiffs could not compel nonparty Twitter to reveal the identifying information of its anonymous users, who, as in the Hadeed case, were Doe defendants. Music Group Macao sued the Doe defendants in Washington federal court for anonymously tweeting disparaging remarks about the company, its employees, and its CEO. After the Washington court ruled that the plaintiffs could obtain the identifying information from Twitter, the plaintiffs sought to enforce the subpoena in California. Magistrate Judge Beeler concluded that the Doe defendants’ First Amendment rights to speak anonymously outweighed the plaintiffs’ need for the requested information, citing familiar concerns that forcing Twitter to disclose the speakers’ identities would unduly chill protected speech.

Courts in other jurisdictions have imposed a range of evidentiary burdens on plaintiffs seeking the disclosure of anonymous Internet speakers. For example, federal courts in Connecticut and New York have required plaintiffs to make a prima facie showing of their claims before requiring Internet service providers (ISPs) to disclose anonymous defendants’ identities. A federal court in Washington found that a higher standard should apply when a subpoena seeks the identity of an Internet user who is not a party to the litigation. The Delaware Supreme Court has applied an even higher standard, expressing concern “that setting the standard too low will chill potential posters from exercising their First Amendment right to speak anonymously.”

These cases show that courts are continuing to grapple with social media as a platform for expressive activity. Although Yelp and Twitter were protected from having to disclose their anonymous users’ identities in these two recent cases, this area of law remains unsettled, and companies with social media presence should be familiar with the free speech and privacy law in the states where they conduct business and monitor courts’ treatment of these evolving issues.