
Socially Aware Blog

The Law and Business of Social Media

Your Votes Can Help Us Share Our Expertise at SXSW Interactive 2016!

Posted in Event

Our managing editors John Delaney and Aaron Rubin will be attending SXSW Interactive on March 11th through 16th, 2016. In connection with the event, John and Aaron have proposed two presentations based on topics that have been covered on this blog: Key Moments in Social Media Law and The Grand Unifying Theory of Today’s Tech Trends.

Socially Aware readers can help to ensure that these two topics end up on the SXSW Interactive agenda by voting for the presentations. To vote, simply click on the links provided above and create an account.

Voting is free and open to everyone—not just prospective SXSW Interactive attendees. But hurry! The polls close at 11:59 PM CDT on Friday, September 4th.

Also, if you plan to attend SXSW Interactive next year, please let us know—we’d love to get together with you in Austin.

The Top Social Media Platforms’ Efforts To Control Cyber-Harassment

Posted in Cyberbullying, First Amendment, Terms of Use

Social networking platforms have long faced the difficult task of balancing the desire to promote freedom of expression with the need to prevent abuse and harassment on their sites. One of social media’s greatest challenges is to make platforms safe enough that users are not constantly bombarded with offensive content and threats (a recent Pew Research Center study reported that 40% of Internet users have experienced harassment), yet open enough to foster discussion of complex, and sometimes controversial, topics.

This past year, certain companies have made some noteworthy changes. Perhaps most notably, Twitter, long known for its relatively permissive stance on content regulation, introduced automatic filtering and adopted stricter policy language addressing threats. Also, Reddit, long known as the “wild wild west” of the Internet, released a controversial new anti-harassment policy and took unprecedented proactive steps to regulate content by shutting down some of the site’s more controversial forums.

According to some observers, these changes came in response to several recent, highly publicized targeted threat campaigns, such as “Gamergate,” a campaign against female gaming journalists organized and perpetrated over Twitter, Reddit and other social media platforms. Below we summarize how some of the major social networking platforms are addressing these difficult issues.

Facebook

Facebook’s anti-harassment policy and community standards have remained relatively stable over time. However, in March 2015, Facebook released a redesign of its Community Standards page in order to better explain its policies and make it easier to navigate. This was largely a cosmetic change.

According to Monika Bickert, Facebook’s head of global policy management, “We’re just trying to explain what we do more clearly.”

The rules of conduct are now grouped into the following four categories:

  1. “Helping to keep you safe” details the prohibition of bullying and harassment, direct threats, criminal activity, etc.
  2. “Encouraging respectful behavior” discusses the prohibition of nudity, hate speech and graphic content.
  3. “Keeping your account and personal information secure” lays out Facebook’s policy on fraud and spam.
  4. “Protecting your intellectual property” encourages users to only post content to which they own the rights.

Instagram

After a series of highly publicized censorship battles, Instagram updated its community standards page in April 2015 to clarify its policies. These more-detailed standards for appropriate images posted to the site are aimed at curbing nudity, pornography and harassment.

According to Nicky Jackson Colaco, director of public policy, “In the old guidelines, we would say ‘don’t be mean.’ Now we’re actively saying you can’t harass people. The language is just stronger.”

The old guidelines comprised a relatively simple list of do’s and don’ts—for example, the policy regarding abuse and harassment fell under Don’t #5: “Don’t be rude.” By contrast, the new guidelines are much more fleshed out. They clearly state, “By using Instagram, you agree to these guidelines and our Terms of Use. We’re committed to these guidelines and we hope you are too. Overstepping these boundaries may result in a disabled account.”

According to Jackson Colaco, there was no one incident that triggered Instagram’s decision. Rather, the changes were catalyzed by continuous user complaints and confusion regarding the lack of clarity in content regulation. In policing content, Instagram has always relied on users to flag inappropriate content rather than actively patrolling the site for offensive material.

The language of the new guidelines now details several explicit rules, including the following:

  1. Nudity. Images of nudity and of an explicitly sexual nature are prohibited. However, Instagram makes an exception for “photos of post‑mastectomy scarring and women actively breastfeeding.”
  2. Illegal activity. Offering sexual services and buying or selling drugs (as well as promoting their recreational use) are prohibited. There is a zero-tolerance policy for sexual images of minors and revenge porn (including threats to post revenge porn).
  3. Harassment. “We remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages…We carefully review reports of threats and consider many things when determining whether a threat is credible.”

Twitter

Twitter has made two major rounds of changes to its content regulation policies in the past year. These changes are especially salient given that Twitter had previously been fairly permissive regarding content regulation.

In December 2014, Twitter announced a set of new tools to help users deal with harassment and unwanted messages. These tools allow users to more easily flag abuse and describe their reasons for blocking or reporting a Twitter account in more specific terms. While in the past Twitter had allowed users to report spam, the new tools allow users to report harassment, impersonations, self‑harm, suicide and, perhaps most interestingly, harassment on behalf of others.

Within “harassment,” Twitter allows the user to report multiple categories: “being disrespectful or offensive,” “harassing me” or “threatening violence or physical harm.” The new tools have also been designed to be more mobile-friendly.

Twitter also released a new blocked accounts page during this round of changes. This feature allows users to more easily manage the list of Twitter accounts they have blocked (rather than relying on third-party apps, as many did before). The company also changed how the blocking system operates. Before, blocked users could still tweet and respond to the blocker; they simply could not follow the blocker. Now, blocked accounts will not be able to view the profile of the blocker at all.

In April 2015, Twitter further cracked down on abuse and unveiled a new filter designed to automatically prevent users from seeing harassing and violent messages. For the first time, all users’ notifications are filtered for abusive content. This change came shortly after the leak of an internal memo in which CEO Dick Costolo remarked, “We suck at dealing with abuse and trolls on the platform, and we’ve sucked at it for years.”

The new filter is automatically turned on for all users and cannot be turned off. According to Shreyas Doshi, Twitter’s head of product management, “This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of the Tweet to other content that our safety team has in the past independently determined to be abusive.”
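To make Doshi’s description more concrete, the sketch below shows how a notification filter might combine a couple of the signals he mentions, such as account age and similarity to content previously judged abusive, into a single score. This is purely illustrative: the signal names, example phrases, weights and thresholds are invented for this sketch and are not Twitter’s actual implementation.

# Purely illustrative sketch of a signal-based notification filter; the signals,
# weights and thresholds below are invented and are not Twitter's actual system.
from datetime import datetime, timezone

# Hypothetical examples of phrases a safety team has previously judged abusive.
KNOWN_ABUSIVE_PHRASES = {"you should die", "kill yourself"}

def similarity_signal(text, phrases=KNOWN_ABUSIVE_PHRASES):
    """Crude similarity signal: the fraction of known abusive phrases found in the text."""
    text = text.lower()
    return sum(phrase in text for phrase in phrases) / len(phrases)

def abuse_score(tweet_text, account_created):
    """Combine two signals: how new the account is and how closely the text matches known abuse."""
    # account_created must be a timezone-aware datetime
    age_days = (datetime.now(timezone.utc) - account_created).days
    new_account_signal = 1.0 if age_days < 7 else 0.0
    return 0.4 * new_account_signal + 0.6 * similarity_signal(tweet_text)

def show_in_notifications(tweet_text, account_created):
    """Suppress a mention from the notifications timeline if its score crosses a threshold."""
    return abuse_score(tweet_text, account_created) < 0.5

A real system would rely on many more signals and on models trained against the safety team’s past decisions, but the basic shape, scoring the content and then filtering notifications that exceed a threshold, is the same.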

Beyond the filter, Twitter also made two changes to its harassment policies. First, the rules against threatening language have been strengthened. While “direct, specific threats of violence against others” were always banned, that prohibition is now much broader and includes “threats of violence against others or promot[ing] violence against others.”

Second, users who breach the policies will now face heavier sanctions. Previously, the only options were to either ban an account completely or take no action (resulting in much of the threatening language not being sanctioned at all). Now, Twitter will begin to impose temporary suspensions for users who violate the rules but whose violation does not warrant a full ban.

Moreover, since Costolo’s statements, Twitter has tripled the size of its team handling abuse reports and added rules prohibiting revenge porn.

Reddit

In March 2015, Reddit prohibited the posting of several types of content, including anything copyrighted or confidential, violent personalized images and unauthorized photos or videos of nude or sexually excited subjects.

Two months later, Reddit unveiled a controversial new anti-harassment policy that represented a significant shift from Reddit’s long‑time reputation as an online free-for-all. The company announced that it was updating its policies to explicitly ban harassment against users. Some found this move surprising, given Reddit’s laissez-faire reputation and the wide range of subject matter and tone it had previously allowed to proliferate on its site (for example, Reddit only expressly banned sexually explicit content involving minors three years ago after much negative PR).

In a blog post titled “promote ideas, protect people,” Reddit announced it would be prohibiting “attacks and harassment of individuals” through the platform. According to Reddit’s former CEO Ellen Pao, “We’ve heard a lot of complaints and found that even our existing users were unhappy with the content on the site.”

In March 2015, Reddit also moved to ban the posting of nude photos without the subjects’ consent (i.e., revenge porn). In discussing the changes in content regulation, Alexis Ohanian, executive chairman, said, “Revenge porn didn’t exist in 2005. Smartphones didn’t really exist in 2005…we’re taking the standards we had 10 years ago and bringing them up to speed for 2015.” Interestingly, rather than actively policing the site, Reddit will rely on members to report offensive material to moderators.

Reddit’s new policy defines harassment as: “systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that Reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them.”

As a result of the new policies, Reddit permanently removed five subreddits (forums) from the site: two dedicated to fat-shaming, one to racism, one to transphobia and one to harassing members of a progressive website. Apart from the expected criticisms of censorship, some commentators have condemned Reddit for the seemingly random selection of these specific subreddits. Even though these subreddits have been removed, many other offensive subreddits remain, including a violently anti-black subreddit and one dedicated to suggestive pictures of minors.

Google

In June 2015, Google took a major step in the battle against revenge porn, a form of online harassment that involves publishing private, sexually explicit photos of someone without that person’s consent. Adding to the damage, such photos may appear in Google search results for the person’s name. Google has now announced that it will remove such images from search results when the subject of the photo requests it.

Amit Singhal, senior vice president of Google Search, stated, “This is a narrow and limited policy, similar to how we treat removal requests for other highly sensitive personal information, such as bank account numbers and signatures, that may surface in our search results.” Some have questioned, though, why it took so long for Google to treat private sexual information similarly to other private information.

As social media grows up and becomes firmly ensconced in the mainstream, it is not surprising to see the major players striving to make their platforms safer and more comfortable for the majority of users. It will be interesting, though, to watch as the industry continues to wrestle with the challenge of instituting these new standards without overly restricting the free flow of content and ideas that made social media so appealing in the first place.

Status Updates: Appeals court upholds anti-cyberbullying law; better marketing through neural networks; restaurant owner turns the tables on Yelp critic

Posted in Cyberbullying, Defamation, First Amendment, Marketing, Section 230 Safe Harbor, Status Updates

Cruel intentions. Laws seeking to regulate speech on the Internet must be narrowly drafted to avoid running afoul of the First Amendment, and limiting such a law’s applicability to intentional attempts to cause damage usually improves the law’s odds of meeting that requirement. Illustrating the importance of intent in free speech cases, an anti-revenge-porn law in Arizona was recently scrapped, in part because it applied to people who posted nude photos to the Internet irrespective of the poster’s intent. Now, the North Carolina Court of Appeals has held that an anti-cyberbullying law is constitutional because it, among other things, only prohibits posts to online networks that are made with “the intent to intimidate or torment a minor.” The court issued the holding in a lawsuit brought by a 19-year-old who was placed on 48 months’ probation and ordered to stay off social media websites for a year for having contributed to abusive social media posts that targeted one of his classmates. The teen’s suit alleged that the law he was convicted of violating, N.C. Gen. Stat. §14-458.1, is overbroad and unconstitutional. Upholding his conviction, the North Carolina Court of Appeals held, “It was not the content of Defendant’s Facebook comments that led to his conviction of cyberbullying. Rather, his specific intent to use those comments and the Internet as instrumentalities to intimidate or torment (a student) resulted in a jury finding him guilty under the Cyberbullying Statute.”

Positive I.D. The tech world recently took a giant step forward in the quest to create computers that accurately mimic human sensory and thought processes, thanks to Fei-Fei Li and Andrej Karpathy of the Stanford Artificial Intelligence Laboratory. The pair developed a program that identifies not just the subjects of a photo, but the action taking place in the image. Called NeuralTalk, the software captioned a picture of a man in a black shirt playing guitar, for example, as “man in black shirt is playing guitar,” according to The Verge. The program isn’t perfect, the publication reports, but it’s often correct and is sometimes “unnervingly accurate.” Potential applications for artificial “neural networks” like Li’s obviously include giving users the ability to search, using natural language, through image repositories both public and private (think “photo of Bobby getting his diploma at Yale”). But the technology could also be used in potentially life-saving ways, such as in cars that can warn drivers of potential hazards like potholes. And, of course, such neural networks would be incredibly valuable to marketers, allowing them to identify potential consumers of, say, sports equipment by searching through photos posted to social media for people using products in that category. As we discussed in a recent blog post, the explosive growth of the Internet of Things, wearables, big data analytics and other hot new technologies is being fueled at least in part by marketing uses—are artificial neural networks the next big thing to be embraced by marketers?
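For readers curious about the mechanics behind captioning systems like NeuralTalk, the general pattern is an encoder that turns an image into a feature vector and a decoder that emits a caption one word at a time. The toy sketch below uses random weights and an invented eight-word vocabulary, so its output is nonsense; it is meant only to show the encode-then-decode loop, not NeuralTalk’s actual code.

# Toy encoder-decoder captioner in the spirit of NeuralTalk-style systems.
# Weights are random and the vocabulary is invented, so the output is meaningless;
# the point is the structure: image features in, one word out per decoding step.
import numpy as np

VOCAB = ["<end>", "man", "in", "black", "shirt", "is", "playing", "guitar"]
FEATURE_DIM, HIDDEN_DIM = 16, 8

rng = np.random.default_rng(0)
W_img = rng.normal(size=(HIDDEN_DIM, FEATURE_DIM))   # projects image features to the decoder state
W_emb = rng.normal(size=(HIDDEN_DIM, len(VOCAB)))    # feeds the previously chosen word back in
W_step = rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM))   # updates the decoder state each step
W_out = rng.normal(size=(len(VOCAB), HIDDEN_DIM))    # maps the state to a score for every word

def caption(image_features, max_words=10):
    hidden = np.tanh(W_img @ image_features)          # "encode" the image
    words = []
    for _ in range(max_words):
        idx = int(np.argmax(W_out @ hidden))          # greedily pick the highest-scoring word
        if VOCAB[idx] == "<end>":
            break
        words.append(VOCAB[idx])
        hidden = np.tanh(W_step @ hidden + W_emb[:, idx])  # advance the state with the chosen word
    return " ".join(words)

print(caption(rng.normal(size=FEATURE_DIM)))          # caption a fake "image" feature vector

In a real captioner, the random matrices would be replaced by a trained convolutional image encoder and a recurrent language model; that training is what lets the system pair specific objects (a man, a guitar) with the action being performed.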

A dish best served cold. Restaurants and other service providers are often without effective legal recourse against Yelp and other “user review” websites when they’re faced with negative—even defamatory—online reviews because Section 230 of the Communications Decency Act (CDA), 47 U.S.C. § 230, insulates website operators from liability for content created by users (though there are, of course, exceptions). That didn’t stop the owner of KC’s Rib Shack in Manchester, New Hampshire, from exacting revenge, however, when an attendee of a 20-person birthday celebration at his restaurant wrote a scathing review on Yelp and Facebook admonishing the owner for approaching the party’s table “and very RUDELY [telling the diners] to keep quiet [since] others were trying to eat.” The review included “#boycott” and some expletives. In response, the restaurant’s owner, Kevin Cornish, replied to the self-identified disgruntled diner’s rant with his own review—of her singing. Cornish reminded the review writer that his establishment is “a family restaurant, not a bar,” and wrote, “I realize you felt as though everybody in the entire restaurant was rejoicing in the painful rendition of Bohemian Rhapsody you and your self-entitled friends were performing, yet that was not the case.” He encouraged her to continue her “social media crusade,” including the hashtag #IDon’tNeedInconsiderateCustomers. Cornish’s retort has so far garnered close to 4,000 Facebook likes and has been shared on Facebook more than 400 times.

“Notes” Update Shows Facebook’s Continued Efforts to Increase Already Impressive User Engagement

Posted in Marketing

As the number of social media platforms continues to grow, users’ online activity is becoming increasingly divided, requiring social media companies to prove to potential advertisers that they not only have a lot of registered users, but that those users are engaged and spending a lot of time on their platforms.

Having accumulated nearly 230 billion minutes of user-time, Facebook is several lengths ahead of the competition in the user engagement race; its users have spent 18x more time on the platform than users of the next-biggest social network, Instagram (which, of course, is owned by Facebook). Despite its clear lead, Facebook seems to be keeping user engagement at the top of its priority list, introducing features that reduce its users’ need to access resources outside the Facebook ecosystem.

Take, for example, Facebook’s introduction of “native video.” Native videos are videos that are posted directly to Facebook rather than first being uploaded to another site such as YouTube and then shared on Facebook as links. Native videos on Facebook have been shown to significantly outperform videos shared on Facebook from other sites in terms of engagement.

A Facebook feature known as auto-play further increases user engagement by ensuring that Facebook native videos—and only Facebook native videos—automatically play as users scroll down their newsfeeds. After one quarter with the auto-play in place, Facebook experienced a 58% increase in engagement.

Now, by testing an update of its “Notes” feature, Facebook may be indicating a desire to keep its users from venturing off the platform to use third-party blogging platforms and personal websites, too.

Before 2011, when Facebook statuses were limited to 500 characters, the Notes feature allowed Facebook users to create longer posts that, like their photo albums and favorite book choices, would always be attached to their profiles. Since Facebook significantly loosened its character limits, however, the purpose of Notes has been unclear.

But Facebook recently updated Notes to allow users to create posts with a more sophisticated look and an accompanying picture. The updated Notes was described by a Facebook spokesperson as the company’s attempt “to make it easier for people to create and read longer-form stories on Facebook.” Some social media industry observers have suggested that this update is intended to provide users with an alternative to Medium, a blogging platform favored by those in the technology and media industries.

“But that might be too early an assessment,” writes Motherboard’s Clinton Nguyen, “as [the new Notes feature is] a work in progress, the revamp is only available for a handful of users.”

Nguyen is right; it’s too early to tell whether social media enthusiasts will want to create and read lengthy personal essays on Facebook. One thing is for sure, however: Facebook is not letting up on its efforts to remain the user-engagement king.

Social Media E-Discovery: Are Your Facebook Posts Discoverable in Civil Litigation?

Posted in Discovery, E-Discovery, Litigation

Judge Richard J. Walsh began his opinion in Largent v. Reed with the following question: “What if the people in your life want to use your Facebook posts against you in a civil lawsuit?” With the explosive growth of social media, judges have had to confront this question more and more frequently. The answer to this question is something you’ll hear quite often from lawyers: “It depends.”

Courts generally have held that there can be no reasonable expectation of privacy in your profile when Facebook’s homepage informs you that “Facebook helps you connect and share with the people in your life.” Even when you decide to limit who can see your photos or read your status updates, that information still may be discoverable if you’ve posted a picture or updated a status that is relevant to a lawsuit in which you’re involved. The issue, then, is whether the party seeking access to your social media profile has a legitimate basis for doing so.

If you’ve updated your Facebook status to brag about your awesome new workout routine after claiming serious and permanent physical injuries sustained in a car accident—yes, that information is relevant to a lawsuit arising from that accident and will be discoverable. The plaintiff in Largent v. Reed learned that lesson the hard way when she did just that and the court ordered her to turn over her Facebook log-in information to the defense counsel. On the other hand, your Facebook profile will not be discoverable simply because your adversary decides he or she wants to go on a fishing expedition through the last eight years of your digital life.

Courts in many jurisdictions have applied the same standard to decide whether a litigant’s Facebook posts will be discoverable: The party seeking your posts must show that the requested information may reasonably lead to the discovery of admissible evidence.

For example, the plaintiff in Zimmerman v. Weis Markets, Inc. claimed that he sustained permanent injuries while operating a forklift—and then went on to post on the public portion of his Facebook page that his interests included “ridin” and “bike stunts.” The court determined that his public posts placed the legitimacy of his damages claims in controversy and that his privacy interests did not outweigh the discovery requests.

In contrast, the plaintiff in Tompkins v. Detroit Metropolitan Airport, a slip-and-fall case, claimed back injuries in connection with an accident at the Detroit Metropolitan Airport. The defendant checked the plaintiff’s publicly available Facebook photos (i.e., photos not subject to any of Facebook’s available privacy settings or restrictions) and stumbled upon photos of the plaintiff holding a small dog and pushing a shopping cart. The court determined that these photos were in no way inconsistent with the plaintiff’s injury claims, stating that if “the Plaintiff’s public Facebook page contained pictures of her playing golf or riding horseback, Defendant might have a stronger argument for delving into the nonpublic section of her account.”

The Tompkins court recognized that the plaintiff’s information was not discoverable because parties do not “have a generalized right to rummage at will through information” a person has posted. Indeed, the defendants sought the production of the plaintiff’s entire Facebook account. Their overbroad and overreaching discovery request was—and is—common among parties seeking access to their opponents’ Facebook data.

In response to these overbroad requests, courts routinely deny motions to compel the production of a person’s entire Facebook profile because such requests are nothing more than fishing expeditions seeking what might be relevant information. As the court in Potts v. Dollar Tree Stores, Inc. stated, the defendant seeking Facebook data must at least “make a threshold showing that publicly available information on [Facebook] undermines the Plaintiff’s claims.”

The Tompkins and Potts decisions mark important developments in Facebook e-discovery cases. They establish that a person’s entire Facebook profile is not discoverable merely because a portion of that profile is public. In turn, Facebook’s privacy settings can provide at least some protection against discovery requests—assuming that the user has taken care not to publicly display photos that blatantly contradict his or her legal claims.

When it is shown that a party’s Facebook history should be discoverable, however, the party must make sure not to tamper with that history. Deactivating your Facebook account to hide evidence can invite the ire of the court. Deleting your account outright can even result in sanctions. The takeaway is that courts treat social media data no differently than any other type of electronically stored information; what you share with friends online may also be something you share with your adversary—and even the court.

Federal District Court: “Browsewrap” Terms and Conditions Provide Sufficient Notice to Defeat False Advertising Class Action

Posted in E-Commerce, Fraud, Terms of Use

Websites sometimes present their terms of use (“TOU”) to users merely by including a link to those TOU on the website without requiring users to affirmatively accept the terms by, for example, checking a box or clicking an “I accept” button. As we have written previously, courts tend to view such website TOU presentations, which have become somewhat misleadingly known as “browsewrap agreements,” with disfavor when determining whether a TOU constitutes an enforceable contract between the website operator and a user. According to a recent federal district court opinion, however, browsewrap TOU might be sufficient to help websites achieve another legal end: providing sufficient notice to defeat a false advertising claim based on an allegedly fraudulent omission.

In the case, Handy v. LogMeIn, Inc., the U.S. District Court for the Eastern District of California held that a software vendor’s online terms and conditions provided notice that the company might discontinue its app, and that such notice was sufficient to defeat a customer’s claims under California’s false advertising and unfair competition laws, regardless of whether the customer had affirmatively accepted the TOU.

The defendant, LogMeIn, Inc., sells software for accessing computer files remotely from separate computers or mobile devices. LogMeIn previously provided its software as two separate products: LogMeInFree, a free service that allowed users to log into remote computers from a desktop or laptop; and Ignition, a paid service that allowed users to log into computers using mobile devices. Before 2011, the plaintiff, Darren Handy, downloaded LogMeInFree and then paid for Ignition. In 2014, LogMeIn introduced a new paid product called “LogMeInPro,” which merged the features of LogMeInFree and Ignition. Eventually, LogMeIn posted a message on its website stating it would begin migrating users of LogMeInFree and Ignition to the new platform while ending support and maintenance on the older platforms. This required users of LogMeInFree and Ignition to pay for LogMeInPro in order to receive continued support and maintenance for Ignition and to continue to use the functionality previously provided for free as part of LogMeInFree.

In response, Mr. Handy brought a class action suit alleging he would never have purchased Ignition if he had known that the company would discontinue support for Ignition or require additional payment for continued access to the LogMeInFree functionality. His suit claimed that LogMeIn violated California Business and Professions Code §§ 17200 and 17500 by fraudulently failing to disclose that the company might discontinue support and change its pricing model for the software.  LogMeIn argued, among other things, that its online TOU reserved the right for LogMeIn “to modify or discontinue any Product for any reason or no reason.” But Handy argued that this statement was not binding on him because he never affirmatively accepted the TOU.

The court disagreed, however, holding that “whether the Terms and Conditions constituted an enforceable contract is irrelevant to whether the Terms and Conditions related to LogMeInFree provided notice to prospective purchasers of the Ignition app that LogMeInFree could be discontinued.” The court went on to note that, while LogMeIn’s TOU may not have been “forced on Plaintiff through a clickwrap,” the TOU nonetheless showed that LogMeIn had “publish[ed] the fact that it reserved the right to terminate the free app, LogMeInFree.” Therefore, the court held that there was “an insufficient showing that information related to the future termination of LogMeInFree constituted a material omission when selling the Ignition app.”

Clients often ask us whether a “browsewrap” TOU serves any purpose at all, in light of the fact that courts are often disinclined to construe such TOU presentations as creating an enforceable contract. Handy v. LogMeIn, Inc. shows that, in at least some circumstances, the answer is yes: even if a browsewrap does not constitute a contract, it may serve a useful purpose by providing legally significant notices to users.

Washington State Court Refuses to Unmask Anonymous Online Reviewer

Posted in First Amendment, Litigation, Online Reviews

In a precedent-setting ruling in Thomson v. Doe, the Washington Court of Appeals refused to grant a motion to compel brought by a defamation plaintiff who had subpoenaed the lawyer-review site Avvo.com seeking the identity of an anonymous online reviewer, holding that, to unmask an anonymous defendant, a defamation “plaintiff must do more than simply plead his case.”

The plaintiff in the case, Florida divorce attorney Deborah Thomson, filed a defamation suit against an anonymous poster of Avvo reviews. Claiming to be a former client, the reviewer stated that Thomson, among other things, failed to live up to her fiduciary duties, failed to subpoena critical documents, and failed to adequately represent the reviewer’s interests.

After Avvo refused to comply with Thomson’s subpoena seeking the anonymous reviewer’s identity, Thomson moved to compel compliance with the subpoena. The Washington State trial court denied Thomson’s motion and she appealed, presenting the Washington State Court of Appeals with what the court acknowledged was an issue of first impression in the Evergreen State: What evidentiary standard should a court apply when deciding a defamation plaintiff’s motion to reveal an anonymous speaker’s identity?

The court began its analysis by describing the holdings of the two leading cases on the issue: New Jersey’s Dendrite Int’l, Inc. v. Doe No. 3, which held that, to unmask anonymous defendants in defamation cases, the plaintiff must “produce sufficient evidence supporting each element of its cause of action on a prima facie basis”; and Delaware’s Doe v. Cahill, which established that plaintiffs seeking to uncover the identities of anonymous speakers/defendants must clear a slightly higher evidentiary threshold—proof that their claims would survive a summary judgment motion.

The court also discussed the one court that “has significantly strayed from Dendrite and Cahill”: the Virginia Court of Appeals. In Yelp, Inc. v. Hadeed Carpet, another case we recently covered at Socially Aware, the Virginia Court of Appeals “declined to adopt either test, instead applying a state statute that required a lower standard of proof.” Specifically, Hadeed held that, in the Thomson court’s words, “a defamation plaintiff seeking an anonymous speaker’s identity must establish a good faith basis to contend that the speaker committed defamation.”

The Thomson court then cited with approval the Ninth Circuit’s approach in In re Anonymous Online Speakers. In that case, the Ninth Circuit determined that, when deciding whether to require disclosure of an anonymous speaker’s identity, the nature of the speech at issue should inform the choice of evidentiary standard. Holding that an online review of an attorney’s services is not merely commercial speech—which, the court explained, would warrant the lowest level of protection—the court rejected the Hadeed (good faith) standard. Since the Avvo review did not qualify as political speech either, the court also discounted the highest level of protection. The court then determined that the “motion to dismiss standard” was “inadequate to protect this level of speech” because, in a notice pleading state like Washington, “a defamation plaintiff would need only to allege the elements of the claim, without supporting evidence.”

Finally, the Thomson court addressed the “two remaining standards”: prima facie (Dendrite) and summary judgment (Cahill). The court ultimately decided that the prima facie standard was appropriate because the anonymous reviewer had yet to appear in the case and the plaintiff, therefore, was not in a position to file a summary judgment motion.

The court nevertheless observed that “the important feature” of both the prima facie and the summary judgment standards “is to emphasize that the plaintiff must do more than simply plead his case.” In other words, both standards require “supporting evidence … before the speaker is unmasked.” Under that standard, the court held, “Thomson’s motion must fail. As Thomson freely admits, she presented no evidence to support her motion.”

Hot Off the Press: The July/August Issue of Our Socially Aware Newsletter Is Now Available

Posted in Bankruptcy, Cloud Computing, Copyright, First Amendment, FTC, Infographic, Internet of Things, IP, Livestreaming, Online Reviews, Privacy, Trademark, Wearable Computers

The latest issue of our Socially Aware newsletter is now available here.

In this issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we present a “grand unifying theory” of today’s leading technologies and the legal challenges these technologies raise; we discuss whether hashtags can be protected under trademark law; we explore the status of social media accounts in bankruptcy; we examine the growing tensions between content owners and users of livestreaming apps like Meerkat and Periscope; we highlight a recent discovery dispute involving a deactivated Facebook account; we discuss a bill before Congress that would protect consumers’ rights to post negative reviews on websites like Yelp; and we take a look at the Federal Trade Commission’s crackdown on in-store tracking activities.

All this—plus an infographic exploring the popularity of livestreaming sites Meerkat and Periscope.

Read our newsletter.

FCC Clarifies Its Interpretations of the Telephone Consumer Protection Act, Provoking Strong Objections From the Business Community

Posted in Compliance, FCC

On July 10, 2015, the Federal Communications Commission (FCC) released a 140-page Omnibus Declaratory Ruling and Order in response to more than two dozen petitions from businesses, attorneys general, and consumers seeking clarity on how the FCC interprets the Telephone Consumer Protection Act (TCPA). As noted in vigorous dissents by Commissioners Pai and O’Rielly, several of the rulings seem likely to increase TCPA litigation and raise a host of compliance issues for businesses engaged in telemarketing or other practices that involve calling or sending text messages to consumers.

Since the FCC issued the order, trade associations and companies have filed multiple petitions for review in courts of appeals challenging the order (for example, see here and here). It will thus ultimately be up to the courts of appeals to decide whether the FCC’s new interpretations of the TCPA are reasonable.

What is an “Automatic Telephone Dialing System”?

The TCPA generally prohibits certain calls to cell phones made with an Automatic Telephone Dialing System (ATDS). As defined by statute, an ATDS is “equipment which has the capacity (A) to store or produce telephone numbers to be called, using a random or sequential number generator; and (B) to dial such numbers.” In the absence of statutory or FCC guidance, some courts have construed “capacity” broadly to encompass any equipment that is capable of automatically dialing random or sequential numbers, even if it does not actually do so, or even if it must be altered to make it capable of doing so.

In light of these decisions, a number of entities asked the FCC to clarify that equipment does not qualify as an ATDS unless it has the present capacity to generate and dial random or sequential numbers.

In its ruling, the FCC found that an ATDS includes equipment with either the present or the potential capacity to generate and dial random or sequential numbers, even if realizing that potential would require modification of the equipment or additional software. An ATDS also includes equipment with the present or potential capacity to dial numbers from a database of numbers.

The FCC, however, did state that “there must be more than a theoretical potential that the equipment could be modified to satisfy the [ATDS] definition.”  Per this limitation, the FCC explicitly excluded from the definition of an ATDS a “rotary-dial phone.”

Consent of the Current Subscriber or User

The TCPA exempts from liability calls to mobile phones “made with the prior express consent of the called party.” It does not, however, define “called party” for purposes of this provision, and courts have divided over how to construe that term.

Some courts have construed the term to mean the actual subscriber to the called mobile number at the time of the call, while others have construed it to mean the intended recipient of the call. The distinction is critical because consumers often give up their mobile phone numbers and those numbers are reassigned to other people, meaning that the actual subscriber and the intended recipient may not be the same person.

Faced with lawsuits from owners of such reassigned numbers, a number of entities petitioned the FCC, asking it to clarify that calls to reassigned mobile numbers were not subject to TCPA liability where the caller was unaware of the reassignment, and to adopt the interpretation that “called party” means the intended recipient of the call.

In response to petitions seeking clarity on this issue, the FCC ruled that the “called party” for purposes of determining consent under the TCPA’s mobile phone provisions is “the subscriber, i.e., the consumer assigned the telephone number dialed and billed for the call, or the non-subscriber customary user of a telephone number included in a family or business calling plan.”

Consistent with its interpretation of “called party,” the FCC further ruled that where a wireless phone number has been reassigned, the caller must have the prior express consent of the current subscriber (or current non-subscriber customary user of the phone), not the previous subscriber. Businesses, however, may have properly obtained prior express consent from the previous wireless subscriber and will not know that the number has been reassigned. The FCC thus allows a business to make one additional call to a reassigned wireless number without incurring liability, provided the business did not know the number had been reassigned and had a reasonable basis to believe the business had the intended recipient’s consent.

Is Consent Revocable?

The TCPA is silent as to whether, or how, a called party can revoke his or her prior express consent to be called. Given that silence, one entity petitioned the FCC to request that the Commission clarify that prior consent to receive non-telemarketing calls and text messages was irrevocable or, in the alternative, set forth explicit methods of revocation. In response, the FCC ruled that consent is revocable (with regard to both telemarketing and non-telemarketing calls), and that such revocation may be made “in any manner that clearly expresses a desire not to receive further messages.” Consumers may use “any reasonable method, including orally or in writing,” to communicate that revocation and callers may not designate an exclusive means of revocation.

The “Urgent Circumstances” Exemption to the Consent Requirement

Notwithstanding the FCC’s rulings regarding prior express consent, the FCC took this opportunity to create several new exemptions to that requirement with regard to certain non-marketing calls made to cellular phones. The FCC exempted the following types of calls:

  • Calls concerning “transactions and events that suggest a risk of fraud or identity theft”;
  • Calls concerning “possible breaches of the security of customers’ personal information”;
  • Calls concerning “steps consumers can take to prevent or remedy harm caused by data security breaches”;
  • Calls concerning “actions needed to arrange for receipt of pending money transfers”; and
  • Calls “for which there is exigency and that have a healthcare treatment purpose, specifically: appointment and exam confirmations and reminders, wellness checkups, hospital pre-registration instructions, pre-operative instructions, lab results, post-discharge follow-up intended to prevent readmission, prescription notifications, and home healthcare instructions.”

The FCC reasoned that all of these types of calls involve urgent circumstances in which quick, timely communication with the consumer is critical to prevent financial harm or to provide health care treatment. Although prior express consent is not required for such calls, they remain subject to a number of limitations. First and foremost, the consumer must not be charged for the calls. Further, such calls are limited to no more than three calls over a three-day period, must be concise (generally one minute or 160 characters, if sent via text message), cannot include marketing or advertising content (or financial content, in the case of healthcare calls), and must provide a mechanism for customer opt-out.

Other Consent Issues

In addition to the points above concerning consent, the FCC also ruled on a number of specific consent issues, described here in brief:

  • Provision of Phone Number to a Health Care Provider. Clarifying an earlier ruling, the FCC ruled that the “provision of a phone number to a healthcare provider constitutes prior express consent for healthcare calls subject to HIPAA by a HIPAA-covered entity and business associates acting on its behalf, as defined by HIPAA, if the covered entities and business associates are making calls within the scope of the consent given, and absent instructions to the contrary.”
  • Third-Party Consent on Behalf of Incapacitated Patients. The FCC ruled that consent to contact an incapacitated patient may be obtained from a third-party intermediary, although such consent terminates once the patient is capable of consenting on his or her behalf.
  • Ported Phone Numbers. In response to a request for clarification, the FCC ruled that porting a telephone number from wireline service (i.e., a land line) to wireless service does not revoke prior express consent.
  • Consent Obtained Prior to the Current Rules. In response to petitions requesting relief from or clarification of the prior-express-written-consent rule that went into effect on October 16, 2013, the FCC ruled that “telemarketers should not rely on a consumer’s written consent obtained before the current rule took effect if that consent does not satisfy the current rule.”
  • Consent via Contact List. In response to a petition concerning the use of smartphone apps to initiate calls or text messages, the FCC ruled that the mere fact that a contact may appear in a user’s contact list or address book does not establish consent to receive a message from the app platform.
  • On Demand Text Offers. In response to a petition concerning so-called “on demand text offers,” the FCC ruled that such messages do not violate the TCPA as long as they (1) are requested by the consumer; (2) are a one-time message sent immediately in response to that request; and (3) contain only the requested information with no other marketing information. Under such conditions, the messages are presumed to be within the scope of the consumer’s consent.

Calls Placed by Users of Apps and Calling Platforms

The FCC also addressed a number of petitions seeking guidance as to who “makes” or “initiates” a call under the TCPA (and is thus liable for TCPA violations) in a variety of scenarios involving calls or text messages made by smartphone apps and calling platforms.

The FCC offered no clear rule, and instead held that to answer this question “we look to the totality of the facts and circumstances surrounding the placing of a particular call to determine: 1) who took the steps necessary to physically place the call; and 2) whether another person or entity was so involved in placing the call as to be deemed to have initiated it, considering the goals and purposes of the TCPA.”

The FCC noted that relevant factors could include “the extent to which a person willfully enables fraudulent spoofing of telephone numbers or assists telemarketers in blocking Caller ID” as well as “whether a person who offers a calling platform service for the use of others has knowingly allowed its client(s) to use that platform for unlawful purposes.”

Authorization of “Do Not Disturb” Technology

Finally, at the request of petitioning state attorneys general, the FCC affirmed that nothing in the Communications Act or FCC rules or orders prohibits telephone carriers or VoIP providers from implementing call-blocking technology to stop unwanted “robocalls.” The FCC explained that such carriers “may legally block calls or categories of calls at a consumer’s request if available technology identifies incoming calls as originating from a source that the technology, which the consumer has selected to provide this service, has identified.”  The FCC “strongly encourage[d]” carriers to develop such technology to assist consumers.

Status Updates: AZ’s anti-revenge-porn law scrapped; civil rights claim against blogging prosecutor dismissed; Match buys PlentyOfFish

Posted in First Amendment, Status Updates

There oughta be a law? As we’ve reported previously, states all around the country have enacted laws that criminalize the posting of revenge porn—nude photographs published without the subject’s consent, often by an ex-lover seeking retribution. To avoid running afoul of the First Amendment, these laws are typically fairly limited in scope and provide for relatively minor penalties. California’s anti-revenge-porn law, for example, categorizes posting revenge-porn as a misdemeanor, and contains several exceptions. Among other things, California’s law only applies if the poster intended to cause the victim emotional distress—a characteristic that improves the law’s chances of surviving a First Amendment challenge. Arizona’s anti-revenge porn law, in contrast, contains no such limitation and provides that violations constitute a felony. As a result, the ACLU argued that Arizona Revised Statute §13-1425 could lead to a felony conviction for posting a photograph “even if the person depicted had no expectation that the image would be kept private and suffered no harm,” such as in the case of “a photojournalist who posted images of victims of war or natural disaster.” Based on such alleged overreach, a group of Arizona booksellers, publishers, librarians and photographers filed Antigone Books v. Brnovich—a lawsuit to halt enforcement of the Arizona law. A joint final settlement between the Arizona attorney general and the plaintiffs in that case resulted in a July 2015 federal court order that does, in fact, scrap §13-1425.  In her discussion of the settlement, an ACLU staff attorney said that the organization nevertheless views revenge porn as a serious concern. She lauded social media platforms’ and online search companies’ decisions to heed revenge-porn victims’ take-down requests as victories “achieved without a new criminal law and without a new inroad against the First Amendment.”

Blogs of war. The U.S. Court of Appeals for the Ninth Circuit affirmed the dismissal of a civil rights claim brought by a woman who was the subject of negative articles and social media updates written by a Los Angeles county prosecutor and posted to the prosecutor’s personal blog and Twitter account. According to the opinion, the prosecutor, Patrick Frey, posted to his blog eight unfavorable articles about the plaintiff, Nadia Naffe, and “tweeted several dozen threatening and harassing statements” about her. The blog posts and tweets called Naffe, among other things, a “smear artist” and a “liar,” and accused Naffe of having filed frivolous lawsuits against James O’Keefe, a friend of Frey’s with whom Naffe had had a falling out. The Ninth Circuit held that Frey had not violated Naffe’s First Amendment right to petition the government for redress of grievances, as asserted under 42 U.S.C. § 1983, because the posts and tweets weren’t related to his work as a county prosecutor. The court noted, among other things, that Frey’s disparaging comments were sent from Frey’s personal Twitter account and blog, both of which specify that they reflect Frey’s “personal opinions” and do not contain statements made in an “official capacity.” The Ninth Circuit also noted that the posts and tweets were time-stamped outside of Frey’s office hours.

A good catch. While the options for online dating hopefuls continue to multiply—there are now dating services specifically for farmers, people living gluten-free lifestyles and fire-fighter aficionados—it seems many of the most popular personals sites are merging under the same umbrella. IAC/InterActiveCorp’s Match Group subsidiary, the owner of Match.com, Tinder and OKCupid, among others, just snapped up PlentyOfFish for $575 million. PlentyOfFish, a British Columbia-based dating site that’s free to use but offers upgrades for a fee, currently has 3.6 million active daily users. Its founder and creator, 36-year-old Markus Frind, built the site without any venture capital funding and still owns 100% of it. IAC, meanwhile, owned 20% of the online dating market even before the PlentyOfFish acquisition, which is expected to close in the fourth quarter.