One year since agreeing with the European Commission to remove hate speech within 24 hours of receiving a complaint about it, Facebook, Microsoft, Twitter and YouTube are removing flagged content an average of 59% of the time, the EC reports.

The U.S. Court of Appeals for the Second Circuit held that a catering company violated the National Labor Relations Act when it fired an employee over a profane Facebook rant about his supervisor. The employee posted the rant after the supervisor admonished him for “chitchatting,” just days before he and his coworkers were scheduled to vote on unionizing.

The value of the digital currency Ether could surpass Bitcoin’s value by 2018, some experts say.

The Washington Post takes a look at how the NBA is doing a particularly good job of leveraging social media and technology in general to market itself to younger fans and international consumers.

A judge in Israel ruled in favor of a landlord who took down a rental ad based on his belief that a couple wanted to rent his apartment after they sent him a text message containing festive emoji and otherwise expressing interest in the rental. The landlord brought a lawsuit against the couple for backing out on the deal, and the court held the emoji in the couple’s text “convey[ed] great optimism.” The court further determined that, although the message “did not constitute a binding contract between the parties, [it] naturally led to the Plaintiff’s great reliance on the defendants’ desire to rent his apartment.” For a survey of U.S. courts’ treatment of emoji entered into evidence, read this post on Socially Aware.

The owner of a recipe site is suing the Food Network for copyright infringement, alleging that a video the network posted on its Facebook page ripped off her how-to video for snow globe cupcakes.

Twitter’s popularity with journalists has made it a prime target for media manipulators, The New York Times’s Farhad Manjoo reports. As a result, Manjoo claims, the microblogging platform played a key role in many of the past year’s biggest misinformation campaigns.

The Knight First Amendment Institute at Columbia University claims that the @realDonaldTrump Twitter account’s blocking of some Twitter users violates the First Amendment because it suppresses speech in a public forum protected by the Constitution.

Pop singer Taylor Swift, who pulled her back catalogue of music from free streaming services in 2014 saying the services don’t fairly compensate music creators, has now made her entire catalogue of music accessible via Spotify, Google Play and Amazon Music.

To encourage young people in swing constituencies to vote for Labour in the UK’s general election, some Tinder users turned their profiles over to a bot that sent other Tinder users between the ages of 18 and 25 automated messages asking if they were voting and focusing on key topics that would interest young voters.

A court ruled that a particular 98-character tweet wasn’t sufficiently creative to warrant protection under German copyright law.

Inspired by a recording posted to Snapchat of a physical attack on a 14-year-old boy, a California bill would make it illegal to “willfully record a video of the commission of a violent felony pursuant to a conspiracy with the perpetrator.”

Instagram just made it easier to identify sponsored content—something required by the FTC’s endorsement guides.

Thirty-five states and the District of Columbia now have laws that make it illegal to distribute sexually explicit photos online without the subject’s permission—content known as “revenge porn” or “non-consensual pornography.” This article explores the efficacy of those laws and other legal-recourse options.

A proposed state law would prohibit employers in Texas from discriminating against employees and prospective employees based on the political beliefs they express on their personal social media accounts (and in any other non-work-related place).

A drone helped New York City firefighters extinguish a building fire for the first time.

As part of its crusade against fake news, Facebook teamed up with non-partisan fact-checkers including Snopes to flag stories that are “disputed.”

The Wall Street Journal interviewed industry experts about the challenges and opportunities artificial intelligence will present for businesses.

A Facebook Messenger chatbot created by a 20-year-old helps refugees seeking asylum by asking them a series of jargon-free questions to determine which application they need to submit.

The addition of a live-streaming feature helped a dating app in China to generate $194.8 million in revenue during Q4 alone.

While we’re on the subject of dating, is flirting on LinkedIn a faux pas?

In the wake of a successful social media conference in San Francisco, Socially Aware co-editors John Delaney and Aaron Rubin are revved up and ready to chair (John) and present (Aaron and John) at another Practicing Law Institute (PLI) 2017 Social Media conference! This one will be held in New York City on Wednesday, February 15, and will be webcast.

Attendees and webcast listeners will learn from entirely new panels of industry experts, lawyers and regulators how to leverage social-media-marketing opportunities while minimizing their companies’ risks.

Topics to be addressed will include:

  • Key developments shaping social media law
  • Emerging best practices for staying out of trouble
  • Risk mitigation strategies regarding user-generated content and online marketing
  • Legal considerations regarding use of personal devices and other workplace issues

Other special features of the conference include:

  • Regulators panel: guidance on enforcement priorities for social media and mobile apps
  • In-house panel: practical tips for handling real-world issues
  • Potential ethical issues relating to the use of social media by attorneys

The conference will end with a networking cocktail reception—a great way to meet others who share your interest in social media, mobile apps and other emerging technologies.

Don’t miss this opportunity to get up-to-date information on the fast-breaking developments in the critical area of social media and mobile apps so that you can most effectively meet the needs of your clients.

For more information or to register, please visit PLI’s website here. We hope to see you there!

Social media is transforming the way companies interact with consumers. Learn how to make the most of these online opportunities while minimizing your company’s risk at Practicing Law Institute’s (PLI) 2017 Social Media conference, to be held in San Francisco and webcast on Thursday, February 2. The conference will be chaired by Socially Aware co-editor John Delaney, and our other co-editor, Aaron Rubin, will also be presenting at the event.


Google is cracking down on mobile pop-up ads by knocking down the search-result position of websites that use them.

The National Labor Relations Board decided that a social media policy Chipotle had in place for its employees violated federal labor law.

A group of lawmakers plans to introduce legislation that would criminalize revenge porn—explicit images posted to the web without the consent of the subject—at the federal level.

The Truth in Advertising organization sent the Kardashians a letter threatening to report them for violating the FTC’s endorsement guides. This isn’t the first time the legality of the famous family’s social media posts has been called into question. If only Kim would read our influencer marketing blog posts.

According to one study, 68% of publishers use editorial staff to create native ads.

Twitter launched a button that a company can place on its website to allow users to send a direct message to the company’s Twitter inbox.

The Center for Democracy & Technology criticized the Department of Homeland Security’s proposal to ask visa-waiver-program applicants to disclose their social media account information.

UK lawmakers issued a report calling on the big social media companies to do more to purge their platforms of hate speech and material that incites violence.

Social media is playing a bigger role in jury selection, Arkansas prosecutors and criminal defense lawyers say.

A day in the life of The Economist’s head of social media.

Seven things smart entrepreneurs do on Instagram.

Four ways to get busy people to read the email you send them.

Want to know how Facebook views your political leanings? Here’s the way to find out.

The latest issue of our Socially Aware newsletter is now available here.

In this issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we take a look at courts’ efforts to evaluate emoticons and emojis entered into evidence; we describe the novel way one court addressed whether counsel may conduct Internet research on jurors; we examine a recent decision finding that an employee handbook provision requiring employees to maintain a positive work environment violates the National Labor Relations Act; we discuss an FTC settlement highlighting legal risks in using social media “influencers” to promote products and services; we explore the threat ad blockers pose to the online publishing industry; we review a decision holding that counsel may face discipline for accessing opposing parties’ private social media accounts; we discuss a federal court opinion holding that the online posting of copyrighted material alone is insufficient to support personal jurisdiction under New York’s long-arm statute; and we summarize regulatory guidance applicable to social media competitions in the UK.

All this—plus an infographic illustrating the growing popularity of emoticons and emojis.

Read our newsletter.

The Internet is abuzz over the Facebook algorithm change. Here are the implications for marketers and publishers and for regular users.

U.S. Customs wants to start collecting social media account information from foreign travelers.

Court: Woman fired for posting to her Facebook page that she would quit her job before doing “something stupid like bash in” her co-worker’s “brains with a baseball bat” is entitled to unemployment benefits.

Does artificial intelligence have a “white guy” problem?

It appears consumers have an appetite for branded emoji: Harper’s Bazaar’s emoji keyboard was downloaded 30,000 times in 24 hours.

Meet YesJulz, Snapchat royalty.

And here’s a list of several more impossibly popular social media celebs who you’ve likely never heard of.

Seven things everyone should know about cybersecurity and social media.

Do anonymity and social mix? An interesting Q&A with the founder of Secret, an anonymous social messaging platform that was valued at $100 million before it shuttered as a result of bullying.

Hitting the tarmac this holiday weekend? Here’s a list of free or cheap travel apps worth downloading.


Deluged with an unprecedented amount of information available for analysis, companies in just about every industry are discovering increasingly sophisticated ways to make market observations, predictions and evaluations. Big Data can help companies make decisions ranging from which candidates to hire to which consumers should receive a special promotional offer. As a powerful tool for social good, Big Data can bring new opportunities for advancement to underserved populations, increase productivity and make markets more efficient.

But if it’s not handled with care, Big Data has the potential to turn into a big problem. Increasingly, regulators like the Federal Trade Commission (FTC) are cautioning that the use of Big Data might perpetuate and even amplify societal biases by screening out certain groups from opportunities for employment, credit or other forms of advancement. To achieve the full potential of Big Data, and mitigate the risks, it is important to address the potential for “disparate impact.”

Disparate impact is a well-established legal theory under which companies can be held liable for discrimination for what might seem like neutral business practices, such as methods of screening candidates or consumers. If these practices have a disproportionate adverse impact on individuals based on race, age, gender or other protected characteristics, a company may find itself liable for unlawful discrimination even if it had no idea that its practices were discriminatory. In cases involving disparate impact, plaintiffs do not have to show that a defendant company intended to discriminate—just that its policies or actions had the discriminatory effect of excluding protected classes of people from key opportunities.

As the era of Big Data progresses, companies could expose themselves to discrimination claims if they are not on high alert for Big Data’s potential pitfalls. More than ever, now is the time for companies to adopt a more rigorous and thoughtful approach to data.

Consider a simple hypothetical: Based on internal research showing that employees who live closer to work stay at the company longer, a company formulates a policy to screen potential employees by their zip code. If the effect of the policy disproportionately excludes classes of people based on, say, their race—and if there is not another means to achieve the same goal with a smaller disparate impact—that policy might trigger claims of discrimination.
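The hypothetical above can be made concrete. U.S. enforcement agencies use the “four-fifths rule” (from the Uniform Guidelines on Employee Selection Procedures, 29 C.F.R. § 1607.4(D)) as a rough screen: if a protected group’s selection rate is less than 80% of the highest group’s rate, the practice may be flagged for adverse impact. The sketch below illustrates that arithmetic with hypothetical numbers; it is a simplified illustration, not legal advice, and real analyses involve statistical-significance testing as well.

```python
# A minimal illustration of the "four-fifths rule" screen for
# adverse impact (29 C.F.R. 1607.4(D)). All numbers are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who pass the screen."""
    return selected / applicants

def four_fifths_check(rates):
    """Return each group whose selection rate falls below 80% of the
    highest group's rate, mapped to its ratio against that top rate."""
    top = max(rates.values())
    return {group: rate / top
            for group, rate in rates.items()
            if rate / top < 0.8}

# Hypothetical outcome of a zip-code-based screen, by group:
rates = {
    "group_a": selection_rate(50, 100),  # 50% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}

# group_b's rate is only 60% of group_a's rate, below the 80%
# threshold, so the screen would be flagged for possible adverse impact.
flagged = four_fifths_check(rates)
print(flagged)
```

Note that passing the four-fifths screen does not immunize a practice, and failing it does not by itself establish liability; it is simply the kind of routine check that companies can run on their own data, which, as discussed below, regulators say Big Data itself makes easier.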

Making matters more complex, companies have to be increasingly aware of the implications of using data they buy from third parties. A company that buys data to verify the creditworthiness of consumers, for example, might be held liable if it uses the data in a way that has a disparate impact on protected classes of people.

Expanding Uses of Disparate Impact

For decades, disparate-impact theories have been used to challenge policies that excluded classes of people in high-stakes areas such as employment and credit. The Supreme Court embraced the theory for the first time in a 1971 employment case called Griggs v. Duke Power Co., which challenged the company’s requirement that workers pass intelligence tests and have high school diplomas. The court found that the requirement violated Title VII of the Civil Rights Act of 1964 because it effectively excluded African-Americans and there was not a genuine business need for it. In addition, courts have allowed the disparate-impact theory in cases brought under the Americans with Disabilities Act and the Age Discrimination in Employment Act.

The theory is actively litigated today and has been expanding into new areas. Last year, for example, the Supreme Court held that claims using the disparate-impact theory can be brought under the Fair Housing Act.

In recent years, the FTC has brought several actions under the disparate-impact theory to address inequities in the consumer-credit markets. In 2008, for example, the agency challenged the policies of a home-mortgage lender, Gateway Funding Diversified Mortgage Services, which gave its loan officers autonomy to charge applicants discretionary overages. The policy, according to the FTC, had a disparate impact on African-American and Hispanic applicants, who were charged higher overages than whites, in violation of the Federal Trade Commission Act and the Equal Credit Opportunity Act.

The Good and Bad Impact of Big Data

As the amount of data about individuals continues to increase exponentially, and companies continue to find new ways to use that data, regulators suggest that more claims of disparate impact could arise. In a report issued in January, the FTC expressed concerns about how data is collected and used. Specifically, it warned companies to consider the representativeness of their data and the hidden biases in their data sets and algorithms.

Similarly, the White House has also shown concern about Big Data’s use. In a report issued last year on Big Data and its impact on differential pricing—the practice of selling the same product to different customers at different prices—President Barack Obama’s Council of Economic Advisers warned: “Big Data could lead to disparate impacts by providing sellers with more variables to choose from, some of which will be correlated with membership in a protected class.”

Meanwhile, the European Union’s Article 29 Data Protection Working Party has cautioned that Big Data practices raise important social, legal and ethical questions related to the protection of individual rights.

To be sure, government officials also acknowledge the benefits that Big Data can bring. The FTC in its report noted that companies have used data to bring more credit opportunities to low-income people, to make workforces more diverse and provide specialized health care to underserved communities.

And in its report, the Council of Economic Advisers acknowledged that Big Data “provides new tools for detecting problems, both before and perhaps after a discriminatory algorithm is used on real consumers.”

Indeed, in the FTC’s action brought against the mortgage lending company Gateway Funding Diversified Mortgage Services, the agency said the company had failed to “review, monitor, examine or analyze the loan prices, including overages, charged to African-American and Hispanic applicants compared to non-Hispanic white applicants.” In other words, Big Data could have helped the company spot the problem.

Policy Balancing Act

The policy challenge of Big Data, as many see it, is to root out discriminatory effects without discouraging companies from innovating and finding new and better ways to provide services and make smarter decisions about their business.

Regulators will have to decide which Big Data practices they consider to be harmful. There will inevitably be some gray areas. In its report, the FTC suggested advertising by lenders could be one example. It noted that a credit offer targeted at a specific community that is open to all will not likely trigger violations of the law. But it also observed that advertising campaigns can affect lending patterns, and the Department of Justice in the past has cited a creditor’s advertising choices as evidence of discrimination. As a result, the FTC advised lenders to “proceed with caution.”

As the era of Big Data gets under way, it’s not bad advice for all companies.

*    *    *

This post originally appeared as an op-ed piece in MarketWatch.

For more on potential legal issues raised by Big Data usage, please see our Socially Aware post, Big Data, Big Challenges: FTC Report Warns of Potential Discriminatory Effects of Big Data.



A few months ago, we noted that a Yelp employee’s online “negative review” of her employer might be protected activity under the National Labor Relations Act (NLRA), given that the National Labor Relations Board (NLRB) has become increasingly aggressive in protecting an employee’s right to discuss working conditions in a public forum, even when that discussion involves obscenities or disparaging the employer. This trend has prompted us to report previously on the death of courtesy and civility under the NLRA.

Now the NLRB has confirmed that it is not only courtesy and civility that have passed away—a “positive work environment” has perished with them.

A recent NLRB decision found that T-Mobile’s employee handbook violated the NLRA by requiring employees “to maintain a positive work environment by communicating in a manner that is conducive to effective working relationships with internal and external customers, clients, co-workers, and management.”

According to the NLRB, employees could reasonably construe such a rule “to restrict potentially controversial or contentious communications,” including communications about labor disputes and working conditions that are protected under the NLRA. The NLRB concluded that employees rightly feared their employer would consider such communications to be inconsistent with a “positive work environment.” Similarly, the NLRB struck down T-Mobile’s rules against employees “arguing” and making “detrimental” comments about the company.

The main sticking point appears to be requiring employees to be “positive” towards co-workers and management. Earlier NLRB cases have indicated that requiring employees to be courteous only towards customers may not set off as many NLRB alarm bells. Nonetheless, employers should tread carefully—and try not to be too cheerful. Encouraging a positive attitude among employees could have negative results.

*      *      *

For more on NLRA-related considerations for employers, please see the following Socially Aware posts:

A Negative Review May Be Protected Activity Under U.S. Employment Law

The Second Circuit Tackles Employee Rights, Obscenities & Social Media Use

The Death of Courtesy and Civility Under the National Labor Relations Act

Employee Social Media Use and the NLRA

The latest issue of our Socially Aware newsletter is now available here.

In this issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we discuss what a company can do to help protect the likes, followers, views, tweets and shares that constitute its social media “currency”; we review a federal district court opinion refusing to enforce an arbitration clause included in online terms and conditions referenced in a “wet signature” contract; we highlight the potential legal risks associated with terminating an employee for complaining about her salary on social media; we explore the need for standardization and interoperability in the Internet of Things world; we examine the proposed EU-U.S. Privacy Shield’s attempt to satisfy consumers’ privacy concerns, the European Court of Justice’s legal requirements, and companies’ practical considerations; and we take a look at the European Commission’s efforts to harmonize the digital sale of goods and content throughout Europe.

All this—plus an infographic illustrating the growing popularity and implications of ad blocking software.

Read our newsletter.