
Socially Aware Blog

The Law and Business of Social Media

Status Updates

Posted in Status Updates

Home(page) renovation. In an effort to encourage return visits from the 150 million Internet users who visit Twitter every month without signing in, the social media giant has revamped its home page. Now, instead of just “a background photo, a few lines of text, and a prompt to sign up or log in,” Twitter’s home page features boxes with the names of several of the platform’s most popular content topics, including “Actors & Actresses,” “Cute Animals” and “General News Sources.” A click on one of the boxes will take you to a timeline of tweets from some of the most popular commentators who tweet on that topic. Whether the new home page will be enough to help Twitter expand its active-user base remains to be seen. LexBlog’s Kevin O’Keefe thinks that, to get more attorneys to join, the platform will need to go a few steps further by breaking down its content into niche areas of law.

Teen traffic. A new Pew Research study shows that Facebook is still the most popular social media platform among the members of an age group that many companies consider a crucial target market: 13- to 17-year-olds. Of the 1,060 teens surveyed, 71% reported using Facebook. Snapchat, the vanishing messaging app that seems to make news almost every day (for some reason or other), is up there (41%), too, right behind the photo sharing app Instagram (52%). Interestingly, teens from households with incomes greater than $75,000 are much more likely than teens whose families earn less than $30,000 to call Snapchat their top social media platform (14% compared to 7%). And teenage girls are more likely than teenage boys to use what Pew classified as “visually-oriented social media platforms”: Instagram (61% of girls vs. 44% of boys); Snapchat (51% of girls vs. 31% of boys); online pinboards like Pinterest (33% of girls vs. 11% of boys); and Tumblr (23% of girls vs. 5% of boys).

Rules for the nude and rude. Instagram has amended its Community Guidelines, which formerly simply asked users to be polite and respectful, to specify that the photo sharing platform will remove “content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages.” Instagram also amended its original guidelines to amplify its blanket prohibition on nudity. The new guidelines specify that Instagram will allow nudity in photos of paintings and sculptures, and “photos of post-mastectomy scarring and women actively breastfeeding,” but will not allow “close-ups of fully nude buttocks.” And, in the interest of discouraging users from posting images that they have “copied or collected from the Internet,” something the platform’s guidelines always proscribed, the new guidelines contain a link to a page that informs users about their intellectual property rights. But TechCrunch notes that because Instagram, unlike YouTube, still “doesn’t offer any copyright fingerprinting system to automatically remove infringing media,” there remains a gap between the photo sharing platform’s intellectual property policy and its enforcement.

Status Updates

Posted in Status Updates

Searching social. At long last, tweets will appear in Google search results as soon as they’re sent, as the result of a deal that the two Internet giants recently struck. As part of its efforts to increase user growth and attract more eyeballs to its social media platform, Twitter is finally giving Google immediate access to the content produced by its 284 million users. Previously, Google had to crawl through Twitter’s data, allowing Google to include in search results only a relatively small percentage of tweets. The announcement of the deal, which Mashable suggests should cause “trigger-happy users” to “think twice now before they tweet,” was followed by a 1.3% rise in Twitter’s share price to $41.26. And this week Twitter shares rose as high as $52 each amid speculation that Google or some other company is trying to buy the social media giant.

Don’t worry, be app-y. Feeling blue? There’s an app for that. At least there will be, come this fall. A man named Robert Morris developed a prototype for the world’s first social network for people suffering from depression while he was a psychology PhD candidate at MIT, where he felt like everyone—except him—was a “brilliant coder.” Crowdsourcing answers to his computer programming conundrums on Stack Overflow inspired Morris to create a similar online resource for people struggling with mental health issues. On Koko, the iPhone app that Morris is developing for release in the autumn, users will be able to post their negative feelings (e.g., depression and anxiety) and the problems to which they attribute those feelings (e.g., a job loss). The poster’s Koko social network will then presumably respond to the post by pointing out the bright side of the situation or errors in the poster’s thinking. Fast Company reports that the Koko community will be “coached at every turn to pin their answers down so that they fall within the guidelines of cognitive therapy techniques that are proven to work,” but it remains to be seen how such coaching will work in practice.

Meer-terial girl. In what The Guardian has dubbed “a sign of the music industry’s keen interest in the popularity of social apps,” Madonna has decided to premiere her Ghosttown video on the fledgling video broadcasting app Meerkat, which currently has only around 1,000 subscribers. Launched in February, Meerkat is an iPhone app that makes livestreaming easier than ever by allowing users to link their live videos to their Twitter accounts, thereby giving a Twitter user the ability to live stream a video he’s shooting on his phone. (Twitter’s own Periscope app performs essentially the same function.) Since the pop star likely won’t be live streaming the video, exactly how she’ll use Meerkat for its premiere is unclear. Meerkat is the fourth social media platform that Madonna has used to promote material from her current album, Rebel Heart; she’s already run campaigns for it on Grindr, Instagram, and Snapchat.

Status Updates

Posted in Status Updates

Bad ads. New research shows that 5% of the people visiting Google-related websites are using computers infected with programs that insert illegitimate ads onto web pages; as a result, these web surfers see ads that site operators haven’t been paid to run and that may even be promoting products or services that are objectionable to such site operators. Known as ad injectors, these programs are often bundled with other software that Internet users download for free. Ad injectors inconvenience Internet users in several ways: They sometimes place ads over a website’s text or otherwise make websites unpleasant to read; they can put web surfers’ cybersecurity at risk; and they can negatively affect a computer’s performance. Moreover, ad injectors also deprive website operators of ad revenue, and undermine the ability of advertisers to control where their ads are running (as a result of the complicated system of intermediaries in the Internet ad sales business, advertisers often don’t know that their ads are being injected). For these reasons, Google has disabled 192 Chrome extensions that resulted in illegitimate ad launches affecting 14 million users. According to TechCrunch, however, “unless Google and other browser and advertising vendors find a technical solution to this problem, chances are it’ll never fully go away.”

Hot button. Well, the future is officially here. Amazon has introduced the Dash Button, a wireless, WiFi-enabled, doorbell-like button that Amazon customers can depress to order products ranging from smartwater to Bounty Paper Towels. Each Dash Button bears a single brand’s product logo. Consumers are meant to stick the buttons—they have adhesive on the back—near the products themselves (in the pantry or, in the case of laundry detergent, for example, the laundry room) so that they can order more product the second they realize their supply is low. The button makes purchasing even easier than the one-click-to-buy feature on Amazon’s app—consumers don’t have to use their devices—and is part of Amazon’s effort to become a player in the Internet of Things market, a world in which network-connected devices anticipate consumers’ needs. Critics are split on whether the Dash Button is brilliant or a bad (if inevitable) idea. Here at Socially Aware, we’re waiting for a version that allows for pizza delivery.

Bon app-e-tip. New payment platforms are making the act of tipping more awkward and more expensive. Popping up everywhere from taxis to hair salons, these payment platforms often make it easiest for the customer to choose from among a few pre-selected percentages, the lowest of which is usually higher than the gratuity the average tipper would otherwise add to the price of the item or service. (In New York City taxicabs, for example, a “large portion” of riders tip 20%, 25%, or 30% because those are the percentages suggested on the automatic tipping buttons on the payment platform displayed on a screen in the taxis’ back seats.) These platforms also make it difficult to skip the tip altogether—unless you’re not easily embarrassed. Ignoring the tip jar on the counter at your local coffee house was one thing, but clicking “no tip” on the screen that the cashier turns toward you is quite another. Now, in an effort to appease the consumers who are fed up with these new tipping trends, some higher-end restaurants have banned tipping, and start-ups are introducing apps that automatically calculate the bill and a preset tip. These apps, which include Reserve and Cover, allow diners to “simply walk out at the end of the meal” with their card being charged through the app, according to the New York Times. Only time will tell whether the seamlessness of this technology will make automatically paying an extra 20% more palatable for restaurant patrons.

Big Data and Human Resources—Letting the Computer Decide?

Posted in Employment Law

Employees are a company’s greatest asset, but if the company gets hiring decisions wrong, employees could also be the company’s greatest expense. Accordingly, recruiting the right people and retaining and promoting the best, while identifying and addressing under-achievers, is critical. Many organizations spend a lot of time and effort on human resources issues but do not have sufficiently detailed data to help them fully understand their employees and the challenges that can affect workforce planning, development and productivity.

Big data analytics can help to address these challenges, which explains why more and more HR departments are turning to them for a variety of purposes, for example, to: (i) identify potential recruits; (ii) measure costs per hire and return on investment; (iii) measure employee productivity; (iv) measure the impact of HR programs on performance; (v) identify (and predict) attrition; and (vi) identify potential leaders. Supporters also argue that big data analytics can help to provide evidence to debunk commonly held assumptions about employees that are wrong and based on biases.
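To make item (ii) concrete, the arithmetic behind a cost-per-hire metric is straightforward. The sketch below is purely illustrative—the function name, cost categories and figures are hypothetical, not drawn from any particular HR system—but it shows the standard formula: total recruiting costs divided by number of hires.

```python
# Hypothetical sketch of a cost-per-hire calculation.
# The cost categories and figures are illustrative only.

def cost_per_hire(external_costs, internal_costs, hires):
    """Standard cost-per-hire formula: total recruiting costs / hires.

    external_costs: agency fees, job-board advertising, background checks.
    internal_costs: recruiter salaries, referral bonuses, interview time.
    """
    if hires == 0:
        raise ValueError("no hires in the period")
    return (external_costs + internal_costs) / hires

# Example: $90,000 external spend, $60,000 internal spend, 25 hires.
print(cost_per_hire(90_000, 60_000, 25))  # -> 6000.0
```

Tracking this figure per recruiting channel (rather than company-wide) is what lets analytics inform the changes in hiring strategy discussed below.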

Accordingly, the use of analytics promises many potential benefits for organizations, not only in terms of making improvements in talent identification and recruitment, but also in terms of workforce management. However, the use of data analytics in the HR sphere also raises some specific risks and challenges that companies need to consider, including increased exposure to discrimination claims, breaches of privacy law and reputational/brand damage. In this article, we will discuss some of the key factors companies need to bear in mind.

What Is Big Data?

Organizations have always accumulated information but, in this digital age, the amount of data being generated and retained is growing exponentially. IBM has calculated that 90 per cent of the digital data that exists today was created in the last two years. In addition, historically, organizations may not have been able to draw value from the data that they held, particularly where such data were unstructured (and Gartner Inc. estimates that roughly 80 per cent of all corporate data is unstructured). However, new technologies now enable the analysis of large, complex and rapidly changing data sets comprised of structured, semi-structured or unstructured data. In short, ‘‘big data’’ is just data. It’s simply that we have more of it and we can do more with it.

I. Recruitment

Organizations are using big data analytics, for example, to identify candidates with the right skills and experience. New talent management systems can help organizations quickly search and analyze huge volumes of applicant data, e.g., using concepts, not just key words. Organizations are also using analytics to analyze hiring data to help make changes in hiring strategy and recruitment collateral to attract more candidates and minimize attrition. There are two key stages that need to be considered in managing legal compliance with respect to these activities. First, there is the collection of data, and second, there is the analysis of the data and the formulation of resulting decisions.

          A. Collecting and Processing Personal Data for Big Data Analytics

In terms of the collection of data, companies are increasingly mining candidate data from online sources, including job sites and social media sites, for the purpose of talent identification and recruitment. Privacy issues loom large because information collected about a proposed candidate will be considered personal data and may even contain sensitive personal information (e.g., health data, ethnic origin and sexual orientation).

In Europe, where any recruitment activities involve the processing of potential recruits’ personal data (and big data analysis of personal data will constitute processing), companies must give notice to potential recruits of the purposes for which data are intended to be processed and any other information that is necessary to ensure that processing is fair (e.g., the names of data recipients).  Companies also must have a legal basis for processing the personal data (e.g., consent). If a third party is engaged to carry out any processing, the potential employer will need to put in place with the third party a written contract with appropriate data protection provisions. There are some regional variations across Europe of which companies need to be aware. For example, in some countries (e.g., Germany), even with an individual’s consent, a potential employer is restricted in the background checks that it can carry out.  As a general rule, all background checks should be limited to the information strictly necessary to determine whether an applicant is suitable for a particular position, even if the applicant has consented. Additionally, through an online background check, information may only be collected if it is publicly available and the applicant does not have an apparent and justified interest in the exclusion of the information. Local employment laws may impose additional restraints. Accordingly, a company’s processes may need to be modified from country to country.

Concerns over automated decision-making are sometimes raised and, certainly, automated decision processing is particularly problematic under European Union data protection law.  Accordingly, employers that use big data analytics in recruitment need to ensure that there is an element of human judgment involved in decision-making. It should not be (and typically is not) just a question of ‘‘computer says yes’’ but rather an informed decision based on the available data and the interpretation of the data.

In the U.S., if a company purchases background reports about candidates, the company will need to be mindful of the Fair Credit Reporting Act and state consumer reporting laws. These laws may come into play any time a company procures information about a candidate or employee from a third party that is in the business of supplying such information on a commercial basis, even if that information may be publicly available. Federal and state laws also limit the types of information that an employer may lawfully request or consider in making employment-related decisions, even if the information has been obtained lawfully.

Across Asia, rules regarding the use of personal data in terms of recruitment vary.

  • In China, individuals are subject to a general right to privacy, and employers have certain obligations of confidentiality. In general, employers are viewed as having a fairly broad ability to conduct background checks, although illegal or intrusive means may be viewed as a breach of privacy. However, third-party sources of information should be used with caution as few legitimate channels of information are available. The use of personal data from illegal channels can attract civil and sometimes criminal liability, and there have been a number of high-profile cases in recent months involving the illegal provision or acquisition of personal data.
  • In Hong Kong, under the Personal Data (Privacy) Ordinance, personal information must be collected by lawful and fair means, and, if personal information will be used for a purpose other than that for which the data were originally posted (or a directly related purpose), consent will be required. It may be acceptable to use without specific consent personal information that is published on a job-seeking or professional references social media site such as LinkedIn. However, personal information published on a personal social media site (such as a personal Facebook page) will generally require express consent.
  • In Japan, personal information about applicants must be collected by appropriate and fair means. As a rule, personal information about applicants must be collected directly from applicants or from third parties with the applicant’s consent. Collection of sensitive personal information without express consent is generally prohibited. There is one exception: an employer may collect such sensitive information when such information is definitely necessary to achieve the employer’s business, the employer has notified the applicant of the purposes of collection of such information and the employer collects such information directly from an applicant.

          B. Avoiding Discriminatory Impact

Of course, as with all talent identification and recruitment activities, organizations also need to ensure that they do not act in a manner that could be considered discriminatory. In Europe, Directive 2000/78/EC establishes a general framework for equal treatment in employment and occupation, forbidding discrimination based on religion, belief, disability, age and sexual orientation.  Separate directives also forbid discrimination on the grounds of sex and race. The principle of equal treatment means that there must be no direct or indirect discrimination on any of these grounds.

Likewise, in the U.S., laws such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act and a variety of other federal and state laws prohibit discrimination against applicants and employees based on protected characteristics such as race, age, sex, national origin, religion and disability. Employers may face liability under these laws if they unlawfully consider protected characteristics in their hiring or employment decisions. Employers may also face liability if they rely on screening or hiring practices that appear neutral on the surface but have a disparate impact on workers in protected classifications, such as disproportionately screening out older candidates or candidates with disabilities. This liability may arise even if the employer had no intent to discriminate or no knowledge of the discriminatory impact.

Organizations are generally aware of their obligations in this area in the context of traditional recruitment activities. However, they now need to appreciate their application in this new age of big data analytics. When organizations are identifying key words and concepts for a data collection exercise, they need to apply the same rigor that they would use when creating job advertisements, i.e., avoid any terms that could be considered directly or indirectly discriminatory (e.g., ‘‘recent graduate,’’ ‘‘highly experienced,’’ ‘‘energetic’’). Organizations also need to be careful not to discriminate in terms of where they collect data from. Otherwise it could be a case of data that are ‘‘discriminatory in, discriminatory out.’’

In terms of an organization’s analysis of the data collected, again it will need to ensure that its analysis and the decisions that it makes as a result of such analysis are not deemed discriminatory—in particular decisions that are based on predictive decision-making about candidates. Of course, it is very important that organizations do not blindly accept data without challenge. Given the size of the potential data pool, conclusions may well be based on correlations, rather than being determinative. Proper interpretation and assessment of the results of a big data exercise is essential. Organizations should be wary of any predictive decision-making that gives results that appear skewed in favor of certain types of candidates. If, for example, a big data analytics exercise produces a short list of potential candidates who all share the same race, gender or other characteristic, that may suggest that there has been a discriminatory input at some point in the big data process. Although it may be difficult for a candidate to establish that a big data analytical exercise has been discriminatory, particularly given the potentially complicated algorithmic calculations involved and lack of transparency about those algorithms, organizations need to be mindful of the risks. In some cases, if a practice is determined to have a discriminatory impact, the burden may shift back to the employer to defend its methodology. Employers may also be required to disclose detailed information about their big data methodologies in the event of employment litigation or a government investigation. As a result, employers will want to be prepared to explain and, if necessary, justify their big data analytics methods.
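One common way to screen analytics output for the kind of skew described above is the “four-fifths rule” used in U.S. adverse-impact analysis: if any group’s selection rate falls below 80% of the highest group’s rate, that is generally treated as evidence of adverse impact. The sketch below is a minimal, hedged illustration of that check—group labels and counts are hypothetical, and a real analysis would involve statistical significance testing and legal review, not a single ratio.

```python
# Hedged sketch of an adverse-impact screen based on the four-fifths rule.
# Group names and counts are hypothetical, for illustration only.

def selection_rates(applicants, selected):
    """Selection rate per group: number selected / number of applicants."""
    return {g: selected[g] / applicants[g] for g in applicants}

def adverse_impact(applicants, selected, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate, returning each flagged group's impact ratio."""
    rates = selection_rates(applicants, selected)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Example: group_a's rate is 60/200 = 0.30; group_b's is 25/150 ≈ 0.167.
# group_b's impact ratio (≈0.56) is below 0.8, so it gets flagged.
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 25}
print(adverse_impact(applicants, selected))
```

A screen like this flags a result for human review; it does not, on its own, establish or rule out unlawful discrimination.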

          C. Third-Party Rights

However, compliance with privacy and employment law is not the only concern: mining data from third-party sites, such as online job sites, could be a breach of those sites’ terms of use and, potentially, an infringement of intellectual property rights. Web scraping may also be considered a breach of applicable local cybersecurity laws that prohibit unauthorized access to computer systems (e.g., the U.K. Computer Misuse Act 1990 and the U.S. Computer Fraud and Abuse Act). Accordingly, organizations need to ensure that they have adequately addressed all potential legal risks prior to embarking on any data collection activities.

II. Workforce Management

The second area where analytics are increasingly being harnessed by HR departments involves the monitoring and analysis of data relating to employees. Again, this use of analytics raises some particular issues that companies need to be aware of.

Many organizations already use analytics to obtain insights into their customers and target customers. Organizations are now seeking to obtain the same insights into their employees, which they can use to improve organizational efficiencies and drive productivity. This can help organizations to objectively evaluate their current people management practices. Of course, if HR is going to become a more data-driven department, it will need to identify what data it holds on its employees and whether such data simply need to be joined up or more data need to be collected.

The collection of more data is very likely to involve increased monitoring of employees. The applicable rules relating to such monitoring vary across the world and, therefore, if a company is rolling out an HR analytics project, it will need to address monitoring and data collection on a country-by-country basis.

In Europe, employees have certain protections under the European Convention on Human Rights as incorporated into national law (e.g., the right to respect for private life (Article 8), freedom of speech (Article 10) and freedom of association (Article 11)). Employees also have protections under applicable data protection law. However, there are regional variations that employers need to address. For example, in certain countries, privacy regulators have issued specific guidance relating to the extent to which employers can monitor their staff (e.g., see Part 3 of the U.K. Information Commissioner’s Office’s ‘‘Employment Practices Code’’). In countries such as Germany, works council rules apply to the monitoring of staff. Areas of particular concern include managing employees’ legitimate expectations of monitoring, having appropriate notices/policies in place with employees and protecting employees’ rights against discrimination for certain off-duty activities, e.g., religious activities and trade union and political activities.

In the U.S., restrictions on monitoring arise under federal laws including the Electronic Communications Privacy Act, Stored Communication Act and Computer Fraud and Abuse Act, and state laws that restrict certain types of monitoring activities, such as seeking to gain access to personal social media of applicants or employees.

In Asia, there are similar restrictions on monitoring.

  • In China, while employers are not restricted from monitoring publicly available information about employees, monitoring employees’ computer use in the workplace may be more susceptible to legal challenge. However, an employee’s right to privacy would be balanced against an employer’s statutory duties.
  • In Hong Kong, applicable law requires that monitoring must serve a legitimate purpose that relates to the function and activities of the employer. Monitoring measures must be necessary to meet that purpose and must be confined to an employee’s work. Personal data collected must be kept to the minimum necessary to protect the interests of the employer or to effectively address those risks inherent in the lawful activities of the employer. Monitoring must be carried out by the least intrusive means and with the least harm to the privacy interests of the employees. Employers are also required to document monitoring in a formal privacy policy setting out the employer’s purpose, and employers must notify employees of the policy before commencing monitoring.
  • In Japan, applicable law requires that if monitoring is implemented, an employer should: (i) establish in advance the in-house rules that stipulate the implementation of monitoring; (ii) specify in advance the purpose of the monitoring and notify workers of such purpose plus the relevant in-house regulations; (iii) establish the responsible official for the implementation of monitoring and his or her authority; and (iv) check that monitoring is properly implemented.

When carrying out big data analysis, employers will need to ensure that they avoid automated decision making and otherwise process such employee data fairly and in accordance with applicable privacy and employment laws. Again, inputs and algorithms need to be carefully set up to ensure that they do not discriminate, and organizations need to avoid any decision making (predictive or otherwise) that could be considered discriminatory.

Of course, big data analytics are not a panacea. Organizations are complex, and human judgment is always going to be needed to interpret the data in context, taking into account relevant factors such as local market conditions. Complex algorithms may help to identify an organization’s highest performing employees who may be likely to leave the organization in the next 12 months, but HR departments will still need to tread carefully in deciding how to respond (or not) to such data.

Also, it is clear that better, more informed data about your workforce can help drive change in the business, but only if the business is actually prepared to embrace that change. Organizations have to be open to accept what the data are telling them, be prepared to change their systems and processes to take account of the data science and acknowledge that a period of adoption is likely to be needed. In addition, companies cannot underestimate the expense and effort of any training programs that may be required to roll out an operational change that may be inconsistent with traditional thinking.

Of course, it is not only a question of legal compliance. In this age of international business, the war for talent in certain sectors has never been more intense, and companies want to attract and retain the best people. Accordingly, companies need to strike a balance between monitoring staff for the purpose of people management analytics and the organization being seen as a ‘‘creepy’’ employer, where employee movements and communications are extensively monitored, Big Brother style. From the employee’s perspective, much may depend on the nature and extent of data being collected and what the employer plans to do with the data. In order to foster employee engagement and trust in analytics, organizations also need to explain to their workforce how those analytics will directly benefit the employees, for example, in terms of better engagement, transparency and empowerment.


Big data analytics may offer HR departments the ability to make better, more objective, data-driven decisions about recruitment and employees. However, the value of a big data project will depend very much on the quality of the inputs and project parameters and the careful interpretation of the results. HR departments will need to have appropriate analytics expertise in-house or hire appropriate service providers to help them design the appropriate big data program and interpret the resulting data. Of course, if a company uses a third-party provider for the provision of HR big data technology and analytics services, there will be other legal issues it will need to consider, in particular with respect to commercial arrangements (e.g., many HR analytics providers offer analytics on the basis of cloud-based Software as a Service) and intellectual property rights and data ownership.


Status Updates

Posted in Status Updates

Dueling for ad dollars. U.S. companies will spend $52.8 billion on digital advertising this year, 2.5% more than they spent on it in 2014. And, while television advertising is still king—corporate marketers will invest almost $79 billion in TV commercials in 2015—researchers predict that spending on digital ads will outpace spending on TV commercials by 2019. Digital ads, after all, can be an effective way to reach Millennial target audiences. They’re also significantly less expensive and easier to track than TV commercials. Now, in an effort to stem advertisers’ exodus to digital platforms, the CBS television network has launched a campaign designed to help maximize the effectiveness of television ad campaigns and prove the ads’ return on investment. As part of the initiative, dubbed the Campaign Performance Audit, CBS will help advertisers to create their messages using cutting-edge editing equipment. CBS will also help advertisers to determine the most appropriate shows on which to air the commercials, and “then test [the ads] in front of live audiences using tools including biometric feedback and neurotesting.” The network’s executives have admitted to sinking a lot of money into the campaign, but they won’t specify exactly how much.

Platforms against porn. For a while now we’ve been tracking the legal landscape of revenge porn—the public dissemination of nude photographs without the subject’s consent, usually by a jilted paramour seeking retribution. More than a dozen states now have laws criminalizing the posting of revenge porn, the victims of which suffer untold harm to their careers, reputations, personal lives, and psyches. These laws are no doubt helpful in deterring revenge porn postings, especially as they are effectively used to convict perpetrators. Once an image is posted online, however, stopping its dissemination can be extremely difficult. Now, there’s a new—potentially more effective—obstacle to the proliferation of revenge porn posts: social media platforms’ anti-revenge porn policies. Three of the most popular social media platforms—Facebook, Twitter and Reddit—have recently amended their terms of use to state that they will remove digital images of nudes that have been posted without the subjects’ permission. “Twitter executives have said the company will lock the accounts of users who post content that violates their user policy,” Mashable reports. These policies are a critical weapon in the war against revenge porn because they can be used to remove revenge porn photos before they have been widely disseminated.

Stars of the (really) small screen. Have you caught any of the first four episodes of “Literally Can’t Even,” the first-ever scripted series created especially for the disappearing messaging app Snapchat? If you haven’t, we have some bad news for you: Like everything else on that incredibly hip platform, the episodes vanish. Each four-minute installment of the show, which has been airing on Saturday nights since late January, is viewable for just 24 hours. According to the New York Times, the show’s writers and stars, Sasha Spielberg and Emily Goldwyn, say that “they like the social media platform because it is very of-their-generation” and also because it is far removed from the work of their famous fathers, the film director Steven Spielberg and the producer John Goldwyn. Ms. Goldwyn also told the Times, “My dad always says it’s great to be at the forefront of change, but to spend so much time working on something and to have it disappear after a day, my parents were very shocked.”

Status Updates

Posted in Status Updates

Out with the inbox? The overwhelming popularity of workplace-specific platforms that facilitate coworker communication—commonly referred to as “enterprise social media”—is undeniable. But are these platforms poised to someday supplant business email accounts altogether? New York Times technology columnist Farhad Manjoo thinks so. The one big advantage that enterprise social media platforms like Slack have over regular email is their potential for workplace transparency; as Mr. Manjoo notes, by making employees’ communications archivable and visible to the entire company, they facilitate the flow of information and make electronic exchanges a resource for employees looking for background information on a project, or what Slack’s co-founder and chief executive, Stewart Butterfield, refers to as “soft knowledge”: how the employees at the company approach group projects, for example. While privacy advocates will undoubtedly raise concerns at the prospect of employees having to communicate in a fish bowl, many of the workers at companies that use enterprise social media platforms appreciate that such platforms inhibit the hoarding of information, thereby facilitating collaboration and resulting in less hierarchical workplaces.

Tweet carefully. Financial services firms operating in the United Kingdom need to take care not to run afoul of that country’s regulations when they use social media. According to new guidance issued by the UK’s Financial Conduct Authority (FCA), re-tweeting a customer’s comment can be enough to trigger the rules that apply to financial promotions if the tweet “comments on or endorses the benefits of a regulated financial product or service.” Among the many other guidelines set forth by the FCA in its social media communications guidelines is the admonition that certain platforms’ restrictions—Twitter’s 140-character limit, for example—can make it especially difficult for financial services firms to ensure that their communications are compliant.

All is fair in love… and movie promotions? A controversial social media ad campaign generated as much attention as any up-and-coming rock band at this year’s SXSW festival. To promote a science-fiction film—Ex Machina—that debuted at the festival, the film’s producers set up a fake Tinder account for “Ava,” a character featured in the movie. Ava’s profile incorporated a photograph of Alicia Vikander, the Swedish actress who plays Ava in the film. Once a Tinder user gave Ava’s (Vikander’s) photo the right-swipe-of-approval on the popular dating app, the computer-generated Ava asked the user a series of questions that, only in hindsight, are appropriate for both a young woman quizzing a guy she just met on a dating app and a robot trying to figure out what it’s like to be human—the role Vikander’s character Ava plays in the movie. If Ava approved of a Tinder suitor’s answers, she offered up her Instagram account, @meetava, for his perusal. Upon visiting Ava’s Instagram account, the avatar’s wanna-be boyfriends—perhaps to their chagrin—found videos and pictures promoting Ex Machina. Is this the beginning of a new era in online advertising—computer-generated fake friends and love interests being used to pressure us into buying stuff that we didn’t think we needed? If so, let’s hope that advertisers keep the FTC’s Endorsement Guides in mind…

UK’s Financial Services Regulator: No Hashtags in Financial Promotions

Posted in FCA Regulations

Earlier this month the UK’s financial services regulator, the Financial Conduct Authority (FCA), issued its final guidance on financial promotions made via social media channels.

As we reported last year, the FCA issued long-awaited draft guidance in August 2014 on the use of social media in financial promotions by regulated financial institutions. Following the publication of the draft guidance, the FCA held a consultation exercise which closed on Nov. 6, 2014. In response to feedback from regulated firms and industry bodies, in the final guidance the FCA has clarified a few areas and amended portions of the text, as well as added more visual examples.

Very little has changed in the final guidance with respect to the FCA’s approach to regulating promotions in social media. The overarching principle for all communications with consumers is that they must be “fair, clear and not misleading” and the FCA’s view remains that its rules are, and should be, media neutral. It believes that to take any other approach would create a more complex and costly regime.

However, there is one notable amendment in the final guidance. In the draft guidance, the FCA had suggested using a hashtag #ad to help identify promotions. In the final guidance, the FCA has done an about-turn and stated that the use of hashtags is not an appropriate way to identify promotional content.


The recommendations detailed in the final guidance include the following.

  • Form of communication

Any form of communication made by a firm is capable of being a financial promotion – the key is whether it includes an invitation to engage in financial activity. All communications must be fair, clear and not misleading, even if the communication ends up in front of a non-intended recipient (e.g., due to a re-tweet).

  • In the course of business

Some communications will not include an invitation to engage in financial activity – for example, communications solely relating to the firm’s community work. Only financial promotions made “in the course of business” will be subject to the FCA guidance. The definition laid down in the guidance effectively requires a commercial interest on the part of the firm. The FCA provides a couple of examples to illustrate the issue.

Firstly, if a company is already operating, it will be acting in the course of business when seeking to generate additional capital. However, if the company has not yet been formed, and the proposed founders approach friends and family to obtain start-up capital, they will not generally be acting in the course of business. Secondly, where a personal social media account is used by someone associated with a firm, that firm and individual should take care to clearly distinguish personal communications from those that are, or are likely to be understood to be, made in the course of that business. During the consultation exercise, further guidance was requested as to the difference between personal and business communications. The FCA clarified that if an employee of a firm uses their own social media account to send communications that could be considered an inducement or invitation, then this may constitute a financial promotion and will therefore be subject to the same rules that apply to the firm. Accordingly, firms will need to ensure that their social media policies and training cover the risks of personal social media use.

  • Hashtags

All financial promotions made via digital media must be clearly identified as such. In the original draft guidance the FCA suggested using a hashtag #ad to help identify promotions. However, in the final guidance, in response to feedback, the FCA has reversed its stance and stated that hashtags are not an appropriate way to identify promotional content. This is based on a few factors.

Firstly, the FCA believes that most promotions on social media will be self-evident. For example, paid-for advertising on various social media platforms already indicates that a communication is promotional (e.g., on Twitter ‘promoted by’, on Facebook ‘SPONSORED’). Secondly, the nature of a hashtag means that if the consumer looks up that hashtag the consumer will be presented with a whole series of communications unrelated to the firm. The FCA believes that this could lead to consumer confusion.

The FCA has also explicitly stated that hashtags would be inappropriate for the inclusion of risk warnings (e.g., #capitalatrisk) or to highlight jurisdictional limitations (e.g., #UKinvestors). The FCA has suggested that signposting of a tweet will only be appropriate where the promotion is obscured or combined with other content (e.g., a celebrity endorsement or native advertisement).

It seems surprising that the FCA did not appreciate how hashtags worked until now. In addition, given that a consumer who regularly uses Twitter is likely to be familiar with how hashtags work, would a consumer really be confused by the use of #ad? Nevertheless, given the position taken in the guidance, if companies have been widely using hashtags in connection with financial promotions they will need to rethink their approach.

  • Re-tweets

The FCA has confirmed that when a communication is re-tweeted or shared, the responsibility lies with the person who sends the communication. Accordingly, if a consumer re-tweets a firm’s promotional communication and is not acting in the course of business, then it is only the original communication that will need to be compliant with the promotion rules. The firm would not be responsible for the re-tweet.

Where a firm re-tweets, shares, or likes a consumer’s communication, whether it amounts to a financial promotion will depend on the content of the tweet. For example, if the tweet praises the firm for good customer service, the FCA has confirmed that would not be a promotion because customer service is not a controlled activity. If, however, a customer endorses the benefits of a particular product, then re-tweeting, sharing or liking that tweet would constitute a promotion. Accordingly, firms will need to ensure that training for their social media operators deals with the potential risks of sharing positive customer comments.

  • Images

Risk warnings must be suitably prominent in social media promotions. If a risk warning is set out in too small a font size and/or lost in surrounding text, the promotion will not be compliant with the guidance. Of course, social media often poses particular challenges because of space and character limitations. The FCA has suggested that one solution is to insert images (such as infographics) into tweets, as long as the image itself is compliant. The FCA acknowledges that the functionality which allows a Twitter image to be permanently visible may be switched off so that the image appears simply as a link. Accordingly, any risk warning or other information required by the rules cannot appear solely in the image.

  • Signposting

It may be possible to signpost a product or service with a link to more comprehensive information, provided that the signpost remains compliant in itself. The FCA has rejected the suggestion that compliance should be assessed based on the combination of a tweet and the website to which it links. This form of ‘click-through’ approach was proposed by a number of respondents during the consultation period. The FCA is of the opinion that the tweet and the website are separate financial promotions and so each tweet needs to be compliant, even if the tweet has been created to point the consumer to the firm’s website.

  • Image advertising

Firms may be able to advertise through image advertising, which is less likely to cause compliance issues. An image advertisement (i.e., an advert that only includes the name of the firm, a logo or other image associated with the firm, contact point, and a reference to types of regulated activities provided by the firm or its fees or commissions) may be exempt from financial promotion rules, but will still need to be fair, clear and not misleading.

  • Likes

Being a follower of a regulated firm on Twitter or having “liked” its Facebook page does not constitute an “existing client relationship” or “express request” for a communication under applicable rules. Issuing a financial promotion to such an individual would therefore be considered unsolicited.

  • Systems

Firms need to put in place adequate systems for signing off digital media communications. Sign-off should be by a person of appropriate competence and seniority within the organisation.


The final guidance does not introduce any major surprises. By and large, it follows very closely existing guidance relating to financial promotions and includes some pretty clear-cut examples of compliant and non-compliant communications.

Some firms may have been holding back from becoming fully engaged on social media in anticipation of this final guidance. Such firms may be disappointed that the guidance is not more detailed and does not give them the regulatory certainty that they were hoping for, but that’s really not surprising – FCA guidance is rarely prescriptive.

In any case, firms can’t afford to wait any longer to take the plunge. Increasingly consumers want to engage via social media and with the rise of FinTech, we are seeing a whole host of new competitors moving into the financial services market, many of whom are potentially more agile and better equipped in terms of a digital strategy than the traditional finance brands. While organisations need to be careful to comply with the relevant laws and regulations, they also need to get on board with social media if they do not want to be left behind.

Social media may be a new method of communicating with customers, but compliance risks are not insurmountable. Firms need to exercise the same risk-balancing that they use with other types of media. It’s a case of putting in place appropriate guidance, policies and procedures to adequately address the risks, while not overly restricting the firm’s ability to be up-to-date in terms of its promotional campaigns.

The New Frontier in Interest Based Advertising: FTC Shifts Focus to Cross-Device Tracking

Posted in FTC, Internet of Things, Privacy

As consumers increasingly connect to the Internet using multiple devices—such as mobile phones, tablets, computers, TVs and wearable devices—advertising technology companies have rapidly developed capabilities to reach the same consumers across their various devices. Such “cross-device” tracking enables companies to target ads to the same consumer regardless of the platform, device, or application being used. Last week, the Federal Trade Commission (FTC) announced that it will host a workshop on November 16, 2015, to explore the privacy issues arising from such practices—signaling that interest based advertising (IBA) is still at the forefront of its agenda.

For a long time, advertisers and publishers have tracked consumers’ online activities using HTTP cookies stored in web browsers on desktop and laptop computers. In response to the FTC’s concerns over consumers’ visibility into and control over such tracking for IBA purposes, industry responded with widely adopted ways for publishers and advertisers to provide consumers with enhanced notice and cookie-based choice with respect to such tracking.
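For readers unfamiliar with the mechanics, the sketch below shows in broad strokes how cookie-based tracking recognises a returning browser: the server assigns a random identifier via a Set-Cookie header, and the browser echoes it back on later requests. It uses Python's standard http.cookies module; the cookie name "uid" and the one-year lifetime are illustrative assumptions, not any ad platform's actual practice.

```python
# Minimal sketch of cookie-based browser tracking, using Python's
# standard http.cookies module. The "uid" name and one-year lifetime
# are illustrative assumptions only.
from http.cookies import SimpleCookie
import uuid

def issue_tracking_cookie():
    """Server side: assign a persistent identifier to a new visitor."""
    cookie = SimpleCookie()
    cookie["uid"] = uuid.uuid4().hex               # random per-browser ID
    cookie["uid"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
    cookie["uid"]["path"] = "/"
    return cookie.output(header="Set-Cookie:")

def read_visitor_id(cookie_header):
    """Server side: recognise a returning browser from its Cookie header."""
    cookie = SimpleCookie()
    cookie.load(cookie_header)
    return cookie["uid"].value if "uid" in cookie else None

# A browser that stores the Set-Cookie value and replays it on the next
# request is recognisable as the same browser -- the basis of
# cookie-based interest based advertising.
header = issue_tracking_cookie()
print(header)
```

The limitation the FTC is now focused on follows directly from this design: the identifier lives in one browser on one device, so it cannot, by itself, follow a consumer from laptop to phone to tablet.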

As consumers’ behavior has shifted, however, traditional cookie-based technologies are becoming less effective. Most consumers now access the Internet through apps on various platforms, in addition to web browsers, and they tend to use different devices throughout the day. This presents challenges for advertisers, publishers and others who want a complete picture of how individual consumers interact with their websites, services, and advertisements over time—as well as for those who want to know where and how they can reach such consumers. In response, companies have developed various solutions for identifying the same consumer across devices. One approach, for example, is to use “deterministic” methods that link the consumer’s devices to a single account as the consumer logs into websites and services on different devices. Another is through “probabilistic” methods that infer links among devices that share similar attributes, such as location derived from IP address. In some cases, companies may combine multiple techniques for greater accuracy.
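To make the deterministic/probabilistic distinction concrete, here is a minimal sketch of each approach using toy data; real cross-device graphs rely on far richer signals and statistical scoring than this, and all names below are invented for illustration.

```python
# Illustrative sketch (not any vendor's actual algorithm) of the two
# device-linking approaches. Device IDs and IPs are made-up examples.

def deterministic_links(login_events):
    """Link devices observed logging into the same account.
    login_events: list of (account_id, device_id) pairs."""
    accounts = {}
    for account_id, device_id in login_events:
        accounts.setdefault(account_id, set()).add(device_id)
    return accounts

def probabilistic_links(observations):
    """Infer -- not prove -- that devices sharing an attribute (here,
    an IP address) belong to the same consumer.
    observations: list of (device_id, ip_address) pairs."""
    by_ip = {}
    for device_id, ip in observations:
        by_ip.setdefault(ip, set()).add(device_id)
    # Any IP seen from multiple devices yields an inferred device group.
    return [devices for devices in by_ip.values() if len(devices) > 1]

logins = [("alice", "phone-1"), ("alice", "laptop-1"), ("bob", "tv-1")]
sightings = [("phone-1", "203.0.113.5"), ("laptop-1", "203.0.113.5"),
             ("tv-1", "198.51.100.9")]
print(deterministic_links(logins))     # alice's two devices are grouped
print(probabilistic_links(sightings))  # one group inferred via shared IP
```

The privacy distinction the FTC workshop is likely to probe falls out of the code: the deterministic method requires an authenticated relationship the consumer knowingly entered, while the probabilistic method links devices without any affirmative act by the consumer at all.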

In its announcement, the FTC explained that these new practices may raise privacy issues if consumers are not provided with adequate notice and control—and the workshop will address, among other topics, how companies can make their tracking more transparent and give consumers greater control over it. If history is a guide, the FTC will likely publish a staff report some months after the workshop, to highlight the privacy issues it sees with cross-device tracking and to offer industry guidance on addressing them.

The FTC’s announcement is a natural extension of its recent workshops on mobile privacy disclosures, the Internet of Things, and mobile device tracking. It also follows recent news from the Digital Advertising Alliance (DAA) that it has launched tools to provide in-app notice and choice to consumers about IBA practices and that it expects enforcement of the DAA Self-Regulatory Principles in the mobile environment to begin this summer.

Five Vital Questions on the Implications of UK Law on Social Media

Posted in Employment Law, Online Promotions, Privacy

Chevy Kelly, a partner in the UK-based Social Media Leadership Forum, recently sat down with Socially Aware’s own Sue McLean, a Social Media Leadership Forum member, to discuss the legal implications of UK companies’ use of social media as part of their marketing strategies.

Chevy Kelly: In your opinion, what are the top three legal risks that organizations in the United Kingdom face when engaging in social media?

Sue McLean: Compliance with relevant advertising and marketing rules is a key priority. All relevant rules, whether the CAP Code, unfair trading regulations or the FCA’s financial promotion rules, are concerned with organizations treating the customer fairly and being transparent. Companies will be experienced with applicable rules in terms of traditional media but, of course, social media brings its own challenges, including space/character limitations and the immediacy of social media, which bypasses the review and approval protocols built into “old media” usage.

Data protection is also a key challenge. Whether you’re collecting personal information from customers via your social media channels, mining data from social media platforms or carrying out Big Data analytics, you need to ensure that you comply with relevant privacy laws. If you’re a global business, unfortunately that means a myriad of different laws. It’s not just a question of compliance. Showing that you take customers’ data seriously will help build trust; it may even help give you a competitive advantage.

Lastly, companies need to continue to focus on social media policies and the education and training of employees. Given the rate of change, companies really need to regularly review their policies and practices. New platforms can trigger new issues, as we have seen with instant messaging, as well as visual, anonymous, self-deleting platforms. Get social media right and employees can be fantastic brand ambassadors; but get it wrong and their activity could result in damage to your reputation and potentially legal or regulatory action.

CK: Are UK lawmakers able to keep up with the rate of change and disruption in the digital era and how are they coping to legislate for every scenario?

SM: No. Given the rate of technological change we have seen over the past decade and are continuing to see (whether it’s social media, Big Data, the Internet of Things, drones, etc.), the law is always playing catch-up; it’s virtually impossible for the lawmakers to keep up.

Also, it often takes so long to bring in a new law, that by the time it’s adopted it may be out of date. By way of example, the long-awaited Data Protection Regulation was proposed back in 2012 to reflect technological changes, including social media—but is still being debated in Europe and, even if it is finalized this year, there will be a transition period of two years before the law applies.

But it’s not always a case of bringing in new laws. Often it’s about interpreting how existing laws can apply to new platforms. That’s certainly the approach the FCA has taken (at least up until now) with respect to the use of social media by financial organizations: the approach that their rules are media neutral and apply to social media in the same way as they apply to traditional media. It’s also the approach the government has taken to trolling and other malicious behavior via social media—that the framework of laws we have is fit for purpose in this digital age (even if those laws were designed in a world before social media, e.g., to apply to poison-pen letters).

And, of course, while laws are inherently national, social media is a global phenomenon. Unless laws are very closely harmonized (which they are not), social media users face uncertainty because of different approaches to law and regulation in the key countries.

CK: Would you say that large organizations are taking the legal risks surrounding social media as seriously as other traditional communications channels?

SM: I’m not sure it’s a case of not taking the legal risks of social media seriously. I think it’s more a case of organizations being less experienced with social media generally, and that includes legal and compliance departments. If social media is being run out of a marketing/communications team then they will be very experienced with the legal risks of traditional media. But social media triggers new, different types of risk and both the marketing/communications team and the legal and compliance teams are trying to figure out how to handle those risks.

And, of course, not all social media platforms are the same, and we are getting new platforms all the time. Companies may have become just about comfortable with Facebook and Twitter, but now they have to deal with, say, Pinterest, Instagram, Snapchat. And that’s just in the West; if you are a global organization, it’s likely that you have to deal with a variety of platforms across the different regions.

Of course, it’s not just a question of using social media to promote your business and interact with customers. If you’ve implemented an enterprise social media platform for your employees, that throws up a whole host of other issues.

CK: If you were to reference an example to give a wake-up call to an organization that may be laid-back in their attitude to social media governance, what would it be?

SM: There are a lot of examples I can point to where companies’ social media activity has ended up making headlines for all the wrong reasons. For example, the HMV case, where the company didn’t maintain sufficient control of its Twitter account and employees managed to send a series of angry tweets before the company regained control. In fact, I expect that a lot of companies still don’t put enough focus on social media in the context of insolvency and crisis management. It’s not just a question of implementing proper social media governance to avoid legal sanctions. In many cases, it’s equally important to avoid the risk of damage to the company’s reputation.

CK: Have you found that having an in-depth understanding of the law actually makes organizations more risk averse, or are they more averse when they don’t know the boundaries?

SM: A number of companies have taken limited steps into social media because they think that they should be on it, but haven’t fully engaged because of a lack of understanding of social media and a fear of the potential legal risks. But legal risks must be weighed against the business benefit of engaging, and against the damage that failing to engage properly may cause. If you appreciate what the risks are, you can strike that balance; if you don’t understand the nature or level of the risks, you could be almost paralyzed into inaction. In most cases, the legal risks are not insurmountable. Companies need to exercise the same common sense, judgment and risk-balancing that they use with other media.


First-Ever Award of “Any Damages” for Fraudulent DMCA Takedowns Under Section 512(f)

Posted in Copyright

Under section 512(f) of the Digital Millennium Copyright Act (DMCA), copyright owners are liable for “any damages” stemming from knowingly false accusations of infringement that result in removal of the accused online material. Section 512(f) aims to deter abuse of the DMCA requirement that service providers process takedown requests from purported copyright owners, but such abuses remain rampant. (E.g., as reported here and here.) In fact, until the March 2, 2015, decision in Automattic Inc. v. Steiner (adopting magistrate’s earlier recommendation), no court had awarded damages under section 512(f).

The case concerned a blog by Oliver Hotham, who had contacted a group called “Straight Pride UK,” identifying himself as “a student and freelance journalist” and submitting a list of questions. Nick Steiner responded by identifying himself as the “Press Officer” for Straight Pride UK and providing a PDF file titled “Press Statement – Oliver Hotham.pdf.” The press statement laid out Straight Pride UK’s opposition to “everyone [in the UK] being forced to accept homosexuals” and stated its mission of ensuring “that heterosexuals are allowed to have a voice and speak out against being oppressed.” Hotham posted material from the press statement on his blog.

Steiner, apparently displeased with the subsequent negative attention, sent a DMCA takedown notice by email to Automattic, Inc., the blog’s host. Steiner claimed to hold copyright in the posted material and requested that Automattic remove the blog post, and Automattic complied. Hotham, however, again posted material from the Press Statement to his blog, prompting Steiner to send two more removal requests by email to Automattic. Automattic denied those requests, citing their legal insufficiency. Automattic and Hotham then filed a lawsuit under section 512(f) to recover damages related to Steiner’s misrepresentation that the blog infringed his copyright.

The court easily found that Steiner had violated section 512(f) because he “could not have reasonably believed that the Press Statement he sent to Hotham was protected under copyright.” Following the precedent of Lenz v. Universal Music Corp., the court then interpreted the statute’s specification of “any damages” to mean that damages are available, no matter how insubstantial. After requesting more detailed evidence concerning damages, the court found that Hotham and Automattic were entitled to certain types of damages.

First, based on the time he was prevented from spending on freelance articles and his expected compensation for such work, Hotham estimated the value of the time he spent on activities related to the incident, including responding to media inquiries. Hotham also requested additional damages for “lost work” due to the “significant distraction” caused by the media coverage and legal disputes. Hotham claimed a total of $960, and the court found his declaration sufficient to support that claim. But the court denied Hotham’s request for reputational harm as speculative, and rejected Hotham’s request for damages based on emotional distress and “chilled speech,” citing the lack of authority that such damages are available under section 512(f).

Automattic was likewise successful in claiming damages of $1,860, calculated based on employee salaries and a 2,000-hour year, for time spent responding to the takedown notices and related press inquiries. The court denied Automattic’s request for damages attributable to time spent by its outside public relations firm, however, because there was insufficient evidence to show how that time constituted a loss to Automattic.
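The court's time-based method is simple arithmetic: divide an employee's annual salary by a 2,000-hour working year to get an hourly rate, then multiply by the hours spent responding. The salary and hours figures below are hypothetical, since the opinion's underlying numbers are not quoted in this post; only the 2,000-hour divisor comes from the case.

```python
# Sketch of the court's salary-based damages arithmetic. The 2,000-hour
# year is from the opinion; the salary and hours are hypothetical.
HOURS_PER_YEAR = 2000

def hourly_rate(annual_salary):
    """Convert an annual salary into the court's implied hourly rate."""
    return annual_salary / HOURS_PER_YEAR

def time_based_damages(entries):
    """Sum rate * hours across employees.
    entries: list of (annual_salary, hours_spent) pairs."""
    return sum(hourly_rate(salary) * hours for salary, hours in entries)

# e.g. a hypothetical employee earning $120,000/year who spent 10 hours
# on the takedown response accounts for $600 of damages.
print(time_based_damages([(120000, 10)]))  # -> 600.0
```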

The court also awarded attorneys’ fees, which are expressly allowed by section 512(f). Based on comprehensive billing records submitted along with data indicating the average local billing rate for IP attorneys, the court granted the request for recovery at a rate of $418.50 per hour, for a total of $22,264 in fees.

The court’s analysis is instructive in multiple ways. First, as mentioned, this was the first case resulting in a damages award under section 512(f), so the opinion is likely to serve as a road map for future courts considering such damages. Potential litigants should not read this case, however, as necessarily indicative of the magnitude of damages available in section 512(f) cases. Exposure can certainly be much greater, as demonstrated in Online Policy Group v. Diebold, Inc., a case that reportedly settled for $125,000. A few factors conspired to make damages in this case minimal (a total of $25,084). Steiner’s takedown notice was obviously fraudulent, so practically no resources were expended in meeting the normally demanding burden of proof. (As other commentators have noted, that same demanding burden of proof is one reason why there are not many section 512(f) cases in the first place.) Other cases may involve more protracted conflict over takedown notices and legal threats. Steiner also never appeared in his defense and therefore defaulted, which likely greatly reduced the time and expense of the lawsuit.

This case also reinforces the most crucial strategic consideration for service providers in responding to a DMCA takedown notice: as Socially Aware has previously explained, no damages can be awarded under section 512(f) unless the notice actually prompts the removal of the accused material. Therefore, if it ultimately wants to resist a takedown notice, a service provider can only recover the expenses of doing so if it actually removes the accused material in the first place.

On the other hand, the court applied the takedown requirement loosely in its actual assessment of damages. Steiner issued three purported DMCA takedown notices, but only the first notice resulted in actual removal of accused content. Even though Hotham and Automattic could have incurred a portion of their expenses due to the final two notices, the court did not discuss whether the takedown requirement precluded any portion of their claimed damages. While this bodes well for the availability of damages in cases involving multiple takedown notices, the analysis has questionable weight on this point. Given the absence of any opposition from defendant, future defendants will have a strong argument that the court simply did not consider this nuance.