
Socially Aware Blog

The Law and Business of Social Media

Hot Off the Press: The July/August Issue of Our Socially Aware Newsletter Is Now Available

Posted in Bankruptcy, Cloud Computing, Copyright, First Amendment, FTC, Infographic, Internet of Things, IP, Livestreaming, Online Reviews, Privacy, Trademark, Wearable Computers

The latest issue of our Socially Aware newsletter is now available here.

In this issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we present a “grand unifying theory” of today’s leading technologies and the legal challenges these technologies raise; we discuss whether hashtags can be protected under trademark law; we explore the status of social media accounts in bankruptcy; we examine the growing tensions between content owners and users of livestreaming apps like Meerkat and Periscope; we highlight a recent discovery dispute involving a deactivated Facebook account; we discuss a bill before Congress that would protect consumers’ rights to post negative reviews on websites like Yelp; and we take a look at the Federal Trade Commission’s crackdown on in-store tracking activities.

All this—plus an infographic exploring the popularity of livestreaming sites Meerkat and Periscope.

Read our newsletter.

FCC Clarifies Its Interpretations of the Telephone Consumer Protection Act, Provoking Strong Objections From the Business Community

Posted in Compliance, FCC

On July 10, 2015, the Federal Communications Commission (FCC) released a 140-page Omnibus Declaratory Ruling and Order in response to more than two dozen petitions from businesses, attorneys general, and consumers seeking clarity on how the FCC interprets the Telephone Consumer Protection Act (TCPA). As noted in vigorous dissents by Commissioners Pai and O’Rielly, several of the rulings seem likely to increase TCPA litigation and raise a host of compliance issues for businesses engaged in telemarketing or other practices that involve calling or sending text messages to consumers.

Since the FCC issued the order, trade associations and companies have filed multiple petitions for review in courts of appeals challenging the order (for example, see here and here). It will thus ultimately be up to the courts of appeals to decide whether the FCC’s new interpretations of the TCPA are reasonable.

What is an “Automatic Telephone Dialing System”?

The TCPA generally prohibits certain calls to cell phones made with an Automatic Telephone Dialing System (ATDS). As defined by statute, an ATDS is “equipment which has the capacity (A) to store or produce telephone numbers to be called, using a random or sequential number generator; and (B) to dial such numbers.” In the absence of statutory or FCC guidance, some courts have construed “capacity” broadly to encompass any equipment that is capable of automatically dialing random or sequential numbers, even if it does not actually do so, or even if it must be altered to make it capable of doing so.

In light of these decisions, a number of entities asked the FCC to clarify that equipment does not qualify as an ATDS unless it has the present capacity to generate and dial random or sequential numbers.

In its ruling, the FCC found that an ATDS includes not only equipment with the present capacity to generate and dial random or sequential numbers, but also equipment with the potential capacity to do so, even if realizing that potential would require modification or additional software. An ATDS also includes equipment with the present or potential capacity to dial numbers from a database of numbers.

The FCC, however, did state that “there must be more than a theoretical potential that the equipment could be modified to satisfy the [ATDS] definition.”  Per this limitation, the FCC explicitly excluded from the definition of an ATDS a “rotary-dial phone.”

Consent of the Current Subscriber or User

The TCPA exempts from liability calls to mobile phones “made with the prior express consent of the called party.” It does not, however, define “called party” for purposes of this provision, and courts have divided over how to construe that term.

Some courts have construed the term to mean the actual subscriber to the called mobile number at the time of the call, while others have construed it to mean the intended recipient of the call. The distinction is critical because consumers often give up their mobile phone numbers and those numbers are reassigned to other people, meaning that the actual subscriber and the intended recipient may not be the same person.

Faced with lawsuits from owners of such reassigned numbers, a number of entities petitioned the FCC, asking it to clarify that calls to reassigned mobile numbers were not subject to TCPA liability where the caller was unaware of the reassignment, and to adopt the interpretation that “called party” means the intended recipient of the call.

In response to petitions seeking clarity on this issue, the FCC ruled that the “called party” for purposes of determining consent under the TCPA’s mobile phone provisions is “the subscriber, i.e., the consumer assigned the telephone number dialed and billed for the call, or the non-subscriber customary user of a telephone number included in a family or business calling plan.”

Consistent with its interpretation of “called party,” the FCC further ruled that where a wireless phone number has been reassigned, the caller must have the prior express consent of the current subscriber (or current non-subscriber customary user of the phone), not the previous subscriber. Businesses, however, may have properly obtained prior express consent from the previous wireless subscriber and will not know that the number has been reassigned. The FCC thus allows a business to make one additional call to a reassigned wireless number without incurring liability, provided the business did not know the number had been reassigned and had a reasonable basis to believe the business had the intended recipient’s consent.

Is Consent Revocable?

The TCPA is silent as to whether, or how, a called party can revoke his or her prior express consent to be called. Given that silence, one entity petitioned the FCC to request that the Commission clarify that prior consent to receive non-telemarketing calls and text messages was irrevocable or, in the alternative, set forth explicit methods of revocation. In response, the FCC ruled that consent is revocable (with regard to both telemarketing and non-telemarketing calls), and that such revocation may be made “in any manner that clearly expresses a desire not to receive further messages.” Consumers may use “any reasonable method, including orally or in writing,” to communicate that revocation and callers may not designate an exclusive means of revocation.

The “Urgent Circumstances” Exemption to Consent Requirement

Notwithstanding the FCC’s rulings regarding prior express consent, the FCC took this opportunity to create several new exemptions to that requirement with regard to certain non-marketing calls made to cellular phones. The FCC exempted the following types of calls:

  • Calls concerning “transactions and events that suggest a risk of fraud or identity theft”;
  • Calls concerning “possible breaches of the security of customers’ personal information”;
  • Calls concerning “steps consumers can take to prevent or remedy harm caused by data security breaches”;
  • Calls concerning “actions needed to arrange for receipt of pending money transfers”; and
  • Calls “for which there is exigency and that have a healthcare treatment purpose, specifically: appointment and exam confirmations and reminders, wellness checkups, hospital pre-registration instructions, pre-operative instructions, lab results, post-discharge follow-up intended to prevent readmission, prescription notifications, and home healthcare instructions.”

The FCC reasoned that all of the aforementioned types of calls involved urgent circumstances where quick, timely communication with a consumer was critical to prevent financial harm or provide health care treatment. Although prior express consent is not required, such calls are still subject to a number of limitations. First and foremost, the consumer must not be charged for the calls. Further, such calls must be limited to no more than three calls over a three-day period, must be concise (generally 1 minute or 160 characters, if sent via text message), cannot include marketing or advertising content (or financial content, in the case of healthcare calls), and must provide a mechanism for customer opt-out.

Other Consent Issues

In addition to the points above concerning consent, the FCC also ruled on a number of specific consent issues, described here in brief:

  • Provision of Phone Number to a Health Care Provider. Clarifying an earlier ruling, the FCC ruled that the “provision of a phone number to a healthcare provider constitutes prior express consent for healthcare calls subject to HIPAA by a HIPAA-covered entity and business associates acting on its behalf, as defined by HIPAA, if the covered entities and business associates are making calls within the scope of the consent given, and absent instructions to the contrary.”
  • Third-Party Consent on Behalf of Incapacitated Patients. The FCC ruled that consent to contact an incapacitated patient may be obtained from a third-party intermediary, although such consent terminates once the patient is capable of consenting on his or her own behalf.
  • Ported Phone Numbers. In response to a request for clarification, the FCC ruled that porting a telephone number from wireline service (i.e., a land line) to wireless service does not revoke prior express consent.
  • Consent Obtained Prior to the Current Rules. In response to petitions requesting relief from or clarification of the prior-express-written-consent rule that went into effect on October 16, 2013, the FCC ruled that “telemarketers should not rely on a consumer’s written consent obtained before the current rule took effect if that consent does not satisfy the current rule.”
  • Consent via Contact List. In response to a petition concerning the use of smartphone apps to initiate calls or text messages, the FCC ruled that the mere fact that a contact may appear in a user’s contact list or address book does not establish consent to receive a message from the app platform.
  • On Demand Text Offers. In response to a petition concerning so-called “on demand text offers,” the FCC ruled that such messages do not violate the TCPA as long as they (1) are requested by the consumer; (2) are a one-time message sent immediately in response to that request; and (3) contain only the requested information with no other marketing information. Under such conditions, the messages are presumed to be within the scope of the consumer’s consent.

Calls Placed by Users of Apps and Calling Platforms

The FCC also addressed a number of petitions seeking guidance as to who “makes” or “initiates” a call under the TCPA (and is thus liable for TCPA violations) in a variety of scenarios involving calls or text messages made by smartphone apps and calling platforms.

The FCC offered no clear rule, and instead held that to answer this question “we look to the totality of the facts and circumstances surrounding the placing of a particular call to determine: 1) who took the steps necessary to physically place the call; and 2) whether another person or entity was so involved in placing the call as to be deemed to have initiated it, considering the goals and purposes of the TCPA.”

The FCC noted that relevant factors could include “the extent to which a person willfully enables fraudulent spoofing of telephone numbers or assists telemarketers in blocking Caller ID” as well as “whether a person who offers a calling platform service for the use of others has knowingly allowed its client(s) to use that platform for unlawful purposes.”

Authorization of “Do Not Disturb” Technology

Finally, at the request of petitioning state attorneys general, the FCC affirmed that nothing in the Communications Act or FCC rules or orders prohibits telephone carriers or VoIP providers from implementing call-blocking technology to stop unwanted “robocalls.” The FCC explained that such carriers “may legally block calls or categories of calls at a consumer’s request if available technology identifies incoming calls as originating from a source that the technology, which the consumer has selected to provide this service, has identified.”  The FCC “strongly encourage[d]” carriers to develop such technology to assist consumers.

Status Updates: AZ’s anti-revenge-porn law scrapped; civil rights claim against blogging prosecutor dismissed; Match buys PlentyOfFish

Posted in First Amendment, Status Updates

There oughta be a law? As we’ve reported previously, states all around the country have enacted laws that criminalize the posting of revenge porn—nude photographs published without the subject’s consent, often by an ex-lover seeking retribution. To avoid running afoul of the First Amendment, these laws are typically fairly limited in scope and provide for relatively minor penalties. California’s anti-revenge-porn law, for example, categorizes posting revenge-porn as a misdemeanor, and contains several exceptions. Among other things, California’s law only applies if the poster intended to cause the victim emotional distress—a characteristic that improves the law’s chances of surviving a First Amendment challenge. Arizona’s anti-revenge porn law, in contrast, contains no such limitation and provides that violations constitute a felony. As a result, the ACLU argued that Arizona Revised Statute §13-1425 could lead to a felony conviction for posting a photograph “even if the person depicted had no expectation that the image would be kept private and suffered no harm,” such as in the case of “a photojournalist who posted images of victims of war or natural disaster.” Based on such alleged overreach, a group of Arizona booksellers, publishers, librarians and photographers filed Antigone Books v. Brnovich—a lawsuit to halt enforcement of the Arizona law. A joint final settlement between the Arizona attorney general and the plaintiffs in that case resulted in a July 2015 federal court order that does, in fact, scrap §13-1425.  In her discussion of the settlement, an ACLU staff attorney said that the organization nevertheless views revenge porn as a serious concern. She lauded social media platforms’ and online search companies’ decisions to heed revenge-porn victims’ take-down requests as victories “achieved without a new criminal law and without a new inroad against the First Amendment.”

Blogs of war. The U.S. Court of Appeals for the Ninth Circuit affirmed the dismissal of a civil rights claim brought by a woman who was the subject of negative articles and social media updates written by a Los Angeles county prosecutor and posted to the prosecutor’s personal blog and Twitter account. According to the opinion, the prosecutor, Patrick Frey, posted to his blog eight unfavorable articles about the plaintiff, Nadia Naffe, and “tweeted several dozen threatening and harassing statements” about her. The blog posts and tweets called Naffe, among other things, a “smear artist” and a “liar,” and accused Naffe of having filed frivolous lawsuits against James O’Keefe, a friend of Frey’s with whom Naffe had had a falling out. The Ninth Circuit held that Frey had not violated Naffe’s First Amendment constitutional right to petition the government for redress of grievances pursuant to 42 U.S.C. § 1983 because the posts and tweets weren’t related to his work as a county prosecutor. The court noted, among other things, the fact that Frey’s disparaging comments were sent from Frey’s personal Twitter account and blog, both of which specify that they reflect Frey’s “personal opinions” and that they do not contain statements made in an “official capacity.” The Ninth Circuit also noted that the posts and tweets were time stamped outside of Naffe’s office hours.

A good catch. While the options for online dating hopefuls continue to multiply—there are now dating services specifically for farmers, people living gluten-free lifestyles and fire-fighter aficionados—it seems many of the most popular personals sites are merging under the same umbrella. IAC/InterActiveCorp’s Match Group subsidiary, the owner of Match.com, Tinder and OKCupid, among others, just snapped up PlentyOfFish for $575 million. PlentyOfFish, a British Columbia-based dating site that’s free to use but offers upgrades for a fee, currently has 3.6 million active daily users. Its founder and creator, 36-year-old Markus Frind, built the site without any venture capital funding and still owns 100% of it. IAC, meanwhile, owned 20% of the online dating market even before the PlentyOfFish acquisition, which is expected to close in the fourth quarter.

Mobile App Legal Terms & Conditions: Six Key Considerations

Posted in Mobile, Terms of Use

 

For corporations, the mobile app is today’s website.

Back in the late 1990s, no self-respecting company, no matter how stodgy and old-fashioned, wanted to be without a website.

Today, the same is true with mobile apps. It doesn’t matter what industry a company is in—it needs to have an app that customers and potential customers can download to their smartphones. Even big, tradition-bound law firms are developing and distributing mobile apps, for crying out loud.

Here at Socially Aware, we have been known to spend our free time downloading and examining mobile apps owned by companies that are new to the software distribution business (after all, a mobile app is just that — distributed software). In doing so, we’ve noticed a number of common missteps by app distributors in connection with the legal terms—or End User License Agreements (EULAs)—governing such apps. Accordingly, here is our list of key issues to address in adopting a EULA for a mobile app.

1.  Adopt Your Own EULA. A EULA is an important part of any company’s strategy to mitigate risks and protect its intellectual property in connection with its mobile apps. Hardly any company would release desktop software without a EULA, and mobile apps—which, as noted above, are software products—warrant the same protection. While Apple, Google and Amazon each provide a “default” EULA to govern mobile apps downloaded from their respective app stores, they also permit developers to adopt their own custom EULAs instead—subject to a few caveats, as mentioned in our fifth item below. Because the default EULAs can be quite limited, and can’t possibly address the unique issues that any particular app is likely to raise, a company should ideally adopt its own EULA to best protect its interests in its apps.

2.  Is Your EULA Binding? The best EULA is a binding EULA. U.S. courts have consistently made clear that a “clickwrap”-style agreement has the best chance of being enforceable, although whether an agreement is enforceable in any particular case may depend on how the agreement is actually presented to users, and how users indicate their assent. Companies that adopt customized EULAs have several opportunities to present those EULAs to users. In most app stores, for example, a dedicated link called “License Agreement” lets companies link to their EULAs. In addition, companies should ideally include language in their apps’ “Description” field making clear to users that, by downloading and using the app, they are accepting the EULA. But it’s still possible in most app stores for users to purchase and download an app without seeing the EULA; accordingly, for apps that may present significant risk issues—such as banking or e-commerce apps—the most conservative approach is to require an affirmative “click-accept” of the EULA when the app is first opened by a user on his or her device.
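By way of illustration, here is a rough sketch of what such a first-launch “click-accept” gate might look like in SwiftUI. It is a sketch only: the EULAGate name, the stored key and the version string are our own placeholder assumptions, not requirements imposed by Apple, Google or Amazon, and any real implementation should be tailored to the particular app and reviewed by counsel.

import SwiftUI

// Illustrative sketch only. The storage key, version string and view names
// are placeholders invented for this example.
struct EULAGate<Content: View>: View {
    // Bump this value whenever the EULA changes so users must re-accept.
    private let currentEULAVersion = "2015-08"

    @AppStorage("acceptedEULAVersion") private var acceptedEULAVersion = ""

    let eulaText: String
    let content: () -> Content

    var body: some View {
        Group {
            if acceptedEULAVersion == currentEULAVersion {
                content()  // Terms already accepted; show the app itself.
            } else {
                VStack(spacing: 16) {
                    ScrollView { Text(eulaText).padding() }  // Full terms, scrollable.
                    Button("I Accept the License Agreement") {
                        // Record the user's affirmative act of assent, then unlock the app.
                        acceptedEULAVersion = currentEULAVersion
                    }
                    .padding(.bottom)
                }
            }
        }
    }
}

// Hypothetical usage: EULAGate(eulaText: bundledEULA()) { MainAppView() }

Storing the accepted version (rather than a simple yes/no flag) also makes it easy to require re-acceptance whenever the EULA is materially updated.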

3.  Which Parties Will Your EULA Bind? If an app is targeted toward businesses, or toward individuals who will use the app in their business capacities, then the EULA should ideally bind both the individual who uses the app and the individual’s employer. Similarly, if minors will be permitted to use the app, then the EULA should require that a parent or guardian consent on the minor’s behalf. (Of course, if minors under 13 will be allowed to use the app, or if the app will be directed toward such minors, you will need to address Children’s Online Privacy Protection Act issues in connection with the app.)

4.  Where Will Your EULA Reside? As a technical matter, a EULA can reside in one of two places: it can be “hard-coded” into the app itself, so that the EULA is downloaded together with the app, or it can reside on a separate web server maintained by the developer. The former approach ensures that the EULA is always accessible to the user, even if the user’s device is offline. Some users may decide not to download the latest updates, however, and, as a result, those users may not be bound by the updated terms. In contrast, under the latter approach, companies can update their EULAs at any time by simply updating the document on their own web servers, although the EULAs won’t be available to the user offline. Companies should think about which approach works best for their specific apps and their associated risk issues.
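To make the trade-off concrete, here is a rough sketch, again illustrative only, of how an app might combine the two approaches: fetch the latest EULA from the company’s own web server when a connection is available, and fall back to the copy bundled with the app when it is not. The URL and file name below are placeholders of our own invention, not real endpoints.

import Foundation

// Illustrative sketch only. The URL and the bundled file name are placeholders.
enum EULASource {
    static let remoteURL = URL(string: "https://www.example.com/legal/app-eula.txt")!

    // Try the server first so users see the most recently updated terms;
    // fall back to the copy shipped inside the app if the device is offline.
    static func currentEULA(completion: @escaping (String) -> Void) {
        URLSession.shared.dataTask(with: remoteURL) { data, _, _ in
            if let data = data, let remoteText = String(data: data, encoding: .utf8) {
                completion(remoteText)
            } else {
                completion(bundledEULA())
            }
        }.resume()
    }

    // The hard-coded copy: always available, but only as current as the last app update.
    static func bundledEULA() -> String {
        guard let url = Bundle.main.url(forResource: "EULA", withExtension: "txt"),
              let text = try? String(contentsOf: url, encoding: .utf8) else {
            return "License agreement unavailable."
        }
        return text
    }
}

The fallback also ensures that a user is never shown a blank screen in place of the terms when the device is offline.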

5.  Does Your EULA Incorporate Terms Required by Third Parties? Some app stores, such as the Apple App Store, understandably require that, if a company adopts a custom EULA for its app, such customized EULA must include terms protecting the applicable app store owner. (Other app stores, such as the Amazon Appstore for Android, place such protective terms in their own user-facing agreements, and require developers to acknowledge that such protective terms will govern.) Other third-party terms may also apply, depending on any third-party functionalities or open-source code incorporated into the app. For example, if a company integrates Google Maps into its app, Google requires the integrating company to pass certain terms on to its end users. The licensors of any open-source code used by an app may also require the company to include certain disclaimers, attributions, usage restrictions or other terms in the EULA.

6.  Is Your EULA Clearly Written and Reasonable? Traditionally, EULAs have been overlong, filled with impenetrable legal jargon and, frankly, hard to read, sometimes even for lawyers. An emerging best practice, especially for B2C apps, is to draft app EULAs that are understandable to consumers, and to minimize unnecessary legalisms such as “null and void,” “including without limitation” and the reflexive prefacing of sentences with “we hereby reserve the right” or “you hereby acknowledge and agree.” Moreover, because space on a mobile device screen can be limited, thought should be given to eliminating repetition in app EULAs wherever possible. Of course, even if a EULA is written in plain English, extremely one-sided provisions—such as a disclaimer of direct damages (rather than a cap on such damages)—may raise concerns with a court in any subsequent litigation involving the EULA. At the same time, the EULA is ultimately a legal document, and an app developer will want to make sure that any slimmed-down or simplified EULA still provides adequate protection for the developer.

Employer Access to Employee Social Media: Applicant Screening, “Friend” Requests and Workplace Investigations

Posted in Compliance, Employment Law, Privacy

A recent survey of hiring managers and human resource professionals reports that more than 43 percent of employers use social networking sites to research job candidates. This interest in social networking does not end when the candidate is hired: To the contrary, companies are seeking to leverage the personal social media networks of their existing employees, including for their own marketing purposes, as well as to inspect personal social media in workplace investigations. As employer social media practices continue to evolve, individuals and privacy advocacy groups have grown increasingly concerned about employers intruding upon applicants’ or employees’ privacy by viewing restricted-access social media accounts.

Although federal legislation has been proposed several times (see here and here), efforts to enact a national social media privacy law have not been successful. In the absence of such legislation, states are actively seeking to address employee social media privacy issues. In 2014, six states passed social media laws, and, since the beginning of 2015, four more states have passed or expanded their social media laws. Similar legislation is pending in at least eight more states. In total, 22 states have now passed special laws restricting employer access to personal social media accounts of applicants and employees (“state social media laws”).

These state social media laws restrict an employer’s ability to access personal social media accounts of applicants or employees, to ask an employee to “friend” a supervisor or other employer representative and to inspect employees’ personal social media. The state social media laws also have broader implications for common practices such as applicant screening and workplace investigations, as discussed below.

Key Restrictions Under State Social Media Laws

As a general matter, these state social media laws bar employers from requiring or even “requesting” that an applicant or employee (21 of the 22 state laws protect both current employees and applicants; New Mexico’s law protects only applicants) disclose the user name or password to his or her personal social media account. Some of these state laws also impose other express restrictions, such as prohibiting an employer from requiring or requesting that an applicant or employee:

  • add an employee, supervisor or administrator to the friends or contacts list of his or her personal social media account;
  • change privacy settings of his or her personal social media account;
  • disclose information that allows access to or observation of his or her personal social media account, or otherwise grant access in any manner to his or her personal social media account;
  • access personal social media in the employer’s presence, or otherwise allow observation of the personal social media account; or
  • divulge personal social media.

These laws also prohibit an employer from retaliating against, disciplining or discharging an employee, or refusing to hire an applicant, for failing to comply with a prohibited requirement or request.

The scope of these laws varies considerably from state to state. A few states, like New Mexico, cover only traditional social networking accounts, while most other state laws broadly apply to any electronic medium or service that allows users to create, share or view user-generated content, including videos, photographs, blogs, podcasts, messages, emails and website profiles generally. Some of these laws only prohibit employers from seeking passwords or other login credentials to the personal social media account, while other states impose the broader restrictions described above. For example, Arkansas, Colorado, Oregon and Washington prohibit an employer from requesting that an employee allow the employer access to his or her personal social media accounts; and California, Connecticut, Oklahoma, Michigan, Rhode Island, Tennessee and Washington prohibit an employer from requesting an employee to access his or her personal account in the presence of the employer. Certain states prohibit an employer from requiring an employee to change his or her privacy settings to allow the employer access to his or her private social media accounts, although it is possible that such a restriction might be inferred from at least some of the other state laws as well. Even more confusing are the inconsistencies across state laws with respect to exceptions for workplace investigations, as discussed below.

While state laws differ significantly, the general message is clear: Employers must evaluate their current practices and policies to ensure compliance with these laws.

What Every Employer Should Know About State Social Media Laws

A.        Applicant Screening

In general, these state social media laws do not limit an employer’s ability to review public information, such as information that may be available to the general public on an applicant’s social media pages. Instead, these laws limit an employer’s attempts to gain access to the individual’s social media accounts by means such as requesting login credentials, privacy setting changes or permission to view the accounts.

Additionally, most of these laws explicitly state that they do not prohibit viewing information about an applicant that is available to the public; for example, the Michigan law “does not prohibit or restrict an employer from viewing, accessing, or utilizing information about an employee or applicant that can be obtained without any required access information or that is available in the public domain.”

All of these state social media laws, however, prohibit employers from seeking access to the nonpublic social media pages of applicants. In practice, this means that employers should avoid asking applicants about the existence of personal social media accounts and requesting, or even suggesting, that an applicant friend the employer or a third party, including a company that provides applicant background investigations.

B.        Friend Requests       

Certain laws expressly restrict an employer’s ability to encourage an employee to friend or add anyone to the list of contacts for his or her personal social media account. This may include, but is not limited to, the employer, its agents, supervisors or other employees.

For example, Colorado’s social media legislation states that an employer shall not “compel an employee or applicant to add anyone, including the employer or his or her agent, to the employee’s or applicant’s list of contacts associated with a social media account,” and many other laws contain this type of prohibition against requesting access via what may be intended as a harmless friend request.

Although these laws do not prohibit a subordinate from friending a manager or supervisor, employers should exercise care not to require, or even request or encourage, employees to friend supervisors or other company representatives.  Employers in states without social media laws or states with laws that allow “friending” should nevertheless proceed with caution when requesting access to an employee’s or applicant’s personal social media pages and think twice about “friending” or “following” employees. If an employer learns about an employee’s legally protected characteristic (such as religion, pregnancy, medical condition or family medical history) or legally protected activity (such as political or labor union activity), the employer may face greater exposure to discrimination claims if it later takes adverse action against the employee.

These restrictions may be particularly significant for employers seeking to leverage employees’ personal social media connections for work-related marketing or business development purposes. Employers should be aware that, even in states without an express restriction on friend requests, a law that generally prohibits an employer from attempting to access an employee’s or applicant’s social media account may effectively limit an employer’s ability to require or encourage employees to friend people.

C.     Account Creation and Advertising

Recently, Oregon amended its existing social media law to prohibit categories of employer conduct not previously addressed in any other state social media law. Under the new amendment (which takes effect on January 1, 2016), employers are prohibited from requiring or requesting that an applicant or employee establish or maintain a personal social media account or that an applicant or employee authorize the employer to advertise on his or her personal social media account. Notably, the Virginia law, which went into effect July 1, 2015, implies that an employer may be permitted to engage in the type of conduct the Oregon law seeks to prevent. The Virginia law explicitly excludes from covered information an account set up by the employee at the request of the employer.

D.     Investigations

One of the most challenging areas under state social media laws involves an employer’s ability to inspect or gain access to employees’ personal social media in connection with workplace investigations. An employer may wish to access an employee’s social media account, for example, if an employee complains of harassment or threats made by another employee on social media or if the employer receives a report that an employee is posting proprietary or confidential information or otherwise violating company policy. Some of the state social media laws provide at least limited exceptions for workplace investigations, while others do not.

No express exception for investigations: The Illinois and Nevada social media laws do not provide any express exception for workplace investigations that might require access to an employee’s personal social media accounts. This suggests that an employer’s investigation of potential misconduct or legal violations may not justify requesting or requiring an employee to disclose his or her social media login credentials. (We note that, perhaps in an effort to broaden employer investigation efforts and clarify an existing ambiguity, Illinois amended its law so that, where the access sought by the employer relates to a professional account, an employer is not restricted from complying with a duty to screen employees or applicants, or to monitor or retain employee communications as required by law.)

Limited exception for investigations of legal violations:  California’s social media law provides that it does not limit an employer’s ability to request that an employee divulge personal social media in connection with an investigation of employee violations of applicable laws. However, this exception does not appear to extend to other prohibited activities, such as asking an employee to disclose his or her user name and password for a personal social media account. Other states provide exceptions only for investigations of specific types of legal violations. For example, the Colorado and Maryland social media laws only provide an exception for investigating violations of securities laws or potential misappropriation of proprietary information.

Limited exception for misconduct investigations: Some social media laws extend the exception beyond investigations of legal violations to investigations of alleged misconduct. These states include California, Oregon and Washington. In general, these laws allow an employer to ask an employee to divulge content from a personal social media account, but still do not allow the employer to request the employee’s login credentials. In contrast, some states, including Arkansas, Colorado, Maryland and Michigan, permit an employer to request an employee’s social media login credentials to investigate workplace misconduct.

Given these differences, employers should be mindful of the broad range of investigative exceptions in state social media laws. Before initiating an investigation that may benefit from or require access to an employee’s personal social media, an employer should first consider the restrictions imposed by the applicable state law and the scope of any investigatory exception offered by that law.

E.     Best Practices

Given the inconsistencies among the different laws, it is challenging for multistate employers to manage compliance with all state social media laws. Even if it is not the employer’s practice to seek access to its employees’ or applicants’ private social media pages, there are less obvious components of the laws that will affect almost every employer, and employers should consider the following measures.

Review hiring practices for compliance with social media laws: Employers should ensure that all employees involved in the hiring process are aware of the restrictions imposed by these state social media laws. For example, recruiters and hiring managers should refrain from inquiring about an applicant’s personal social media pages or requesting access to such pages. While these state social media laws do not prohibit employers from accessing publicly available personal social media sites, employers will also want to evaluate whether this practice is advisable, given the risk of stumbling across legally protected information that cannot be used in employment decisions.

Implement social media guidelines: Employers should implement social media guidelines to mitigate potential risks posed by employee social media postings, being mindful of restrictions arising under the National Labor Relations Act and other federal and state laws. Employers also should ensure that their social media guidelines do not run afoul of these state social media laws.

Educate and train personnel: Personnel involved in internal investigations, such as human resources and internal audit personnel, need to be aware of the growing restrictions on employer access to employee personal social media accounts. Prior to seeking access to an employee’s personal social media account, or content from such an account, the internal investigators should check any applicable restrictions. In general, given the general trends in these laws, employers should avoid requesting login credentials to employees’ personal social media accounts, even in the investigation context, unless they have first consulted legal counsel.

“Never Say Never”: Lessons From RadioShack’s Sale of Customer Information

Posted in Bankruptcy, Privacy

When a bankrupt company’s most valuable assets include consumer information, a tension arises between bankruptcy policy aimed at maximizing asset value, on the one hand, and privacy laws designed to protect consumers’ personal information, on the other. Such tension played out recently in the Chapter 11 bankruptcy case of RadioShack, where the bankrupt retailer’s attempt to sell customer data drew objections from 38 state attorneys general, the Federal Trade Commission (FTC) and others who claimed the sale would violate RadioShack’s stated privacy policy of never selling customers’ personal information. These issues are not new.

Consumer Data in the Dot-Com Era—Toysmart

Back in the dot-com era, online toy retailer Toysmart sought bankruptcy court approval to sell customer data. Toysmart’s privacy policy expressly told customers that they could “rest assured” that their information would “never be shared with a third party.” Nevertheless, once it ceased operations and entered bankruptcy in May 2000, Toysmart solicited bids for the sale of such personal information, including its customers’ names, addresses, billing information, shopping preferences and family profile information. The FTC opposed the sale, arguing that the breaking of the promise to never share information would be deceptive, in violation of Section 5 of the FTC Act.

The FTC and Toysmart reached a deal that, along with other restrictions, would limit the sale of customer data to a family-friendly company that would agree to be bound by Toysmart’s privacy policy. Even so, 46 states objected to such a resolution, arguing that any sale of customer data that did not provide an opt-out for customers would violate Toysmart’s privacy policy and, as such, would constitute an unfair or deceptive business practice, in violation of state “little FTC Acts.” Ultimately, Toysmart withdrew the customer information from the auction and destroyed it.

RadioShack—Following the Toysmart Example

RadioShack’s sale process replayed several of the Toysmart themes and similarly met a negotiated—not judicially determined—resolution. Following the sale of its 1,743 store leases this spring to General Wireless, an affiliate of hedge fund Standard General, RadioShack initiated an auction process for the sale of its intellectual property, including the RadioShack name and a collection of customer information.

During the course of its long tenure as a consumer electronics retailer, RadioShack collected names, email addresses, physical addresses, telephone numbers, credit card numbers and purchase history data for over 117 million customers. All such information had been collected under a privacy policy that promised RadioShack would “not sell or rent your personally identifiable information to anyone at any time.” Indeed, in a privacy policy on display in RadioShack’s retail stores, the company noted: “We pride ourselves on not selling our private mailing list.”

RadioShack’s customer information, however, is a valuable asset. Accordingly, as part of the bankruptcy process, the RadioShack trustee sought court approval to sell a subset of such information in its database, including 67 million complete customer names and physical addresses, and around 8.3 million email addresses, to General Wireless for $26.2 million.

The proposed sale drew objections from state attorneys general, the FTC and companies such as AT&T and Verizon. Fundamentally, the FTC and state objectors argued that the sale would contradict RadioShack’s privacy policy and, as such, would constitute a deceptive business practice. AT&T, Verizon and others asserted that the sale would violate the agreements signed between RadioShack and each of the objectors, as well as RadioShack’s own privacy policy.

The debtor and various objectors mediated these issues and ultimately reached a deal modeled on the Toysmart approach. In the end, Bankruptcy Court Judge Brendan L. Shannon approved the parties’ settlement, authorizing the sale subject to certain conditions, including that General Wireless must:

  • Send emails to all included email addresses notifying customers of the purchase and offering them seven days to opt out of the transfer of their personal information;
  • Mail those customers for whom it has a physical address, but no email address, a notification that it has purchased the assets of RadioShack, offering such customers 30 days to opt out of the transfer of their information;
  • Provide a notice on the RadioShack website, with both an online opt-out option and a toll-free telephone number to call to exercise the option; and
  • Agree to be bound by the existing RadioShack privacy policy with regard to purchased customer information.

Furthermore, the deal prohibits RadioShack from transferring sensitive information, such as debit or credit card numbers, dates of birth, Social Security numbers or other government-issued identification numbers.

Commentary

In the 15 years since the Toysmart brouhaha, very little legal guidance has developed to define the contours of pre-bankruptcy privacy promises in bankruptcy sales. As in the Toysmart situation, the privacy-related objections raised to the RadioShack sale were consensually resolved, leaving parties without a judicial resolution to these issues. Nevertheless, certain themes are emerging.

First, by virtue of settling, the FTC and states seem to recognize that consumer privacy rights are not absolute—they must be balanced with the best interests of a debtor’s estate and creditors in bankruptcy.

Second, a theme in both settlements is honoring consumers’ original expectations—that is, requiring the purchaser to adopt the privacy policy in place at the time the information was collected.

Third, the ability of customers to opt out of the transfer of their personal information seems to be key. This was a sticking point in the Toysmart matter, leading to continued controversy even after the resolution with the FTC.

More broadly, however, perhaps the main lesson from RadioShack is this: Privacy policies ideally should anticipate bankruptcy scenarios and alert consumers that their information may be sold in bankruptcy or other divestitures. Such a direct acknowledgement would serve consumers by advising them of the possible fate of their personal information, thereby allowing them to make an informed decision about what information to volunteer. It would also serve the eventual debtor and its creditors, simplifying the sale process and maximizing the sale value of collected information.

 

 

Status Updates: Driving while social; a small country’s Facebook ban; and Pinterest’s new purchasing feature

Posted in E-Commerce, Status Updates

Driven to distraction. A frightening number of people are interacting with social media when they should be watching the road. As part of a public service campaign to stop distracted driving, the phone company AT&T recently conducted a poll of 2,067 U.S. residents from 16 to 65 years old who own a smartphone and drive at least once a day. While texting is still the most prevalent preoccupation—61% of the respondents reported doing it—a whopping 27% of the drivers polled admitted to checking their Facebook newsfeeds while they’re behind the wheel, and 14% said they are spending time on Twitter. Ten percent of the motorists polled are using Instagram while they drive, and the same number of drivers admitted to letting the newsworthy and popular vanishing messaging app Snapchat divert their attention from the road. Perhaps most shocking is the fact that one in ten of the respondents admitted to video chatting behind the wheel. Laws attempting to stop this kind of recklessness have been enacted; 46 states now prohibit texting while driving, Oklahoma most recently. New York Times writer Matt Richtel, who has covered the “texting while driving” issue for years, theorizes that people continue to look at their electronic devices while they drive out of habit, cockiness (we overestimate our own ability to multitask while criticizing the ability of others to do the same) and strong social and marketing pressure to stay connected.

Social outcast. The government of one of the world’s smallest countries is imposing limitations on social media use, too, but the restriction doesn’t just apply to people behind the wheel of a car, and it’s not being imposed in the interest of driver safety. Facebook was recently banned in Nauru, an island in the Central Pacific with approximately 10,000 inhabitants. Despite the social media giant’s policy of removing most nudity-containing content, the Nauruan government maintains that the Facebook ban is necessary to stop “criminals and sexual perverts.” Nauruan parliament opposition members, meanwhile, believe the ban is an act of dictatorship intended to stop the platform’s use as a vehicle for political protest. At least six other countries around the world block social media, to varying degrees: China, Iran, North Korea, Pakistan, Turkey and Vietnam.

Pin money. The social media site Pinterest, a 5-year-old Internet powerhouse with an $11 billion valuation, is implementing another feature intended to bring in some cash: Buyable Pins. Soon, the site’s users—who, according to demographics reports, are often affluent women—will be able to purchase the items they “pin” or bookmark on the Pinterest platform from Pinterest’s many new partners, which, so far, include Macy’s and Nordstrom. The feature will streamline the online purchasing process by allowing Pinterest users to buy pinned items from several stores without having to leave the Pinterest site or enter their payment information more than once (unless it’s an especially big purchase). Buyable Pins will presumably also facilitate purchase-decision-making by allowing users to sort the items they’ve pinned by price. For example, if a user is interested in buying a new briefcase, he can pin all the ones that caught his eye while surfing the Web onto one Pinterest board, where he can compare pictures of the briefcases side-by-side and sort them from least- to most-expensive. The plan seems worthy of a company that recently raised $186 million in funding and is rumored to be revving up for an IPO.

 

Status Updates: Artist sues Pinterest; texting for teens without data plans; quit smoking with social media

Posted in Copyright, IP, Mobile, Status Updates

Pin pain. As a primarily visual social media platform whose self-described purpose is to help users bookmark and save “good stuff you find anywhere around the web,” Pinterest has raised copyright infringement questions since it became explosively popular in 2012. In many cases, copyright owners are happy to have their images “pinned” on the site, particularly where the copyright owner posted the image for advertising purposes in the first place. Pinned images appear in the Pinterest feeds of the pinner’s followers, and can drive traffic to the copyright owners’ own websites. But it doesn’t always work that way, says Christopher Boffoli, a fine art photographer who is suing Pinterest. “Much of my work is pinned to Pinterest without attribution, which throws out the window the common trope about this kind of use gaining me ‘exposure’,” says Boffoli, whose images have been pinned more than 5,000 times. Boffoli further argues that, contrary to the platform’s promise to “respond expeditiously to claims of copyright infringement,” the site is still riddled with his copyrighted images. The photography blog PetaPixel reports that a trial date for Boffoli’s suit is set for early 2016.

Connecting the data deprived. A new messaging app called Jott is targeting text-message-loving teenagers who have iOS or Android mobile devices, but no data plans. And, judging by the app’s success—Jott is attracting up to 20,000 new users a day—the ranks of the data deprived are legion. Launched in March and already boasting half a million users, Jott allows users who have the app installed on their devices to text each other within 100 feet. Jott’s founder, Jared Allgood, started testing the app at a few schools and it took off, serving as an oasis for tech-deprived students in some of the many U.S. public schools that are practically mobile data deserts. The app eliminates the need for cell towers and Wi-Fi routers by turning users’ individual devices into “de facto cell towers,” Forbes explains, using a technology called mesh networking. Jott isn’t the first messaging app to circumvent the need for an Internet connection or data plan—FireChat, for example, does that too and has been used at events like Burning Man. But Jott is unique in that it allows users to send direct messages to individuals rather than to whole groups.

Only a social smoker? Here’s some news for those naysayers who are convinced that no good can come of society’s obsession with social media: According to a study from the University of Waterloo, 32% of the 19- to 29-year-old Canadian smokers polled were able to stop lighting up for 30 days after three months of using an app or online tool meant to help them kick the habit. Experts predicted the potential helpfulness of these smoking-cessation apps, whose features are varied. Some of the apps appeal to a smoker’s competitive side by pitting the user against fellow smokers who are also trying to quit, while others send signals to the smoker’s friends who can then intervene when the smoker approaches a potential trigger. Success, the experts say, may depend on finding an app that’s a good fit, so hopefuls should try several. The websites of both niche publications and mainstream media periodicals list and review several of the best. And, in other news, The Onion reports that smoking is fine as long as you only do it when you drink.

 

Toward a Grand Unifying Theory of Today’s Tech Trends

Posted in Cloud Computing, Internet of Things, Marketing, Privacy, Wearable Computers

As a technology law blogger and co-editor of Socially Aware, I monitor emerging developments in information technology. What’s hot in IT today? Any shortlist would have to include social media, mobile, wearable technology, the Internet of Things (IoT), cloud computing and big data.

That list is all over the map, right? Or is it? On closer inspection, these technologies are far more closely intertwined than they may appear to be at first glance.

So what’s the connection between, say, social media and the Internet of Things? Or wearable tech and cloud computing?

Here’s my theory: These technologies all reflect the ceaseless drive by businesses to collect, store and exploit ever more data about their customers. In short, these technologies are ultimately about selling more stuff to us.

With this “grand unifying theory” in mind, one sees how these seemingly disparate technologies complement one another. And the legal challenges and risks they pose become clear.

Collection of Data

Let’s start with the collection of consumer data. Of the six key trends identified above, four relate directly to such collection: social media, mobile, wearable technology and IoT.

When we use the Internet, marketers are tracking our activities; the data generated by our online behavior is collected and then used to target ads that will be more relevant to us.

If we spend time on movie sites, we’re more likely to see ads promoting new film releases. If we visit food blogs, we’re going to be served ads selling cookware.

Creepy? It can be. But such tracking and targeting make it possible for many website operators to offer online content and services for free. Indeed, many believe that such tracking and targeting are essential to the vibrancy of our Internet ecosystem. (Although Google is reportedly experimenting with an offering where one would pay not to see ads while surfing the Web.)

In the past, serious limitations existed on the ability of marketers to track and target us. We might have given our name, email and home address to a website, but not much else; now, with social media, we routinely volunteer loads of personal information—our jobs, hobbies, special skills, taste in music and movies, even our “relationship status.” And not just information about ourselves, but our families, friends and colleagues as well. As a result, social media companies have compiled huge databases about us—in Facebook’s case, nearly 1.4 billion of us.

Also, not long ago, we surfed the Web from either home or office—limiting the ability to be tracked and targeted while away from those locations. The rise of Internet-connected mobile devices has changed all that, of course—now we can access the Web from anywhere, and mobile devices can pinpoint our location, even when we’re not browsing. Marketers can track our daily journey from home to work and back again, even serving us “just in time” discount offers as we pass a clothing store or restaurant.

From a marketer’s perspective, social media and mobile are all about expanding the amount and type of customer data that can be collected. Thanks to mobile devices and apps, tracking and targeting are no longer desk-bound and can occur even if a customer is not connected to the Internet.

Wearable tech? Like cell phones, wearables make tracking and targeting possible while one is away from a traditional computer or not actively using the Web. These devices can also collect information that cell phones can’t – our heart rate or body temperature, or the number of hours one slept last week.

For marketers, the Internet of Things is especially exciting because it raises the possibility of being able to track and target consumers anywhere in their homes, even while they are away from their desktop computers or mobile devices.

Imagine your “smart” refrigerator not only determining when you’re low on milk, but offering a 15 percent discount if you were to buy today a quart of milk at your local market. Or your Internet-connected washing machine recommending a new laundry detergent based on its monitoring of your laundry loads.

Another hot technology trend – commercial drones – is relevant here. Although unmanned aerial vehicles (UAVs) have generated attention for their ability to facilitate package delivery and provide Wi-Fi access, they can be used to collect data on consumers when they’re outdoors or near a window, even when they are without cell phones, wearables or other devices used to track their movement and activities.

Ingestibles—“smart” pills containing sensors that are swallowed, allowing the collection of data within one’s body—are a nascent technology that, as they become more widely used, may ultimately fit into this theory.

Storage of Data

With social media platforms, mobile, wearable and IoT devices and UAVs collecting information on an unprecedented scale, that data needs to be stored somewhere. Enter the cloud. All of these new technologies depend heavily on the massive storage capacity made possible by cloud systems; it wouldn’t be cost effective otherwise. (Case in point: A 2013 study revealed that 90% of all the data in the world had been collected over the prior two years.)

Exploitation of Data

Once all this data has been collected and stored in the cloud, what then?

That's where big data enters the picture. Big data provides companies with the analytic tools to sift through these inconceivably large databases and exploit the bits therein.

For example, that photo you uploaded to Instagram can now be analyzed for marketing opportunities. Perhaps you were holding a bag of potato chips; using big data analytics, the chip maker could target you in its next online ad campaign. Or maybe a competing snack company wants to entice you to switch brands. Why stop there? What about the shirt that you were wearing? And that pair of jeans? (I’ve written on the application of big data analytics to the billions of photos hosted on social media sites here.)

Similarly, information collected from wearables, when processed by big data tools, opens up new opportunities for marketers. Your pulse rate may be of interest to the health care industry. Your jogging workouts may attract attention from retailers of athletic shoes and clothing.

But the mother lode just might be the marketing insights generated by applying big data analytics to the many IoT devices in one's home: the thermostat, stove, refrigerator, coffee machine, toaster, washer/dryer, humidifier, alarm clock and so on. For the first time ever, marketers will have access to real-time information regarding once-private quotidian activities.

Legal Considerations

So that's my theory: The adoption of today's hottest information technologies is being driven in large part by the insatiable desire of businesses to collect and store ever-larger amounts of consumer data, and then to use that data to market to consumers more successfully. When these technologies are viewed in light of this theory, some key legal observations emerge.

First, because these technologies all involve the collection, storage and exploitation of consumer data, privacy and data security concerns are necessarily raised, and indeed they are the most important legal considerations. That's not meant to minimize the intellectual property, product liability and other legal concerns associated with these technologies; privacy and data security laws, however, are the ones specifically designed to regulate the collection, use and exploitation of consumer data.

Second, these technologies are being developed and implemented far faster than legislators, regulators and courts can develop rules to govern them. It will be essential for companies embracing these technologies to self-regulate; failure to do so will result in an inevitable backlash, leading to burdensome regulations that undermine innovation.

Third, these technologies will present real challenges to the majority of companies that want to "do the right thing" by their customers. For example, consumers ideally should be provided with notice and an opportunity to consent prior to the collection, storage and exploitation of their personal information, but how can this be done through, say, a smart electric toothbrush? These issues need to be addressed early in the development cycle for next-generation products; they can't be an afterthought. Moreover, are customers receiving real, tangible value in exchange for the data being collected from them?

Fourth, as our social media pages, devices and appliances become more closely tied together, and linked to massive troves of data about us in the cloud, businesses need to be aware that it takes only one weak link to put the entire ecosystem at risk. Hackers will no longer need to bypass your computer's or phone's security to capture personal data; they may be able to access your records through, say, an Internet-enabled toaster that lacks adequate security controls.

Finally, companies need to consider whether they really need all of the data that these technologies can collect. Ideally, they should seek to minimize the amount of personally identifiable information they hold in order to reduce privacy- and security-related legal risks and liability.

No doubt this last recommendation will be hard for many marketers to embrace; after all, data-gathering is in their DNA. And that same hard-wiring is in all of our DNA—the original source code for data collection, storage and exploitation. We wouldn't be human without it.

(This is an expanded, “director’s cut” version of an op-ed piece that originally appeared in MarketWatch.)

Five Social Media Law Issues To Discuss With Your Clients

Posted in Arbitration, Copyright, Employment Law, IP, Labor Law, Litigation, Online Promotions, Terms of Use

The explosive growth of social media has clients facing legal questions that didn't even exist a few short years ago. Helping your clients navigate this muddled legal landscape will have them clicking "like" in no time.

What’s in a Like?

Not long ago, the word “like” was primarily a verb (and an interjection used by “valley girls”). You could have likes and dislikes in the sense of preferences, but you couldn’t give someone a like, claim to own a like or assert legal rights in likes. Today, however, a company’s social media pages and profiles, and the associated likes, followers and connections, are often considered valuable business assets. Courts have come to various conclusions regarding whether likes and similar social media constructs constitute property, but one thing is clear: Every company that uses social media should have in place clear policies regarding employee social media use and ownership of business-related social media accounts.

Employees who manage a company’s social media accounts often insert themselves as the “voice” of the brand and establish a rapport with the company’s fans and followers. Without clear policies that address ownership of social media accounts, and clearly distinguish between the company’s accounts and employees’ personal accounts, your client may find itself in a dispute when these employees leave the company and try to take the company’s fans and followers with them.

Read a more detailed description of “likes” as assets here.

Dirty Laundry

It comes as no surprise that employees frequently use social media to complain about managers and coworkers, pay, work conditions and other aspects of their employment. Companies often would prefer not to air these issues publicly, so they establish policies and impose discipline when employees’ social media activity becomes problematic. Companies need to be careful, however, that their policies and disciplinary actions comply with applicable law.

A number of National Labor Relations Board decisions have examined whether employees' statements on social media constitute "concerted activity"—activity by two or more employees that provides mutual aid or protection regarding terms or conditions of employment—for purposes of the National Labor Relations Act (which, notably, applies regardless of whether the employees are unionized). Companies also need to comply with state statutes limiting employer access to employees' personal social media accounts. California Labor Code Section 980, for example, prohibits an employer from asking an employee or applicant to disclose personal social media usernames or passwords, to access personal social media in the presence of the employer, or to divulge personal social media.

Read more about the intersection of social media policies and labor law here and here.

Terms of (Ab)use

Companies often consider their social media pages and profiles to be even more important than their own websites for marketing and maintaining customer engagement. But a company's own website has one advantage over a third-party social media platform: The company sets the terms for use of its own website, while its presence on a social media platform is subject to terms of use imposed by the platform operator. And, in many cases, the terms imposed on users of social media platforms are onerous and make little distinction between individual users who use the platform just for recreation and corporate users who depend on the platform for their businesses.

Social media terms of use often grant platform operators broad licenses to content posted on the platform, impose one-sided indemnification obligations on users, and permit platform operators to terminate users' access with or without cause. You may have little luck negotiating modifications to such online contracts on your clients' behalf, but you can at least inform them of the terms that govern their use of social media so that they can weigh the costs and benefits.

Read more about social media platforms’ terms of use here, here, and here.

Same as It Ever Was

When it comes to using social media for advertising, the media may be new but the rules are the same as ever. Companies that advertise through social media—especially by leveraging user endorsements—need to comply with Section 5 of the FTC Act, which bars “unfair or deceptive acts or practices.” Bloggers and others who endorse products must actually use the product and must disclose any “material connections” they have with the product providers (for example, a tech blogger reviewing a mobile phone that she received for free from the manufacturer should disclose that fact). Because this information is likely to affect consumers’ assessment of an endorsement, failure to disclose may be deemed deceptive. So if you have a client that uses endorsements to promote its products, make sure to brush up on the FTC “Dot Com Disclosures” and other relevant FTC guidance.

Read more about endorsement disclosure obligations here.

Good Rep

As noted, a company's social media pages, followers and the like may constitute valuable business assets. But buyers in M&A transactions often neglect such assets when formulating the seller's reps and warranties. Buyers should consider asking the seller to disclose all social media accounts that the target company uses and to represent and warrant that none of the target's social media account names infringes any third-party trademark or other IP rights, that all use of the accounts complies with applicable terms of service, and that the target has implemented policies providing that the company (and not any employee) owns all business-related social media accounts and imposing appropriate guidelines for employee use of social media.

Read more about social media assets in M&A transactions here.

Finally, if you have clients that use social media, it's important to be familiar with the popular social media platforms and their (ever-changing) rules and features. Learning to spot these issues isn't going to turn you into the next Shakira—as of this writing, the most-liked person on Facebook, with well over 100 million likes—but your clients will surely appreciate your help as they traverse the social media maze.

This piece originally appeared in The Recorder.