May 2012

History is littered with examples of the law being slow to catch up with the use of technology.  Social media is no exception.  As our Socially Aware blog attests, countries around the world are having to think fast to apply legal norms to rapidly evolving communications technologies and practices.

Law enforcement authorities in the United Kingdom have not found the absence of a codified “social media law” to be a problem.  They have applied a “horses for courses” approach, bringing prosecutions or allowing claims under a range of different laws that were designed for other purposes.  Of course, this presents problems for users, developers and providers of social media platforms, who can by no means be certain which legal standards apply.

The use of Twitter and other forms of social media is ever increasing and the attraction is obvious—social media gives people a platform to share views and ideas. Online communities can bring like-minded people together to discuss their passions and interests; and, with an increasing number of celebrities harnessing social media for both personal and commercial purposes, Twitter often provides a peek into the lives of the rich and famous.

As an increasing number of Twitter-related cases hit the front pages and the UK courts, it is becoming clear that, in the United Kingdom at least, the authorities are working hard to re-purpose laws designed for other contexts in order to catch unwary and unlawful online posters.

It’s typically hard to argue that someone who maliciously trolls a Facebook page set up in the memory of a dead teenager or sends racist tweets should not be prosecuted for the hurt they cause.  But in other cases, it may not be so clear-cut—how does the law decide what is and what is not unlawful?  For example, would a tweet criticizing a religious belief be caught?  What about a tweet that criticizes someone’s weight or looks?  Where is the line drawn between our freedom of expression and the rights of others?  Aren’t people merely restating online what was previously (and still is) being discussed down the pub?

A range of UK laws is currently being used to regulate the content of tweets and other online messages.  At the moment, there is no particular consistency as to which laws will be used to regulate which messages.  It appears to depend on what evidence is available.  As a spokesman for the Crown Prosecution Service remarked, “Cases are prosecuted under different laws.  We review the evidence given to us and decide what is the most appropriate legislation to charge under.”

Communications Act 2003

In 2011, there were 2,000 prosecutions in the United Kingdom under section 127 of the Communications Act 2003. A recent string of high-profile cases has brought the Communications Act under the spotlight.

Under section 127(1)(a), a person is guilty of an offense if he sends “a message or other matter that is grossly offensive or of an indecent, obscene or menacing character” by means of a public electronic communications network.  The offense is punishable by up to six months’ imprisonment or a fine, or both.

So… what is “grossly offensive” or “indecent, obscene or menacing”?

In DPP v Collins [2006], proceedings were brought under section 127(1)(a) in relation to a number of offensive and racist phone calls made by Mr. Collins to the offices of his local Member of Parliament.  The House of Lords held that whether a message was grossly offensive was to be determined as a question of fact, applying the standards of an open and just multiracial society and taking into account the context of the words and all relevant circumstances.  The yardstick was the application of reasonably enlightened, but not perfectionist, contemporary standards to the particular message set in its particular context.  The test was whether the message was couched in terms liable to cause gross offense to those to whom it related.  The defendant had to have intended his words to be grossly offensive to those to whom they related, or to have been aware that they might be taken to be so.  The court made clear that an individual is entitled to express his views, and to do so strongly; the question, however, was whether he had used language that went beyond the pale of what was tolerable in society.  The court considered that at least some of the language used by the defendant could only have been chosen because it was highly abusive, insulting and pejorative.  The messages sent by the defendant were grossly offensive and would be found by a reasonable person to be so.

Proceedings are also being brought under section 127(1)(a) for racist messages.  In March 2012, Joshua Cryer, a student who sent racially abusive messages on Twitter to the ex-footballer Stan Collymore, was successfully prosecuted under section 127(1)(a), sentenced to two years’ community service and ordered to pay £150 in costs.  (Interestingly, however, Liam Stacey, who was sentenced to 56 days’ imprisonment for 26 racially offensive tweets relating to Bolton Wanderers footballer Fabrice Muamba, was charged with racially aggravated disorderly behavior with intent to cause harassment, alarm or distress under section 31 of the Crime and Disorder Act 1998, rather than under the Communications Act.)

Similarly, religious abuse is also being caught under the Act.  In April 2012, Amy Graham, a former police cadet, was charged under the Communications Act for abusive anti-Muslim messages posted on Twitter.  She awaits sentencing.

These cases may appear relatively clear-cut, but there have been some other high-profile cases where the grounds for prosecution appear more questionable.

In April 2012, John Kerlen was found guilty of sending tweets that the court determined were both grossly offensive and menacing, after posting a picture of a Bexley councilor’s house and asking: “Which c**t lives in a house like this. Answers on a postcard to #bexleycouncil”; followed by a second tweet saying: “It’s silly posting a picture of a house on Twitter without an address, that will come later. Please feel free to post actual s**t.”  He avoided a jail sentence; instead, he was sentenced to 80 hours of unpaid labor over 12 months, ordered to pay £620 in prosecution costs, and made subject to a five-year restraining order.  Were these messages really menacing or grossly offensive?  If he was going to be prosecuted, was the Communications Act the appropriate law, or should he have been prosecuted for incitement to cause criminal damage (if he was genuinely inciting others to post feces) or for harassment?

Even more controversial is the case that has become widely known as the “Twitter joke trial.”  Paul Chambers was prosecuted under section 127(1)(a) for sending the following tweet: “Crap! Robin Hood airport is closed. You’ve got a week and a bit to get your s**t together otherwise I’m blowing the airport sky high!!”  He appealed against his conviction to the Crown Court.  In dismissing the appeal, the judge said his tweet was “menacing in its content and obviously so.  It could not be more clear.  Any ordinary person reading this would see it in that way and be alarmed.”  This was despite the fact that Robin Hood Airport had classified the threat as non-credible on the basis that “there is no evidence at this stage to suggest that this is anything other than a foolish comment posted as a joke for only his close friends to see.”  The case has attracted a huge following among Twitter users, including high-profile users such as Stephen Fry and Al Murray.  Following a February 2012 appeal to the High Court, it was announced on May 28 that the High Court judges who heard the case were unable to reach agreement and that the appeal would therefore have to be re-heard by a three-judge panel.  Such a “split decision” is extremely unusual.  No date has yet been set for the new hearing.

Malicious Communications Act 1988

Cases are also being brought under section 1 of the Malicious Communications Act 1988.  Under this Act, it is an offense to send another person an electronic communication that conveys a grossly offensive message, where the message is sent with the purpose of causing distress or anxiety to the recipient.

In February 2012, Sunderland fan Peter Copeland received a four-month suspended sentence after posting racist comments on Twitter aimed at Newcastle United fans.  More recently, a 13th person was arrested by police investigating the alleged naming of a rape victim on social media sites after Sheffield United striker Ched Evans was jailed for raping a 19-year-old woman.  The individuals involved have been arrested for offenses under various laws, including the Malicious Communications Act.

What’s next?

So, what’s next for malicious communications?  Perhaps sexist remarks.

Earlier this month, Louise Mensch, a Member of Parliament, highlighted a variety of sexist comments that had been sent to her Twitter account.  In response, Stuart Hyde, Chief Constable of Cumbria Police and the national e-crime prevention lead for the Association of Chief Police Officers, called the comments made to Mensch “horrendous” and “sexist bigotry at its worst.”  He referred to the offenses available to the authorities:  “We are taking people to court. People do need to understand that while this is a social media it’s also a media with responsibilities and if you are going to act illegally using social media expect to face the full consequences of the law. Accepting that this is fairly new, even for policing … we do need to take action where necessary.”  Whether any of these comments will lead to charges remains to be seen.

In another example of online abuse, Alexa Chung, the TV presenter, recently received nasty comments criticizing her weight in response to some Instagram photos she had posted on Twitter.  She removed the photos in response, but is it possible that these kinds of messages could be considered grossly offensive and therefore unlawful?

We will have to wait and see what other cases are brought under the Communications Act and Malicious Communications Act and what balance is ultimately struck between freedom of expression and protecting individuals from receiving malicious messages.  However, it is not just criminal laws relating to communications that could apply to online behavior.  Recent events have also led to broader legislation such as the Contempt of Court Act and the Serious Crime Act being considered in connection with messages posted on Twitter and other social media services.

Contempt of Court Act 1981

If someone posts information online that is banned from publication by the UK courts, they could be found in contempt of court under the Contempt of Court Act 1981 and liable for an unlimited fine or a two-year prison sentence. However, as we saw in 2011, the viability of injunctions in the age of social media is questionable.  When the footballer, Ryan Giggs, requested that Twitter hand over details about Twitter users who had revealed his identity in breach of the terms of a “super-injunction,” hundreds of Twitter users simply responded by naming him again.  No users have, to date, been prosecuted for their breach of the injunction.

In another high-profile case, in February 2012, the footballer Joey Barton was investigated for contempt of court when he tweeted comments regarding the trial of footballer John Terry.  Under the Contempt of Court Act 1981, once someone has been arrested or charged, there should be no public comments about them that could risk seriously prejudicing the trial.  In that case, it was found that Barton’s comments would not compromise the trial, and he was therefore not prosecuted.

Serious Crime Act 2007

Last summer’s riots in England led to Jordan Blackshaw and Perry Sutcliffe-Keenan being found guilty under sections 44 and 46 of the Serious Crime Act 2007 of encouraging others to riot.  Blackshaw had created a Facebook event entitled “Smash d[o]wn in Northwich Town” and Sutcliffe-Keenan had invited people to “riot” in Warrington.  Both men were sentenced to four years’ imprisonment.

Defamation Act 1996

Of course, posting controversial messages online is not just a criminal issue.  Messages can also attract civil claims for defamation, under the Defamation Act 1996.

In March 2012, in the first UK ruling of its kind, former New Zealand cricket captain Chris Cairns won a defamation claim against Lalit Modi, the former Indian Premier League (IPL) chairman, for defamatory tweets.  Mr. Modi had tweeted that Mr. Cairns had been removed from the IPL list of players eligible and available to play in the IPL “due to his past record of match fixing.”  Mr. Cairns was awarded damages of £90,000 (approximately £3,750 per word tweeted).

Conclusion

As in other countries, a whole host of UK laws that were designed in an age before social media—even, in some cases, long before the Internet as we know it—are now being used to regulate digital speech.  Digital speech, by its very nature, leaves a permanent, easily searchable record, making the job of the police and prosecutors much easier.

Accordingly, these types of cases are only going to increase, and it will be interesting to see where the UK courts decide to draw the line between freedom of expression and the rights of others.  One would hope that a sense of proportionality and common sense will prevail, so that freedom of expression protects ill-judged comments made in the heat of the moment and “close to the knuckle” jokes, while the victims of abusive and threatening trolls are rightly protected.  In the meantime, users need to be very careful when tweeting and posting messages online, particularly in terms of the language they use.  Tone can be extremely difficult to convey in 140 characters or fewer.

One has to feel sorry for the UK holidaymakers who were barred in January 2012 from entering the United States for tweeting that they were going to “destroy America” (despite making clear to the U.S. airport officials who detained them that “destroy” was simply British slang for “party”).  No doubt they will think twice before clicking that Tweet button in the future.

In our recent Socially Aware blog post, we noted that a number of pending state bills are seeking to ban employers from requesting confidential login information, including social media login information, as a condition of employment.  In fact, on April 9, 2012, Maryland passed Senate Bill 433/HB 964, prohibiting employers from requesting current and prospective employees’ passwords to online personal accounts, such as Facebook, Twitter, LinkedIn, and personal email accounts.  The new statute, which goes into effect on October 1, 2012, applies to “employers” – broadly defined as any person engaged in a business, industry, profession, trade, or other enterprise in Maryland, as well as units of Maryland state and local government – and their respective representatives and designees.  Even employers based outside Maryland will need to comply with the statute if they have employees located in Maryland.

When it goes into effect, Maryland’s new law will prohibit covered employers from:

  • Requesting or requiring an employee or applicant to disclose his or her user name, password, or any other means of accessing a personal account or service through computers, telephones, PDAs, and similar devices;
  • Taking disciplinary action against employees for their refusal to disclose certain password and related information; and
  • Threatening to take disciplinary action against employees for their refusal to disclose such information.

However, employers are not entirely prohibited from accessing employees’ personal accounts.  Under certain circumstances, Maryland’s new law will allow employers to access employees’ personal accounts in order to investigate the following (in each case, only if the employer has received information regarding such conduct): 

  • Whether an employee is complying with securities or financial laws or regulatory requirements, if the employee is using a personal website, Internet website, web-based account or similar account for business purposes; or
  • An employee’s actions regarding his or her downloading of the employer’s proprietary information or financial data to a personal website, Internet website or web-based account. 

The Maryland bill gained support after a resident of the state, Robert Collins, made headlines when he was asked to disclose his Facebook password to be recertified as a correctional officer with the Maryland Department of Public Safety and Correctional Services.  Reportedly, the department had a practice of reviewing applicants’ social media profiles to ensure that they were not engaged in any illegal activities.  Believing he had no other option, Collins disclosed his Facebook username and password to his interviewer for the correctional officer position.

The Maryland statute is an illustration of the growing opposition to requiring current or potential employees to disclose their personal account passwords.  Similar incidents of employers requesting access to their current and prospective employees’ accounts have surfaced around the United States, with some employers taking disciplinary action over employees’ refusal to disclose password information.  As a result, Facebook and privacy advocates have publicly opposed the growing practice of employers requesting access to employees’ social media profiles.  U.S. Senators Richard Blumenthal and Charles Schumer have also sent letters to both the U.S. Department of Justice, asking it to investigate whether this practice violates the Stored Communications Act or the Computer Fraud and Abuse Act, and the U.S. Equal Employment Opportunity Commission, asking that agency to opine on whether the practice violates existing anti-discrimination laws.

In parallel with this widespread opposition, other states may be following Maryland’s lead by enacting legislation that clarifies employees’ expectation of privacy in their online profiles.  As noted previously, bills similar to the Maryland statute have been introduced in California, Illinois, Massachusetts, Michigan, Minnesota, New Jersey, and Washington.  Moreover, federal law may soon prohibit employers from requesting employees’ social media passwords.  On April 27, 2012, Congressman Eliot Engel (D-NY) proposed H.R. 5050, the Social Networking Online Protection Act, or “SNOPA,” in the United States House of Representatives.  If passed, this bill would impose a nationwide ban on the practice of employers requiring or requesting access to their employees’ online personal accounts.  Like the new Maryland law, H.R. 5050 broadly defines employers who are covered under the law – for purposes of the bill, “employer” includes “any person acting directly or indirectly in the interest of an employer in relation to an employee or an applicant for employment.”  The House bill, which prohibits institutions of higher learning and local educational agencies from requesting the passwords of students or prospective students, is even broader in scope than the new Maryland law.

Shortly after H.R. 5050 was introduced, on May 9, 2012, Senators Richard Blumenthal (D-CT), Chuck Schumer (D-NY), Ron Wyden (D-OR), Jeanne Shaheen (D-NH), and Amy Klobuchar (D-MN) introduced the Password Protection Act of 2012 (S. 3074) (“PPA”) in the Senate, and Congressmen Heinrich (D-NM) and Perlmutter (D-CO) introduced a parallel bill in the House.  The PPA would amend the Computer Fraud and Abuse Act to prohibit employers from requiring or requesting access to employees’ online personal accounts or password-protected computers, provided that such computers are not the employer’s computers.  The PPA would also prohibit employers from taking adverse actions against employees for refusing to disclose such passwords; under the PPA, employees would be eligible to receive compensatory damages and injunctive relief if their employers were found to have violated the Act.

Even in the absence of statutory authority prohibiting employers from requesting access to current and future employees’ social media profiles, employers should exercise caution when seeking access to employees’ or prospective employees’ social media accounts.  For example, although a job applicant’s social media profile may be publicly available, when viewing the applicant’s profile, a potential employer may learn information that would otherwise remain undisclosed in the application process, such as an applicant’s membership in a protected class (we noted this issue with respect to current employees’ social media profiles back in November 2011). Employers can minimize their exposure to claims of discriminatory hiring practices by refraining from viewing applicants’ online profiles during the application process.  For this and other reasons, employers – whether in Maryland or elsewhere – are urged to carefully consider potential legal risks when instituting policies related to accessing their current and prospective employees’ online personal accounts.

A recent district court decision highlights the growing prevalence of issues relating to new media technologies arising in the courtroom.  In Bland v. Roberts, the U.S. District Court for the Eastern District of Virginia held that merely “liking” a Facebook page is insufficient speech to merit constitutional protection.

Five former employees of the Hampton Sheriff’s Office brought a lawsuit against Sheriff B.J. Roberts, in his individual and official capacities, alleging that he violated their First Amendment rights to freedom of speech and freedom of association when he fired them, allegedly for having supported an opposing candidate, Jim Adams, in the local election against Roberts for Sheriff.  In particular, two of the plaintiffs had “liked” Jim Adams’s page on Facebook.  When Sheriff Roberts was reelected, he terminated the plaintiffs as employees, but did not cite the Facebook likes or other support of Jim Adams as reasons for their departures.

The two plaintiffs alleged that they engaged in constitutionally protected speech when they liked the Jim Adams Facebook page.  In April 2012, however, the court granted Roberts’s motion for summary judgment, ruling that a Facebook like does not meet the standard for constitutionally protected speech.  (The freedom of association claims were dismissed on the grounds of qualified immunity and Eleventh Amendment immunity.)

The court looked to cases involving speech on social media websites, noting that precedent had developed around cases where the speech at issue involved actual statements (e.g., Mattingly v. Milligan and Gresham v. City of Atlanta). The court held that this case was distinguishable because liking involved no actual words, and constitutionally protected speech could not be inferred from “one click of a button.”  In summary, the court wrote that liking a Facebook page is “not the kind of substantive statement that had previously warranted constitutional protection.”

Because it ruled that liking a Facebook page cannot be considered constitutionally protected speech, the court did not proceed to analyze whether the plaintiffs’ First Amendment rights had been violated. The court based its decision on the fact that the plaintiffs made no actual statements, suggesting that had there been a declarative statement—such as a wall post—the court’s decision might have been different.  (One of the plaintiffs alleged that he had written a wall post with an expressed opinion, but deleted the post before it could be documented).  Critics point out that the court’s ruling that protected speech requires an actual statement is inconsistent with prior First Amendment case law, which identifies various forms of protected speech (e.g., armbands in Tinker v. Des Moines Independent Community School District; flag burning in Texas v. Johnson). This point is a key issue ripe for appeal.

Internet law experts argue that the court failed to consider the technology behind liking a page on Facebook.  For example, Professor Eric Goldman, a prominent legal scholar and blogger, has observed that liking is more than a passive signal of virtual approval and that the like functionality has various effects on Facebook’s algorithm, including increased publicity for the liked page.  Although it is unclear whether these underlying changes are sufficient to tip the protected-speech scale, Goldman and others argue that they should at least be weighed in the court’s decision.

The question for the social media community moving forward is whether other courts will agree that liking should not amount to constitutionally protected speech.  Regardless of the outcome, the case provides a good lesson:  what you say on a social media network can be used against you.

Pinterest is 2012’s “most talked-about” social media platform and one of the fastest-growing standalone websites in history.  By tapping into the enthusiasm for gathering and presenting images that have been pulled from across the web, Pinterest has created a powerful content sharing platform – and has provoked strong objections from copyright owners.  Companies that are considering whether to promote their brands and products using Pinterest should understand the best practices for analyzing and mitigating these legal risks.

Please join John F. Delaney of Morrison & Foerster LLP for a one-hour audio briefing hosted by Practising Law Institute (PLI). Participants of this program will learn:

  • Pinterest’s history and explosive growth
  • How corporations are using Pinterest to interact with customers
  • How the Pinterest platform works
  • Copyright law concerns raised by use of the Pinterest site
  • Key online contract law considerations for Pinterest users
  • Risk reduction strategies for your company or clients

For more information or to register, please visit PLI’s website here.

In past Socially Aware posts, we have discussed using subpoenas in civil litigation to obtain evidence from social media sites, including whether individuals have a privacy interest in this information and how the Stored Communications Act may limit the use of subpoenas in civil cases.  Until now, we have not discussed these issues in the context of a criminal case.  Does the prosecutor have to get a search warrant to obtain information about someone’s social media use?  Does the Stored Communications Act limit the government’s authority in this area?  A decision from the Criminal Court of the City of New York arising out of the Occupy Wall Street movement, People of the State of New York v. Malcolm Harris, sheds some light on these questions.

On October 1, 2011, protesters marched on the Brooklyn Bridge as part of an Occupy Wall Street demonstration.  Malcolm Harris, along with hundreds of other protesters, was charged with disorderly conduct for allegedly occupying the roadway of the Brooklyn Bridge.  The District Attorney expected Harris to claim as a defense that he stepped onto the roadway because the police led him there.  The District Attorney, however, asserted that Harris, while on the Bridge, may have tweeted statements inconsistent with his anticipated defense.

The District Attorney served a third-party subpoena on Twitter, seeking user information and tweets associated with the account @destructuremal, allegedly used by Harris.  Harris notified Twitter that he would move to quash the subpoena, and Twitter took the position that it would not comply absent a ruling by the Court.  The District Attorney opposed the motion.

The Court found that Harris lacked standing to quash the third-party subpoena because he had neither a proprietary interest nor a privacy interest in the user information and tweets associated with the account.  The Court therefore denied Harris’s motion to quash and ordered Twitter to comply with the subpoena.

No Proprietary Interest in Tweets

First off, according to the Court, Harris’s tweets were not his tweets.  When registering a Twitter account, the user must agree to Twitter’s Terms of Service, which includes a grant to Twitter of a “worldwide, non-exclusive, royalty-free license to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute” user content posted to Twitter.  The Court found that Twitter’s license to use Harris’s tweets meant that the tweets posted by Harris “were not his.”  In the Court’s view, Harris’s “inability to preclude Twitter’s use of his [t]weets demonstrates a lack of proprietary interest in his [t]weets.” 

No Privacy Interest in Tweets 

The Court went on to reject Harris’s contention that he had a privacy interest in his tweets.  Twitter’s Terms of Service also state that submitted content “will be able to be viewed by other users of the Service and through third party services and websites,” and Twitter’s Privacy Policy states that Twitter’s service is “primarily designed to help you share information with the world.”  Twitter makes no assurances of privacy.  Rather, Twitter notifies its users that their tweets (at least on default settings) will be available for the world to see.  Thus, the Court found that tweets are “by definition public.”

No Search Warrant Required

The Court further held that Harris’s Fourth Amendment rights were not at issue, because the internet is not a physical “home.”  While service providers may refer to a user’s space on the site as a “virtual home,” the Court took the position that this “home” is no more than “a block of ones and zeros stored somewhere on someone’s computer.”  Thus, while Twitter users may think that the Fourth Amendment protections that apply in their physical homes will also apply to their Twitter accounts, “in reality, the user is sending information to the third party, Twitter.”

No Stored Communications Act Protection

Finally, the Court held that, unlike in a civil case, the Stored Communications Act permits the government in a criminal case to subpoena subscriber and session information directly from the social media site.  Unlike private litigants in civil litigation, prosecutors may obtain such information using any federal or state grand jury, trial or administrative subpoena by showing “specific and articulable facts showing that there are reasonable grounds to believe” that the tweets “are relevant and material to an ongoing criminal investigation.”  The Court found that the District Attorney clearly made this showing in this case.

In short, the Court has made it clear that users of social media who also find themselves charged with a criminal offense should have no expectation that potentially relevant information will be considered private or beyond the reach of a subpoena.

Reaction to Decision

The Court’s decision has been criticized by tech blogs and the American Civil Liberties Union, and, on May 7, 2012, Twitter filed a motion to quash the Court’s order, arguing that, among other errors in the Court’s decision, under Twitter’s Terms of Service Harris in fact retained his rights to any content that he submitted, posted or displayed on or through the Twitter service.  We will keep an eye on further developments in this case.

According to press reports, a growing number of employers require job applicants to disclose their login information for Facebook or other social media accounts as a condition of employment.  While this practice may very well fall on the wrong side of the law, lawyers and lawmakers are still establishing the framework for legal analysis and spreading the word about the legal risks of demanding this type of private information from current employees and job applicants alike.

State and federal legislatures have been quick to respond to recent publicity concerning this controversial employment practice.  A number of state bills are in or emerging from the pipeline (including bills in California, Illinois, Maryland, Massachusetts, Michigan, Minnesota, and New Jersey), all seeking to ban employers from requesting confidential login information as a condition of employment.  At the federal level, Democratic Congressman Ed Perlmutter proposed amending the Federal Communications Commission Reform Act to specifically empower the Federal Communications Commission (FCC) to stop employers nationwide from asking current and prospective employees for access to this type of private information.  While this proposed federal amendment was rejected in the U.S. House of Representatives, the state bills appear to be gaining momentum, with Maryland being the first state to pass a bill into law preventing employers from requesting login information to social media accounts. 

Even without specific laws in place prohibiting it, employers nationwide could find themselves liable under a variety of different legal theories for requiring access to private login information.  Among these potential theories of liability are common law invasion of privacy, violation of state constitutional rights to privacy, interference with employee rights to engage in protected, concerted activity under the National Labor Relations Act, discrimination on the basis of protected characteristics which an employer may learn about through accessing employee social media, and, for public employees, violation of the Fourth Amendment right to be free from unreasonable searches and seizures.  In addition to considering the risk of liability under these and other legal theories, employers should also consider the statement issued by Facebook’s Chief Privacy Officer, Erin Egan, who warned that sharing or soliciting Facebook passwords is a violation of the company’s Statement of Rights and Responsibilities and that Facebook will “take action to protect the privacy and security of [its] users, whether by engaging policymakers or, where appropriate, by initiating legal action, including by shutting down applications that abuse their privileges.”

Beyond the legal concerns raised by requesting private login information from current and prospective employees, and as this issue continues to get attention in the press, employers should consider the practical implications, such as the effect on company reputation, before engaging in this type of employment practice.