December 2012

The Federal Trade Commission (FTC) has cracked down on a company that was engaged in “history sniffing,” a means of online tracking that digs up information embedded in web browsers to reveal the websites that users have visited. In a proposed settlement with Epic Marketplace, Inc. and Epic Media Group (together, “EMG”) announced on December 5, 2012, the FTC settled charges that EMG had improperly used history sniffing to collect sensitive information regarding unsuspecting consumers. 

EMG functions as an intermediary between publishers—i.e., websites that publish ads—and the advertisers who want to place their ads on those websites. It does this through online behavioral advertising, which typically entails placing cookies in a consumer’s browser as he or she visits websites in order to collect information about his or her use of those websites, and then using that information to serve targeted ads to the user when he or she visits other websites within the EMG Marketplace Network.

What got EMG into trouble was that it also used history sniffing to collect information regarding the websites that users visited. Here’s how the technique works. In your web browser, hyperlinks to websites change color once you have visited them. After you have visited a webpage, the hyperlink to it will most likely appear in one color (e.g., purple). If you haven’t been to a particular webpage before, any link to it will probably show up in another color (e.g., blue). History sniffing code exploits this feature to go through your browser—that is, to “sniff” around—to see what color your hyperlinks are. When the code finds purple links, it knows that you’ve been to those websites.
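The classic implementation of this trick used a script to insert links into the page and read back their computed colors. Below is a minimal TypeScript sketch of the general technique; the domain list, color values, and function names are illustrative assumptions, not details from the FTC complaint, and modern browsers deliberately report the unvisited style for visited links, so this no longer works:

```typescript
// Hypothetical sketch of pre-2011 history sniffing. Browsers historically
// styled visited links purple and unvisited links blue by default; the exact
// color values below are illustrative assumptions.
const VISITED_PURPLE = "rgb(128, 0, 128)";

// Pure helper: classify a link's computed color as visited or not.
function looksVisited(computedColor: string): boolean {
  return computedColor === VISITED_PURPLE;
}

// Browser-only probe: insert a hidden link for each domain of interest and
// read back its computed color. Accessed via globalThis so this file also
// compiles outside a browser; calling it requires a real DOM.
function sniffHistory(domains: string[]): string[] {
  const doc = (globalThis as any).document;
  const win = (globalThis as any).window;
  const visited: string[] = [];
  for (const url of domains) {
    const link = doc.createElement("a");
    link.href = url;
    link.style.display = "none";
    doc.body.appendChild(link);
    if (looksVisited(win.getComputedStyle(link).color)) {
      visited.push(url); // purple link => the user has been there
    }
    link.remove();
  }
  return visited;
}
```

Modern browsers defeat this by always returning the unvisited color from `getComputedStyle` for `:visited` links, which is the browser-side mitigation mentioned later in this article.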

According to the FTC, for almost 18 months—from March 2010 until August 2011—EMG included history sniffing code in ads it served to website visitors on at least 24,000 webpages within its network, including webpages associated with name brand websites. EMG used the code to determine whether consumers had visited more than 54,000 different domains, including websites “relating to fertility issues, impotence, menopause, incontinence, disability insurance, credit repair, debt relief, and personal bankruptcy.” EMG used this sensitive information to sort consumers into “interest segments” that, in turn, included sensitive categories like “Incontinence,” “Arthritis,” “Memory Improvement,” and “Pregnancy-Fertility Getting Pregnant.” EMG then used the sensitive interest segments to deliver targeted ads to consumers.

History sniffing is not per se illegal under U.S. law; EMG’s problem was that it allegedly misrepresented how it tracked consumers. First, EMG’s privacy policy at the time stated that the company only collected information about visits to websites within the EMG network; however, the FTC alleged that the history sniffing code enabled EMG to “determine whether consumers had visited webpages that were outside the [EMG] Marketplace Network, information it would not otherwise have been able to obtain.” EMG’s tracking of users in a manner inconsistent with its privacy policy was therefore allegedly deceptive, in violation of Section 5 of the FTC Act.

Second, EMG’s privacy policy did not disclose that the company was engaged in history sniffing; it disclosed only that it “receives and records anonymous information that your browser sends whenever you visit a website which is part of the [EMG] Marketplace Network.” According to the FTC, the fact that the company engaged in history sniffing would have been material to consumers in deciding whether to use EMG’s opt-out mechanism. EMG’s failure to disclose the practice was therefore also allegedly deceptive in violation of Section 5 of the FTC Act.

The proposed consent order would, among other things, require EMG to destroy all the information that it collected using history sniffing, bar it from collecting any data through history sniffing, prohibit it from using or disclosing any information that was collected through history sniffing, and bar misrepresentations regarding how the company collects and uses data from consumers or about its use of history sniffing code.

EMG stopped its history sniffing in August 2011, and most new versions of web browsers have technology that blocks this practice. Nonetheless, the FTC made it clear in the complaint that it wanted to highlight the problem because history sniffing “circumvents the most common and widely known method consumers use to prevent online tracking: deleting cookies.” Mark Eichorn, assistant director of the FTC’s Division of Privacy and Identity Protection, told the Los Angeles Times that the FTC “really wanted to make a statement with this case.” He added, “People, I think, really didn’t know that this was going on and didn’t have any reason to know.” The proposed consent order puts online tracking and advertising companies on notice: If you collect data in a manner inconsistent with—or not disclosed in—your privacy policy, you run the risk of a charge of deception.

In a string of cases against Google, approximately 20 separate plaintiffs have claimed that, through advertisements on its AdWords service, Google engaged in trademark infringement. These claims have been based on Google allowing its advertisers to use their competitors’ trademarks in Google-generated online advertisements. In a recent decision emerging from these cases, CYBERsitter v. Google, the U.S. District Court for the Central District of California found that Section 230 of the Communications Decency Act (CDA) provides protection for Google against some of the plaintiff’s state law claims.

As we have discussed previously (see here and here), Section 230 states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The Section 230 safe harbor immunizes websites from liability for content created by users, as long as the website did not “materially contribute” to the development or creation of the content. An important limitation on this safe harbor, however, is that it shall not “be construed to limit or expand any law pertaining to intellectual property.”

In the CYBERsitter case, plaintiff CYBERsitter, which sells an Internet content-filtering program, sued Google for selling and displaying advertisements incorporating the CYBERsitter trademark to ContentWatch, one of CYBERsitter’s competitors. CYBERsitter’s complaint alleged that Google had violated numerous federal and California laws by, first, selling the right to use CYBERsitter’s trademark to ContentWatch and, second, permitting and encouraging ContentWatch to use the CYBERsitter mark in Google’s AdWords advertising. Specifically, CYBERsitter’s complaint included the following claims: trademark infringement, contributory trademark infringement, false advertising, unfair competition and unjust enrichment.

Google filed a motion to dismiss, arguing that Section 230 of the CDA shielded it from liability for CYBERsitter’s state law claims. The court agreed with Google as to the state law claims of trademark infringement, contributory trademark infringement, unfair competition and unjust enrichment, but only to the extent that these claims sought to hold Google liable for the infringing content of the advertisements. The court, however, did not discuss the apparent inapplicability of the Section 230 safe harbor to trademark claims. As noted above, Section 230 does not apply to intellectual property claims, and trademarks are a form of intellectual property; yet the court applied Section 230 without comment. This is because the Ninth Circuit has held that the term “intellectual property” in Section 230 of the CDA refers to federal intellectual property law, so state intellectual property law claims are not excluded from the safe harbor. The Ninth Circuit, however, appears to be an outlier in this interpretation; decisions from other circuit courts suggest disagreement with the Ninth Circuit’s approach, and district courts outside the Ninth Circuit have not followed the Ninth Circuit’s lead.

Google was not let off the hook entirely with regard to the plaintiff’s state trademark law claims. In dismissing the trademark infringement and contributory trademark infringement claims, the court distinguished between Google’s liability for the content of the advertisements and its liability for its potentially tortious conduct unrelated to the content of the advertisements. The court refused to dismiss these claims to the extent they sought to hold Google liable for selling to third parties the right to use CYBERsitter’s trademark, and for encouraging and facilitating third parties to use CYBERsitter’s trademark, without CYBERsitter’s authorization. Because such action by Google has nothing to do with the online content of the advertisements, the court held that Section 230 is inapplicable.

The court also found that CYBERsitter’s false advertising claim was not barred by Section 230 because Google may have “materially contributed” to the content of the advertisements and, therefore, under Section 230 would have been an “information content provider” and not immune from liability. Prof. Eric Goldman, who blogs frequently on CDA-related matters, has pointed out an apparent inconsistency in the CYBERsitter court’s reasoning, noting that Google did not materially contribute to the content of the advertisements for the purposes of the trademark infringement, contributory infringement, unfair competition and unjust enrichment claims, but that Google might have done so for the purposes of the false advertising claim.

CYBERsitter highlights at least two key points for website operators, bloggers, and other providers of interactive computer services. First, at least in the Ninth Circuit, but not necessarily in other circuits, the Section 230 safe harbor provides protection from state intellectual property law claims with regard to user-generated content. Second, to be protected under the Section 230 safe harbor, the service provider must not have created the content and it must not have materially contributed to such content’s creation.

Waves of class actions have recently alleged that the delivery of an opt-out confirmation text message violates the Telephone Consumer Protection Act (TCPA). Thus, a Federal Communications Commission (“Commission”) Declaratory Ruling finding that a single opt-out confirmation text does not violate the TCPA comes at a crucial time. The Commission’s decision, issued on November 29, 2012, is a welcome relief to companies facing these cases.

The TCPA generally permits the delivery of text messages to consumers who have given prior express consent to receive them. Numerous plaintiffs have taken the position that an opt-out confirmation message violates the TCPA because it is delivered after consent has been revoked. In its ruling, however, the Commission found that a consumer’s prior express consent to receive a text message can be reasonably construed to include consent to receive a final, one-time message confirming that the consumer has revoked such consent. Specifically, delivery of an opt-out confirmation text message does not violate the TCPA provided that it: 1) merely confirms the consumer’s opt-out request and does not include any marketing or promotional information; and 2) is the only message sent to the consumer after receipt of his or her opt-out request. In addition, the Commission explained that if the opt-out confirmation text is sent within five minutes of receipt of the opt-out, it will be presumed to fall within the consumer’s prior express consent. If it takes longer, however, “the sender will have to make a showing that such delay was reasonable and the longer this delay, the more difficult it will be to demonstrate that such messages fall within the original prior consent.”

The Commission’s ruling brings the TCPA into harmony with widely followed self-regulatory guidelines issued by the Mobile Marketing Association, which affirmatively recommend that a confirmation text be sent to the subscriber after receiving an opt-out request. The ruling also comes on the heels of, and is consistent with, at least two recent decisions in putative class action cases filed in the Southern District of California. In Ryabyshchuck v. Citibank (South Dakota) N.A., the court held that Citibank did not violate the TCPA by sending a text message confirming that it had received the customer’s opt-out request. The court went as far as to say that “common sense renders the [opt-out] text inactionable under the TCPA.” The court reasoned that the TCPA was intended to shield consumers from the proliferation of intrusive, nuisance communications, and “[s]uch simple, confirmatory responses to plaintiff-initiated contact can hardly be termed an invasion of privacy under the TCPA.” Likewise, in Ibey v. Taco Bell Corp., the court dismissed a lawsuit alleging that Taco Bell had violated the TCPA by sending an opt-out confirmation message. Noting that the TCPA was enacted to prevent unsolicited and mass communications, the court held, “[to] impose liability … for a single, confirmatory text message would contravene public policy and the spirit of the statute—prevention of unsolicited telemarketing in a bulk format.”

The Commission’s ruling should bring an end to the rash of class actions brought in recent months challenging the legality of confirmatory opt-out messages.

The Superior Court of the State of California has entered a temporary restraining order requiring Twitter to continue to provide PeopleBrowsr with access to the Firehose, Twitter’s complete stream of all public tweets. Through the Firehose, Twitter provides third-party access to over 400 million daily tweets.

PeopleBrowsr is a San Francisco-based social media analytics firm that provides custom applications to clients ranging from private businesses, consumers and publishers to government agencies. PeopleBrowsr’s data mining and analytics platforms support various products and services, such as data streams, social media command centers and consumer targeting programs.  For example, PeopleBrowsr’s product Kred provides a real-time measure of social influence within social media user networks.

PeopleBrowsr’s business depends on its continued access to user-generated social media content from Twitter. Twitter’s recent decision to restrict PeopleBrowsr’s access to the Firehose led PeopleBrowsr to sue Twitter in California state court in order to protect its current business model.

PeopleBrowsr and Twitter entered into a license agreement in June 2010, under which PeopleBrowsr received access to the Firehose in exchange for over $1 million a year. Twitter recently invoked a contractual provision allowing it to terminate the agreement without cause. PeopleBrowsr filed a complaint for interference with contractual relations, claiming that its products and services require access to the Twitter Firehose in order to provide clients with contextual data analysis. In response, Twitter asserted that it had decided not to renew most of its direct-to-user Firehose contracts, opting instead to resell Twitter data in various forms through intermediaries. Without full access to the Firehose, PeopleBrowsr claims, it cannot provide the products that its customers expect: it needs every tweet in the Firehose to detect and analyze emerging trends fully and quickly, and to conduct the scoring and ranking of individual influence that underpins its analysis.

As this case moves forward, it promises to provide an in-depth look at the Twitter ecosystem and guidance for companies whose business models depend on access to data from social media companies such as Twitter. Stay tuned for further developments.

Update: On April 25, 2013, Twitter and PeopleBrowsr reached a settlement under which PeopleBrowsr will continue to purchase Twitter’s Firehose data directly through the end of 2013. At that point, PeopleBrowsr will have to purchase Firehose access through one of Twitter’s authorized data resellers, namely Gnip, DataSift, or Topsy. Financial terms of the settlement were not disclosed.

When an employee uses a social media account to promote his or her company, who keeps that account when the employee leaves? Perhaps more importantly, who keeps the friends, followers and connections associated with that account? Three lawsuits highlight the challenges an employer may face in seeking to gain control of work-related social media accounts maintained by current or former employees.

We start with Eagle v. Edcomm, a federal case out of Pennsylvania involving a dispute over an ex-employee’s LinkedIn account and related connections. The plaintiff, Dr. Linda Eagle, was a co-founder of the defendant company, Edcomm. She established a LinkedIn account while at Edcomm, using the account to promote the company and to build her network. Edcomm personnel had access to her LinkedIn password and helped to maintain the account. Following termination of her employment, Edcomm allegedly changed Dr. Eagle’s LinkedIn password and her account profile; the new profile displayed the new interim CEO’s name and photograph instead of Dr. Eagle’s. (Apparently, “individuals searching for Dr. Eagle were routed to a LinkedIn page featuring [the new CEO]’s name and photograph, but Dr. Eagle’s honors and awards, recommendations, and connections.”) Both parties raced to the courthouse, filing lawsuits against each other over the LinkedIn account and other disputes. Although a final ruling on all the issues has not yet been made, the court has issued two decisions.

In the earlier of the two decisions, the court granted Dr. Eagle’s motion to dismiss Edcomm’s trade secret misappropriation claim, concluding that the LinkedIn connections were not a trade secret because they are “either generally known in the wider business community or capable of being easily derived from public information.”

The most recent decision, however, was largely a win for Edcomm. The court granted Edcomm’s motion for summary judgment on Dr. Eagle’s Computer Fraud and Abuse Act (CFAA) and Lanham Act claims. Regarding her CFAA claims, the court concluded that the damages Dr. Eagle claimed she had suffered—related to harm to reputation, goodwill and business opportunities—were insufficient to satisfy the “loss” element of a CFAA claim, which requires some relation to “the impairment or damage to a computer or computer system.” In rejecting Dr. Eagle’s claim that Edcomm violated the Lanham Act by posting the new CEO’s name and picture on Dr. Eagle’s LinkedIn account, the court found that Dr. Eagle could not demonstrate Edcomm’s actions caused a “likelihood of confusion,” as required by the Act.

In a federal case out of Illinois, Maremont v. Susan Fredman Design Group LTD, the employee, Jill Maremont, was seriously injured in a car accident and had to spend several months rehabilitating away from work. While Ms. Maremont was recovering, her employer—Susan Fredman Design Group—posted and tweeted promotional messages on Ms. Maremont’s private Facebook and Twitter accounts, where she had developed a large following as a well-known interior designer. The posts and tweets continued after Ms. Maremont had asked her employer to stop, so Ms. Maremont changed her passwords. Following the password changes, Ms. Maremont alleged that her employer started treating her poorly in order to force her to resign. Ms. Maremont then brought claims under the Lanham Act, Illinois’ Right of Publicity Act, and the common law right to privacy. Although the case is still pending, the court issued a decision refusing to dismiss Ms. Maremont’s Lanham Act and Right of Publicity Act claims. The court, however, dismissed her common law right to privacy claims, holding that she had failed to demonstrate that her employer’s “intrusion into her personal ‘digital life’ is actionable under the common law theory of unreasonable intrusion upon the seclusion of another,” and that she failed to allege a false light claim because she did not allege that her employer “acted with actual malice.”

A recently settled California case, PhoneDog LLC v. Noah Kravitz, which we have written about previously, involved a similar dispute over a former employee’s Twitter account. Unlike the LinkedIn account at issue in the Edcomm case, the Twitter account in PhoneDog was apparently created by the employer, not the employee; the Twitter “handle” identifying the account, however, included both the employer’s name and the employee’s name: @PhoneDog_Noah. According to PhoneDog’s complaint, the account attracted approximately 17,000 Twitter followers. Mr. Kravitz—who after leaving PhoneDog eventually began working for one of PhoneDog’s competitors—kept the Twitter account but removed PhoneDog’s name, changing the handle to @noahkravitz. PhoneDog sued Mr. Kravitz, alleging that Mr. Kravitz wrongfully used the Twitter account to compete unfairly against PhoneDog. Like Edcomm, PhoneDog alleged misappropriation of trade secrets, although PhoneDog appears to have viewed the account log-in information rather than the actual followers as the relevant trade secret information. As noted above, the parties have settled this case, so we will not learn how the court would have ultimately ruled; nevertheless, this case, and the other pending suits discussed above, offer important lessons to employers. While the terms of the settlement are confidential, news reports have indicated that the agreement does allow Mr. Kravitz to keep his Twitter account and followers.

These cases have received media attention, and the two pending cases—Eagle and Maremont—will continue to be closely watched by the legal community to see how courts define ownership interests in employee social media accounts. Employers, however, should not wait on the rulings in these pending cases to take steps to protect their interests in their social media accounts. All three of these cases illustrate the importance of creating clear policies regarding the treatment of business-related social media accounts, and of making sure that employees are aware of those policies. Other measures an employer can take include controlling the passwords of the company’s own social media accounts and ensuring that account names do not include an individual employee’s name. At the same time, employers need to be mindful of new laws in California restricting an employer’s ability to gain access to its employees’ personal social media accounts.

In light of these developments, it will be particularly important to maintain a clear distinction between company and personal social media accounts.