
Socially Aware Blog

The Law and Business of Social Media

Social Links: Instagram’s & Pinterest’s new features; the per-post premium paid to top influencers; a successful social media investor shares his strategy

Posted in Advertising, First Amendment, Free Speech, Marketing, Mobile

Instagram now allows users to zoom in on photos in their feeds, and at least 11 brands are already capitalizing on the new feature.

Pinterest acquired Instapaper, a tool that allows you to cache webpages for reading at a later time.

A social-media celebrity with 500,000 followers and a lot of people interacting with his or her content could bring in how much for a single post?!

Snapchat’s first investor shares his secret for identifying the next big app.

SEC steps up scrutiny of investment advisers’ use of social media.

As younger audiences’ primary source of news, social media has understandably affected photojournalism.

Should social media companies establish guidelines for when they will—and will not—heed police officers’ requests to suspend suspects’ accounts?

Meet the officer behind the viral Facebook page of a small New England city’s police department.

Wondering whether you should hit “reply all” when someone has mistakenly included you on an email chain? The New York Times has one word for you.

Court Upholds Enforceability of “Clickwrap” Employee Agreement

Posted in Electronic Contracts


As we have previously discussed, if you want your electronic contracts to be enforceable, it is a best practice to require the counterparty to affirmatively accept the contract by checking a box or clicking a button. A recent New Jersey district court decision, ADP, LLC v. Lynch, reinforces this point. Such issues most often arise in the context of website terms of use, but ADP v. Lynch involved a non-competition provision and forum selection clause contained in documentation presented to employees electronically in connection with stock option grants.

The employer, ADP, sued two former employees for taking jobs at a competitor in violation of certain restrictive covenants contained in the stock option grant documentation. The employees sought to dismiss the action on the basis of lack of jurisdiction, and ADP responded by pointing to a forum selection clause in the grant documentation. The employees argued, however, that they had not received adequate notice of the restrictive covenants and that the forum selection clause was unenforceable.

The grant documentation containing the restrictive covenants and the forum selection clause had been presented to the employees in electronic form and, based on the allegations in ADP’s complaint, the employees were required to acknowledge the documentation in order to receive the stock option grants. Specifically, ADP had presented the documentation in such a way that each employee was physically unable to click the required “Accept Grant” button unless he or she had affirmatively checked a prior box indicating that he or she had read the associated documents containing the restrictive covenants and forum selection clause.

The court also noted that ADP’s manager of its stock plan services “provided a step-by-step rundown” of the process that employees were required to follow to accept stock option grants, and that, “in order to accept those awards, an employee would have to affirmatively acknowledge that he or she reviewed the Restrictive Covenants before proceeding.” This illustrates another point we have noted previously: If you want your electronic contracts to be enforceable, you should not only make sure to implement them in a way that requires affirmative acceptance, you should also be prepared to produce evidence that the user at issue actually accepted.
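The two elements the court credited in ADP v. Lynch—gating acceptance behind an affirmative acknowledgment, and being able to produce evidence of that acceptance—can be illustrated in code. The sketch below is a minimal Python model of such a flow, not ADP’s actual system; all identifiers and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AcceptanceRecord:
    """Retained as evidence that this user actually accepted."""
    user_id: str
    document_id: str
    accepted_at: str  # ISO-8601 UTC timestamp

def accept_grant(user_id: str, document_id: str, acknowledged: bool) -> AcceptanceRecord:
    """Mirror the gating the court described: the 'Accept Grant' action is
    unavailable unless the user has first checked the box confirming that
    he or she read the associated grant documents."""
    if not acknowledged:
        raise PermissionError("Cannot accept grant: documents not acknowledged")
    return AcceptanceRecord(
        user_id=user_id,
        document_id=document_id,
        accepted_at=datetime.now(timezone.utc).isoformat(),
    )

# A blocked attempt, then a successful one that yields an audit record.
try:
    accept_grant("emp-42", "grant-2016", acknowledged=False)
except PermissionError as e:
    print(e)

record = accept_grant("emp-42", "grant-2016", acknowledged=True)
print(record.user_id, record.document_id)
```

The point of the stored record is the second half of the lesson: an affirmative-acceptance flow is only as useful as the proof it leaves behind.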

In light of the above, the court analyzed the grant documentation containing the restrictive covenants and forum selection clause as an enforceable “clickwrap” contract similar to the website terms of use at issue in another case we have written about previously, Fteja v. Facebook, Inc.:

 “At this stage in the litigation, the Court finds that the forum selection clauses are encompassed by enforceable clickwrap agreements. The complaints unequivocally allege that an employee could not accept any stock grants until acknowledging that he or she reviewed all grant documents, including the Restrictive Covenants that contained the forum selection clauses. […] In order to accept those awards, an employee would have to affirmatively acknowledge that he or she reviewed the Restrictive Covenants before proceeding. […] Therefore, this case involves the type of clickwrap agreement that other courts have found to be enforceable.”

The court also found unpersuasive the employees’ argument that mutual assent was lacking because the acknowledgment box did not expressly state “I agree to the terms of the grant documents,” but instead merely required the employees to acknowledge that they had read those documents. According to the court, this was a “distinction without difference” because, in accepting the option grant, the defendants were required to represent as part of the grant agreements that they had read the restrictive covenant agreements.

Accordingly, as ADP sufficiently alleged that it had required the employees to affirmatively accept the restrictive covenants and forum selection clause as part of the electronic contracting process, the court denied the employees’ motion to dismiss.

While this case does not necessarily break new ground in terms of the enforceability of electronic contracts, it does illustrate that the same principle applies whether you are seeking to impose terms and conditions on users of your website or enforce restrictive covenants and a forum selection clause in an employment agreement: make sure the counterparty is required to take some clear and affirmative action to expressly accept the contract.

*          *          *

For more on what it takes for an online agreement to be enforceable, see Implementing and Enforcing Online Terms of Use; Three Steps to Help Ensure the Enforceability of Your Website’s Terms of Use; Clickwrap, Browsewrap and Mixed Media Contracts: A Few Words Can Go a Long Way; and Terms and Conditions Buried in Easily Ignored Scroll Box Don’t Cut It, the Seventh Circuit Holds.

Social Links: Google penalizes sites with pop-up ads; proposed Federal legislation to criminalize revenge porn; ad industry group questions Kardashians’ social media posts

Posted in Advertising, Employment Law, Endorsement Guides, Free Speech, FTC, Labor Law, Litigation, Marketing, Mobile, Privacy

Google is cracking down on mobile pop-up ads by knocking down the search-result position of websites that use them.

The National Labor Relations Board decided that a social media policy Chipotle had in place for its employees violated federal labor law.

A group of lawmakers plans to introduce legislation that would criminalize revenge porn—explicit images posted to the web without the consent of the subject—at the federal level.

The Truth in Advertising organization sent the Kardashians a letter threatening to report them for violating the FTC’s endorsement guides. This isn’t the first time the legality of the famous family’s social media posts has been called into question. If only Kim would read our influencer marketing blog posts.

According to one study, 68% of publishers use editorial staff to create native ads.

Twitter launched a button that a company can place on its website to allow users to send a direct message to the company’s Twitter inbox.

The Center for Democracy & Technology criticized the Department of Homeland Security’s proposal to ask visa-waiver-program applicants to disclose their social media account information.

UK lawmakers issued a report calling on the big social media companies to do more to purge their platforms of hate speech and material that incites violence.

Social media is playing a bigger role in jury selection, Arkansas prosecutors and criminal defense lawyers say.

A day in the life of The Economist’s head of social media.

Seven things smart entrepreneurs do on Instagram.

Four ways to get busy people to read the email you send them.

Want to know how Facebook views your political leanings? Here’s the way to find out.

Controversial California Court Decision Significantly Narrows a Crucial Liability Safe Harbor for Website Operators

Posted in Defamation, Online Reviews, Section 230 Safe Harbor


A recent California court decision involving Section 230 of the Communications Decency Act (CDA) is creating considerable concern among social media companies and other website operators.

As we’ve discussed in past blog posts, CDA Section 230 has played an essential role in the growth of the Internet by shielding website operators from defamation and other claims arising from content posted to their websites by others.

Under Section 230, a website operator is not “treated as the publisher or speaker of any information provided” by a user of that website; as a result, online businesses such as Facebook, Twitter and YouTube have been able to thrive despite hosting user-generated content on their platforms that may be false, deceptive or malicious and that, absent Section 230, might subject these and other Internet companies to crippling lawsuits.

Recently, however, the California Court of Appeal affirmed a lower court opinion that could significantly narrow the contours of Section 230 protection. After a law firm sued a former client for posting defamatory reviews on Yelp.com, the court not only ordered the former client to remove the reviews but also ordered Yelp (which was not a party to the dispute) to remove them.

The case, Hassell v. Bird, began in 2013 when attorney Dawn Hassell sued former client Ava Bird regarding three negative reviews that Hassell claimed Bird had published on Yelp.com under different usernames. Hassell alleged that Bird had defamed her, and, after Bird failed to appear, the California trial court issued an order granting Hassell’s requested damages and injunctive relief.

In particular, the court ordered Bird to remove the offending posts, but Hassell further requested that the court require Yelp to remove the posts because Bird had not appeared in the case herself. The court agreed, entering a default judgment and ordering Yelp to remove the offending posts. (The trial court also ordered that any subsequent comments associated with Bird’s alleged usernames be removed, which the Court of Appeal struck down as an impermissible prior restraint.) Yelp challenged the order on a variety of grounds, including under Section 230.

The Court of Appeal held that the Section 230 safe harbor did not apply, and that Yelp could be forced to comply with the order. The court reasoned that the order requiring Yelp to remove the reviews did not impose any liability on Yelp; Yelp was not itself sued for defamation and had no damages exposure, so Yelp did not face liability as a speaker or publisher of third-party speech. Rather, citing California law that authorized a court to prevent the repetition of “statements that have been adjudged to be defamatory,” the court characterized the injunction as “simply” controlling “the perpetuation of judicially declared defamatory statements.” The court acknowledged that Yelp could face liability for failing to comply with the injunction, but that would be liability under the court’s contempt power, not liability as a speaker or publisher.

The Hassell case represents a significant setback for social media companies, bloggers and other website operators who rely on the Section 230 safe harbor to shield themselves from the misconduct of their users. While courts have previously held that a website operator may be liable for “contribut[ing] materially to the alleged illegality of the conduct”—such as StubHub.com allegedly suggesting and encouraging illegally high ticket resale prices—here, in contrast, there is no claim that Yelp contributed to or aided in the creation or publication of the defamatory reviews, besides merely providing the platform on which such reviews were hosted.

Of particular concern for online businesses is that Hassell appears to create an end-run around Section 230 for plaintiffs who seek to have allegedly defamatory or false user-generated content removed from a website—sue the suspected posting party and, if that party fails to appear, obtain a default judgment; with a default judgment in hand, seek a court order requiring the hosting website to remove the objectionable post, as the plaintiff was able to do in the Hassell case.

Commentators have observed that Hassell is one of a growing number of recent decisions seeking to curtail the scope of Section 230. After two decades of expansive applications of Section 230, are we now on the verge of a judicial backlash against the law that has helped to fuel the remarkable success of the U.S. Internet industry?


*          *          *

For other Socially Aware blog posts regarding the CDA Section 230 safe harbor, please see the following: How to Protect Your Company’s Social Media Currency; Google AdWords Decision Highlights Contours of the CDA Section 230 Safe Harbor; and A Dirty Job: TheDirty.com Cases Show the Limits of CDA Section 230.


Now Available: The August Issue of Our Socially Aware Newsletter

Posted in Cyberbullying, E-Commerce, Infographic, Marketing, Mobile, Privacy, Protected Speech, Right To Be Forgotten, Terms of Use

The latest issue of our Socially Aware newsletter is now available here.

In this issue of Socially Aware, our Burton Award-winning guide to the law and business of social media, we discuss the impact online trolls are having on social media marketing; we revisit whether hashtags should be afforded trademark protection; we explain how an unusual New Jersey law is disrupting the e-commerce industry and creating traps for the unwary; we explore legal and business implications of the Pokémon Go craze; we examine a recent federal court decision likely to affect application of the Video Privacy Protection Act to mobile apps; we discuss a class action suit against an app developer that highlights the legal risks of transitioning app customers from one business model to another; and we describe how Europe’s Right to Be Forgotten has spread to Asia.

All this—plus infographics illustrating the enormous popularity of Pokémon Go and the unfortunate prevalence of online trolling.

Read our newsletter.

Social Links: Twitter offers anti-harassment tools; Pinterest takes on video ads; P&G changes its social strategy

Posted in Advertising, Cyberbullying, Disappearing Content, First Amendment, Litigation, Marketing

Twitter took steps to remedy its harassment problem.

In addition, over the last six months, Twitter suspended 235,000 accounts that promoted terrorism.

The Washington Post is using language-generation technology to automatically produce stories on the Olympics and the election.

Video ads are going to start popping up on Pinterest.

Does it make sense for big brands to invest in expensive, highly targeted social media advertising? Procter & Gamble doesn’t think so.

These brands are using Facebook in particularly effective ways during the Olympic games.

Since we first reported on the phenomenon nearly two years ago, Facebook has become an increasingly common vehicle for serving divorce papers.

Across the country, states are grappling with the conflict between existing laws that prohibit disclosing ballot information or images and the growing phenomenon of “ballot selfies”—photos posted to social media of people at the polls casting their ballots or of the ballots themselves.

Creating dozens of Facebook pages for a single brand can help marketers increase social media engagement and please the Facebook algorithm gods, according to Contently.

Here’s how Snapchat makes money from disappearing videos.

A Harvard Business Review article advises marketers to start listening to (as opposed to managing) conversations about their brands on social media.

For intel on what it can do to keep teens’ attention, Instagram goes straight to the source.

Social Links: Facebook’s anti-ad-blocking software; LinkedIn’s “scraper” lawsuit; FTC’s upcoming crackdown on social influencers

Posted in Advertising, Compliance, Cyberbullying, Data Security, Endorsement Guides, Free Speech, Litigation, Marketing, Mobile, Online Endorsements

Facebook introduced technology that disables ad blockers used by people who visit the platform via desktop computers, but Adblock Plus has already foiled the platform’s efforts, at least for now.

A look at Twitter’s 10-year failure to stop harassment.

Are mobile apps killing the web?

LinkedIn sues to shut down “scrapers.”

The FTC is planning to police social media influencers’ paid endorsements more strictly; hashtags like #ad may not be sufficient to avoid FTC scrutiny. Officials in the UK are cracking down on paid posts, too.

Dan Rather, Facebook anchorman.

The U.S. Olympic Committee sent letters to non-sponsoring companies warning them against posting about the games on their corporate social media accounts.

How IHOP keeps winning the love & affection of its 3.5 million Facebook fans.

A Canadian woman whose home was designated a Pokémon Go “stop” is suing the app’s creators for trespass and nuisance. We saw that coming.

There’s a website dedicated to helping Snapchat users fool their followers into thinking they’re out on the town.

Facebook has been wooing premium content owners, but TV companies are reportedly resisting.

PETA got a primatologist to submit an amicus curiae brief supporting its suit alleging that a monkey who took a selfie is entitled to the copyright in the image.

Commercializing User-Generated Content: Five Risk Reduction Strategies

Posted in Copyright, DMCA, IP, Terms of Use

We’re in the midst of a seismic shift in how companies interact with user-generated content (UGC).

For years, companies were happy simply to host UGC on their websites, blogs and social media pages and reap the resulting boost to their traffic numbers. And U.S. law—in the form of Section 512(c) of the Digital Millennium Copyright Act (DMCA)—accommodated this passive use of UGC by creating a safe harbor from copyright damages for websites, blogs and social media platform operators that hosted UGC posted without the authorization of the owners of the copyrights in such UGC, so long as such operators complied with the requirements of the safe harbor.

Increasingly, companies are no longer satisfied with passively hosting UGC. Rather, they now want to find creative ways to commercialize such content—by incorporating it into ads (including print, TV and other offline ads), creating new works based on such content and even selling such content. Yet, in moving beyond mere hosting to proactive exploitation of UGC, companies risk losing the benefit of the DMCA Section 512(c) safe harbor, which could result in potentially significant copyright liability exposure.

For example, if a company finds that users are posting potentially valuable UGC to the company’s Facebook page, or on Twitter in connection with one of the company’s hashtags, that company may want to make such UGC available on its own website. The DMCA Section 512(c) safe harbor, however, is unlikely to protect the company in copying such UGC from the Facebook or Twitter platform to its own website.

The reality is that any company seeking to monetize or otherwise exploit UGC needs to proceed with extreme caution. This is true for several reasons:

  • UGC can implicate a wide range of rights . . . As with any content, UGC is almost certainly subject to copyright protection, although certain Tweets and other short, text-only posts could potentially be exempt from copyright protection if they qualify as “short phrases” under the Copyright Act. If any individuals are identifiable in UGC, then rights of publicity and rights of privacy may also be relevant. In addition, UGC may contain visible third-party trademarks or comments that defame or invade the privacy of third parties.
  • . . . and a wide range of rightsholders. Notably, many of the rights necessary to exploit UGC are likely to be held by individuals and corporations other than the posting user. For example, unless a photo is a “selfie,” the photographer and the subject of the photo will be different individuals, with each holding different rights—copyright, for the photographer, and the rights of publicity and privacy, for the subject—that could be relevant to the exploitation of the photo. Moreover, any trademarks, logos and other images contained in a photo could potentially implicate third-party rightsholders, including third-party corporations. Videos also raise the possibility of unauthorized clips or embedded music.
  • If the UGC is hosted by a third-party social network, that network’s Terms of Service may help—or hurt—efforts to exploit the UGC. Most social media networks collect broad rights to UGC from their users, although they differ substantially when it comes to passing those rights along to third parties interested in exploiting the content. For example, if a company uses Twitter’s Application Programming Interface (API) to identify and access Tweets that it would like to republish, then Twitter grants to that company a license to “copy a reasonable amount of and display” the Tweets on the company’s own services, subject to certain limitations. (For example, Twitter currently prohibits any display of Tweets that could imply an endorsement of a product or service, absent separate permission from the user.) Instagram also has an API that provides access to UGC, but, in contrast to Twitter, Instagram’s API terms do not appear to grant any license to the UGC and affirmatively require companies to “comply with any requirements or restrictions” imposed by Instagram users on their UGC.

With these risks in mind, we note several emerging best practices for a company to consider if it has decided to exploit UGC in ways that may fall outside the scope of DMCA Section 512(c) and other online safe harbors. Although legal risk can never be eliminated in dealing with UGC, these strategies may help to reduce such risk:

  • Carefully review the Social Media Platform Terms. If the item of UGC at issue has been posted to a social media platform, determine whether the Terms of Service for that platform grant any rights to use such posted UGC off of the platform or impose any restrictions on such content. Note, however, that any license to UGC granted by a social media platform almost certainly will not include any representations, warranties or indemnities, and so it may not offer any protection against third-party claims arising from the UGC at issue.
  • Seek Permission. If the social media platform’s governing terms don’t provide you with all of the rights needed to exploit the UGC item at issue (or even if they do), seek permission directly from the user who posted the item. Sophisticated brands will often approach a user via the commenting or private messaging features of the applicable social media platform, and will present him or her with a link to a short, user-friendly license agreement. Often, the user will be delighted by the brand’s interest in using his or her content. Of course, be aware that the party posting the content may not be the party that can authorize use of that content, as Agence France Presse learned the hard way in using photos taken from Twitter.
  • Make Available Terms and Conditions for “Promotional” Hashtags. If a company promotes a particular hashtag to its customers, and would like to use content that is posted in conjunction with the hashtag, the company could consider making available a short set of terms alongside its promotion of that hashtag. For example, in any communications promoting the existence of the hashtag and associated marketing campaign, the company could inform customers that their use of the hashtag will constitute permission for the company to use any content posted together with the hashtag. Such an approach could face significant enforceability issues—after all, it is essentially a form of “browsewrap” agreement—but it could provide the company with a potential defense in the event of a subsequent dispute.
  • Adopt a Curation Process. Adopt an internal curation process to identify items of UGC that are especially high risk, which could include videos, photos of celebrities, photos of children, professional-quality content, any content containing copyright notices, watermarks and so forth, and any content containing potentially defamatory, fraudulent or otherwise illegal content. Ensure that the curators are trained and equipped with checklists and other materials approved by the company’s legal department or outside counsel. Ideally, any high-risk content should be subject to the company’s most stringent approach to obtaining permission and clearing rights—or perhaps avoided altogether.
  • Adjust the Approach for High-Risk Uses. Consider the way in which the UGC at issue is expected to be used, and whether the company’s risk tolerance should be adjusted accordingly. For example, if an item of UGC will be used in a high-profile advertisement, the company may want to undertake independent diligence on any questionable aspects of the UGC, even after obtaining the posting user’s permission—or perhaps avoid any questionable UGC altogether.
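A curation process like the one described above can start as a simple rules-based screen that routes risky items to legal review before any commercial use. The Python sketch below is purely illustrative—the item attributes and risk flags are hypothetical stand-ins for a counsel-approved checklist, and no automated screen substitutes for trained human reviewers.

```python
from dataclasses import dataclass

# Hypothetical attributes a curator might record for each UGC item.
@dataclass
class UGCItem:
    item_id: str
    is_video: bool = False
    depicts_minor: bool = False
    depicts_celebrity: bool = False
    has_watermark_or_notice: bool = False
    looks_professional: bool = False

# Each flag maps to the risk rationale from the checklist above.
HIGH_RISK_FLAGS = {
    "is_video": "videos may contain unlicensed clips or embedded music",
    "depicts_minor": "photos of children require heightened consent",
    "depicts_celebrity": "right-of-publicity exposure",
    "has_watermark_or_notice": "watermark/notice suggests a professional rightsholder",
    "looks_professional": "professional-quality content raises infringement risk",
}

def triage(item: UGCItem) -> list[str]:
    """Return the reasons an item should be escalated to legal review."""
    return [reason for flag, reason in HIGH_RISK_FLAGS.items()
            if getattr(item, flag)]

item = UGCItem("post-123", is_video=True, has_watermark_or_notice=True)
reasons = triage(item)
if reasons:
    print(f"{item.item_id}: escalate to legal review ({len(reasons)} flags)")
```

Items that trip any flag would then get the company’s most stringent permission and rights-clearance treatment—or be skipped altogether.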

In a social media age that values authenticity, more and more companies—even big, risk-averse Fortune 100 companies—are interested in finding ways to leverage UGC relevant to their business, products or services. Yet the shift from merely hosting UGC to actively exploiting it raises very real legal hurdles for companies. The tips above are not a substitute for working closely with experienced social media counsel, but they collectively provide a framework for addressing legal risks in connection with a company’s efforts to commercialize UGC.

*          *          *

For more on the issues related to user-generated content, see New Court Decision Highlights Potential Headache for Companies Hosting User-Generated Content; Court Holds That DMCA Safe Harbor Does Not Extend to Infringement Prior to Designation of Agent; and Thinking About Using Pictures Pulled From Twitter? Think Again, New York Court Warns.

Ninth Circuit Case Demonstrates That the Social Media Platform, Not the User, Is in Control

Posted in Litigation, Privacy

We have written before about website operators’ use of the federal Computer Fraud and Abuse Act (CFAA) to combat data scraping. We have also noted a number of recent cases in which courts held that social media platforms, rather than the users of those platforms, have the right to control content on and access to the relevant websites. A recent Ninth Circuit decision, Facebook v. Power Ventures, brings these two trends together.

Power Ventures, the defendant, operated a website that aggregated users’ content, such as friends lists, from various social media platforms. In an attempt to increase its user base, Power Ventures initiated an advertising campaign that encouraged users to invite their Facebook friends to Power Ventures’ site.

Specifically, an icon on the Power Ventures site gave users the option to “Share with friends through my photos,” “Share with friends through events,” or “Share with friends through status,” and displayed a “Yes I do” button that users could click. If the user clicked the “Yes I do” button, Power Ventures would create an event, photo, or status on the user’s Facebook profile. In some cases, clicking the button also caused an email to be sent to the user’s friends “from” Facebook stating that the user had invited them to a Facebook event.

Upon becoming aware of this activity, Facebook sent Power Ventures a cease and desist letter informing Power Ventures that it had violated Facebook’s terms of use and demanding that Power Ventures stop soliciting Facebook users’ information. Facebook also blocked Power Ventures’ IP address from accessing Facebook. When Power Ventures changed its IP address and continued to access the site, Facebook sued, alleging among other things that Power Ventures had violated the CFAA. As we discussed at greater length in our previous article, the CFAA imposes liability on anyone who “intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains . . . information from any protected computer.”

In analyzing Facebook’s CFAA claim, the court reasoned that Power Ventures did not access Facebook’s computers without authorization initially because “Power users arguably gave Power permission to use Facebook’s computers to disseminate messages” and, accordingly, “Power reasonably could have thought that consent from Facebook users to share the promotion was permission for Power to access Facebook’s computers.” That all changed, however, when Facebook sent Power Ventures the cease and desist letter expressly rescinding whatever authorization Power Ventures may have otherwise had. According to the court, “[t]he consent that Power had received from Facebook users was not sufficient to grant continuing authorization to access Facebook’s computers after Facebook’s express revocation of permission.”

The court employed a colorful analogy to support its reasoning:

Suppose that a person wants to borrow a friend’s jewelry that is held in a safe deposit box at a bank. The friend gives permission for the person to access the safe deposit box and lends him a key. Upon receiving the key, though, the person decides to visit the bank while carrying a shotgun. The bank ejects the person from its premises and bans his reentry. The gun-toting jewelry borrower could not then reenter the bank, claiming that access to the safe deposit box gave him authority to stride about the bank’s property while armed. In other words, to access the safe deposit box, the person needs permission both from his friend (who controls access to the safe) and from the bank (which controls access to its premises). Similarly, for Power to continue its campaign using Facebook’s computers, it needed authorization both from individual Facebook users (who controlled their data and personal pages) and from Facebook (which stored this data on its physical servers).

Accordingly, the court held that, following receipt of Facebook’s cease and desist letter, Power Ventures intentionally accessed Facebook’s computers knowing that it was not authorized to do so, making Power Ventures liable under the CFAA.

On one level, Facebook v. Power Ventures can be seen as a battle between two competing social media platforms over valuable user data. Certainly it is easy to understand why Facebook would object to Power Ventures poaching Facebook’s data. But the case can also be seen as an example of social media operators exerting the right to control their platforms, and the content and data that users post to those platforms, even against the users’ own wishes.

In this sense, one can place Facebook v. Power Ventures in the line of recent cases holding that, at the end of the day, it is the social media platform operator and not the user that controls the platform. And that is an important fact for individuals and companies to keep in mind when they are investing time and money to establish and maintain a social media presence on a platform controlled by someone else.

*          *          *

For more regarding online data scraping and the Computer Fraud and Abuse Act, see our earlier blog post Data for the Taking: Using the Computer Fraud and Abuse Act to Combat Web Scraping.

Social Links: Twitter’s tough quarter; Yelp warns users about litigious dentist; Pinterest battles Snapchat

Posted in Advertising, Digital Content, Disappearing Content, Marketing, Mobile

Instagram now allows celebrities to block trolls.

While Facebook reached new highs last quarter, Twitter continued to stumble. Will adding more live video content or allowing users to overlay Snapchat-like custom emoji collages on photos help Twitter regain its footing?

Tips for fixing your company’s social media marketing strategy.

A pop singer told fans to send him their Twitter passwords so he could post personal messages to their feeds. Marketing genius or potential Computer Fraud and Abuse Act violation?

Tiffany & Co. launched a Snapchat filter to attract millennials.

Yelp posted a warning on the Yelp.com page of a Manhattan dentist who filed defamation suits against five patients over four years for giving him negative reviews.

Sponsored content is becoming king in a Facebook world.

The New York Times built an in-house analytics dashboard to make it easy for its reporters to access reader engagement data.

Pinterest appears to be losing to Snapchat in the battle for digital ad dollars.

Profile pranks, or “endorsement bombing,” on LinkedIn are an actual thing.