April 2012

We’ve reported before on Section 230 of the Communications Decency Act (CDA), the 1996 statute that states, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”  Courts have interpreted Section 230 to immunize social media and other websites from liability for publishing content created by their users, provided the site owners are not “responsible in whole or in part, for the creation or development of” the offending content. 

Two recent federal cases involving the website TheDirty.com show that, 15 years after the landmark Zeran v. AOL case interpreting Section 230 immunity broadly, courts still grapple with the statute and, arguably, get cases wrong, particularly when faced with unsavory content.

TheDirty.com is an ad-supported website that features gossip, salacious content, news and sports stories.  The site, run by owner/editor Hooman Karamian, a/k/a Nik Richie, prompts users to “submit dirt” via a basic text form requesting “what’s happening” and “who, what, when, where, why,” and allows users to upload files. In response, users, referred to on the site as the “Dirty Army,” submit stories and photographs along with gossip about the people pictured. Richie then posts the pictures and information, often accompanied by his own comments. Two such racy posts, one detailing the sex habits of a Cincinnati Bengals cheerleader and the other about the supposed exploits of a “Church Girl,” led their subjects to bring defamation claims in federal court. Third-party users, not TheDirty.com, generated the content. Cases dismissed on Section 230 grounds, right?  Not quite.

In Jones v. Dirty World Entertainment Recordings, a case in the U.S. District Court for the Eastern District of Kentucky, plaintiff Sarah Jones, a cheerleader for the Cincinnati Bengals football team and also a high school teacher, sued TheDirty.com based on two user-submitted posts that included her picture and statements regarding her sex partners, as well as allegations that she had sexually transmitted diseases. Richie added a one-line comment— “why are all high school teachers freaks in the sack?”—and published the post. Jones requested that the posts be removed, but TheDirty.com refused. Richie also commented on the site directly addressing Jones, saying her concern about the post was misguided and that she was “d[igging] her own grave” by calling attention to it. Jones sought damages for defamation and invasion of privacy under state tort law, and TheDirty.com moved for judgment as a matter of law on CDA immunity grounds. 

The court held that TheDirty.com did not qualify for CDA immunity because it “specifically encouraged the development of what is offensive about the content” (citing the Tenth Circuit’s opinion in Federal Trade Comm’n v. Accusearch).  The court found that TheDirty.com encouraged the development of, and therefore was responsible for, the offensive content based on the site’s name, the fact that the site encouraged the posting of “dirt,” Richie’s personal comments added to users’ posts, and his direct reference to the plaintiff’s request that the post be taken down. The court focused on Richie’s comments, including his statement “I love how the Dirty Army has war mentality. Why go after one ugly cheerleader when you can go after all the brown baggers.”

The Jones court’s analysis diverges from prevailing CDA case law in a few respects. For example, regarding the issue of responding to a subject’s request that an allegedly defamatory post be taken down, the Ninth Circuit has held that deciding what to post and what to remove are “traditional duties of a publisher” for which the CDA provides immunity to website operators.  More critically, in adopting the “specifically encouraged the development of what is offensive” standard coined in Accusearch, the court in Jones reasoned that by requesting “dirt,” the site “encourage[d] material which is potentially defamatory or an invasion of the subject’s privacy,” and therefore lost CDA immunity.  That reasoning, though, could extend to any website functionality, such as free-form text boxes, that permits users to input potentially defamatory material. To hold that a website operator loses immunity based on the mere potential that users will post defamatory content effectively vitiates CDA immunity and parts ways with cases like the Ninth Circuit’s Roommates.com case, which held that a website’s provision of “neutral tools” cannot constitute development of content for purposes of the exception to CDA immunity. For these and other reasons, one leading Internet law commentator calls the case a “terrible ruling that needs to be fixed on appeal.” TheDirty.com’s appeal to the Sixth Circuit is pending.

In a more recent case, S.C. v. Dirty World, LLC, the U.S. District Court for the Western District of Missouri held that Richie and TheDirty.com did qualify for CDA Section 230 immunity on facts similar to those in Jones. The plaintiff in S.C. brought suit based on a user-generated post on TheDirty.com that showed her picture along with a description alleging that she had relations with the user’s boyfriend and attempted to do so with the user’s son. Richie published the post, adding a comment about the plaintiff’s appearance. The court explained that, because a third party authored the allegedly defamatory content, CDA immunity turned on whether TheDirty “developed” the content by having “materially contribute[d] to [its] alleged illegality.”  The court held that the defendants did not materially contribute to the post’s alleged illegality because the defendants never instructed or requested the third party to submit the post at issue, “did nothing to specifically induce it,” and did not add to or substantively alter the post before publishing it on the site.

After noting these facts, and how they differed from the facts in Jones, which the S.C. plaintiff had cited, the court explicitly “distanced itself from certain legal implications set forth in Jones.”  The S.C. court pointed out that a “broad” interpretation of CDA immunity is the accepted view.  It explained that CDA immunity does not, and should not, turn on the “name of the site in and of itself,” but instead focuses on the content that is actually defamatory or otherwise gives rise to legal liability.  The court noted, for example, that the site itself has a variety of content, much of it not defamatory or capable of being defamatory (e.g., sports stories and other news).

Given that some may consider TheDirty.com’s gossip content and mission extreme, cases like S.C. likely provide peace of mind to operators of more conventional social media sites.  Still, should Jones survive appeal, it could lead to forum shopping in cases where plaintiffs expect to face CDA immunity defenses, because the “specifically encouraged” standard could, as in Jones, lead to a loss of immunity. We’ll keep you posted on the appeal.

In our September 2010 issue of Socially Aware, we provided a brief overview of Facebook’s “Statement of Rights and Responsibilities,” the social media service’s complex set of terms and conditions that companies frequently “click-accept” with little review (often, in a rush to establish their Facebook presences).  Naturally, this situation is not limited to Facebook; for many if not most social media services, users signing up for an account are required to agree to the service’s lengthy standard terms and conditions of use.  It’s part of life on the Internet.

And Twitter is no exception.  When you sign up for a Twitter account, you are told that “[b]y clicking the button, you agree to the terms below.”  And although Twitter’s core Terms of Service are a bit shorter than Facebook’s Statement of Rights and Responsibilities, Twitter’s terms similarly link and branch off to a variety of policies, guidelines, and related documents, all of which govern your use of the service.

At the top of Twitter’s hierarchy of terms and conditions are the Twitter Terms of Service.  Those Terms of Service incorporate two documents by reference:  (1) Twitter’s Privacy Policy, which notes that use of Twitter’s services constitutes consent to the collection, transfer, manipulation, storage, disclosure, and other uses of information described in such policy, and (2) the Twitter Rules, which describe how end-users should and should not use Twitter, and impose a variety of rules regarding content, spam, and abuse.  But the Terms of Service also link to Twitter’s Developer Rules of the Road (described by Twitter as “an evolving set of rules for how ecosystem partners can interact with your content”), which govern the use of Twitter’s application programming interface (API) and reflect, more generally, Twitter’s philosophy around how information and content shared on Twitter can and cannot be used.

Given the complexity of Twitter’s ecosystem, the Developer Rules of the Road branch off to and incorporate a variety of other policies and guidelines, including the service’s Display Guidelines (which describe how Tweets must be displayed), rules on trademark usage, automation rules, spam rules (which actually loop back to the end-user-focused Twitter Rules), and various other documents.  Twitter publishes a complete list of its terms, conditions, rules, guidelines, and best practices on its site; there are 36 documents in total.

Below, we describe a few key terms from Twitter’s various written policies.  These terms are not necessarily uncommon for Internet-based services — particularly services that are free to use — but they’re worth keeping in mind:

Key Term for End-Users — Broad License to User Content.  Foremost for many end-users of social media services is the license being granted to such services in users’ posted content — and end-users grant Twitter a typically broad license.  By posting photos or other content on Twitter, end-users grant Twitter the right to “use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute” such content in any manner now known or later developed.  The Terms of Service also expressly provide that such license “includes the right for Twitter to make [such content] available to other companies, organizations or individuals who partner with Twitter” for the purpose of distributing such content on other media and services.  True, given that Tweets are publicly available by their nature (assuming a public account), one would expect a broad license grant.  But the grant to Twitter gives Twitter the right to use Tweets for purposes other than simply operating the Twitter service, and the right to distribute those Tweets in ways that may not have even existed when the Tweets were originally posted.  Although the broad license grant does apply to “protected” Tweets (i.e., Tweets that are not publicly viewable) and public Tweets alike, Twitter does state that protected Tweets will not appear in Twitter or Google searches, meaning that, as a practical matter, a user’s privacy settings should be useful in controlling how widely Twitter may disseminate the user’s Tweets.

Key Terms for Developers — API Terms.  Foremost for many developers who leverage social media services are the services’ rules for accessing their platforms and APIs.  According to Twitter’s Terms of Service, unless otherwise permitted through Twitter’s services or terms, users are required to use the Twitter API in order to “reproduce, modify, create derivative works, distribute, sell, transfer, publicly display, publicly perform, transmit, or otherwise use” Twitter’s content or services.  In light of this, it is important for developers who leverage Twitter in their own apps and services to carefully review the terms and conditions governing Twitter’s API.  Those terms and conditions are sprinkled throughout Twitter’s policies, including the Developer Rules of the Road (changes to which are archived at Twitter’s API Terms of Service Archive) and Twitter’s Rate Limiting page, which addresses the number of “calls” that can be made to various Twitter APIs and services over time.  Of course, all of Twitter’s API restrictions are in addition to, and not in lieu of, those found in the site’s Terms of Service and elsewhere — per the Developer Rules of the Road, use of the API and Twitter content “are subject to certain limitations on access, calls, and use as set forth in the [rules], on dev.twitter.com, or as otherwise provided to you by Twitter.”  Perhaps most importantly, Twitter retains the right to block use of the API and Twitter’s content if Twitter believes that a user has attempted to circumvent or exceed any limitations imposed by Twitter, and Twitter disclaims any liability for resulting costs or damages.
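For developers, rate limits of this kind have a practical consequence: an app built on the API must track and throttle its own calls, because exceeding the cap risks being blocked with no recourse under the terms.  As a purely illustrative sketch — not Twitter’s actual enforcement mechanism, and with hypothetical limit numbers — a simple client-side, fixed-window throttle might look like:

```python
import time


class FixedWindowThrottle:
    """Illustrative client-side throttle: permit at most `limit` API calls
    per `window` seconds, mirroring the kind of per-window cap a service's
    rate-limiting policy describes. Limit values here are hypothetical."""

    def __init__(self, limit=150, window=3600.0):
        self.limit = limit
        self.window = window
        self.window_start = time.monotonic()
        self.calls = 0

    def allow(self):
        now = time.monotonic()
        # Start a fresh window once the current one has elapsed.
        if now - self.window_start >= self.window:
            self.window_start = now
            self.calls = 0
        if self.calls < self.limit:
            self.calls += 1
            return True
        # Caller should back off rather than risk being blocked.
        return False


throttle = FixedWindowThrottle(limit=3, window=60.0)
results = [throttle.allow() for _ in range(5)]
```

A well-behaved client would check `allow()` before each call and queue or delay any work the throttle refuses, rather than retrying immediately.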

Key Terms for Everyone — Modifications to Twitter’s Terms and Services.  Twitter reserves the right to unilaterally modify its Terms of Service and the form and nature of its services at any time.  If Twitter determines in its sole discretion that changes to its Terms of Service are material, Twitter promises to notify users via a Twitter update or by email; nevertheless, as with changes to most websites’ terms of use, a user’s continued use of Twitter following such changes constitutes the user’s agreement to the modified terms.  It is important to keep in mind that changes in a social media site’s services or terms of use — even seemingly tiny changes in the way a social media profile appears to end-users, or in what flavors of activities are or are not permitted on the site — can wreak havoc on a company’s costly social media strategy.  Conveniently, Twitter provides an archive of previous versions of its Terms of Service, which can help users spot changes over time more easily. 

Our message to end-users and developers alike remains what it was back in 2010:  be sure to carefully review social media services’ terms and conditions so that you know what you’re getting into, particularly when you will be investing money or time in using a service for your business or building an app or site that relies on the service’s content or functionality.

While Facebook, Twitter, LinkedIn and other social media platforms have become increasingly important tools for businesses across industries to meet their customers’ needs and expectations, financial institutions have been slow to embrace social media.  This is likely attributable to the highly regulated environment in which financial institutions operate, the unique risks associated with operating within it, and the lack of available guidance on how to navigate and mitigate such risks.

In an effort to address industry concerns, the California Department of Financial Institutions (“DFI”) – the licensing and regulatory agency that oversees California’s state-chartered financial institutions – recently surveyed more than 340 financial institutions regarding their use of social media and related policies.  The survey revealed that 72 percent of the financial institutions surveyed did not have a social media plan, and 59 percent did not have a social media policy.  These findings suggest that either a significant number of financial institutions are not utilizing social media or they are doing so without the important framework needed to help ensure that they do not run afoul of their many regulatory requirements.

To that end, the DFI has published guidance on the development of social media policies.  It first addresses how a financial institution should go about developing a social media plan – specifically, by asking itself a variety of questions that form the basis for plan development, including:

  • What does your financial institution expect to gain from using social media?
  • Who are the target viewers?
  • What types of bank activities and postings are planned?
  • What types of social media do you plan to use and how do you plan to use them?
  • How will the activities be managed and by whom?

The DFI’s guidance also identifies the elements necessary for a financial institution’s creation of appropriate social media policies.  These include:

  • A description of approved social media activities;
  • Guidelines for personal use, if allowed;
  • Definition of permitted content;
  • Inclusion of applicable consumer protection laws and regulations requirements, if the institution’s products and services will be advertised;
  • Employee training; and
  • Identification of oversight responsibility.

The DFI’s three-part series was published in its December, February and March Monthly Bulletins.  The DFI plans to continue to cover these issues in subsequent bulletins.

The DFI is not the only regulatory body that is taking action in this area.  The Federal Financial Institutions Examination Council (“FFIEC”) – the interagency body tasked with prescribing uniform principles, standards, and report forms for the federal examination of financial institutions – has charged a task force with developing guidance on financial institutions’ use of social media.  In addition, the Financial Industry Regulatory Authority (“FINRA”) – an independent regulator of securities firms – has published basic guidance in the form of two Regulatory Notices, one in January 2010 and the other in August 2011.

While greater input may be required from financial industry regulators as corporate usage of social media continues to evolve, the DFI frameworks and the guidance provided by FINRA are the pragmatic first steps needed by an industry that seems to have partly steered clear of this potentially large, growing, and indispensable channel for reaching its consumers.  Financial institutions should seriously consider reviewing these materials when creating their own plans and policies.

On March 26, 2012, the Federal Trade Commission (the “Commission” or “FTC”) released its much-anticipated final privacy report, Protecting Consumer Privacy in an Era of Rapid Change.  The report builds upon the Commission’s December 2010 preliminary report, and provides recommendations for businesses and policymakers with respect to online and offline privacy practices.  The report will be of interest to any company using social media for marketing purposes.  Specifically, the report:

  • Presents a privacy framework that sets forth best practices – not legal requirements – for businesses.  The Commission states that, to the extent that the best practices set forth in the report extend beyond existing legal requirements, such best practices are not intended to serve as a template for law enforcement actions or regulation under laws currently enforced by the Commission.  FTC Chairman Jon Leibowitz reiterated this point to a House Energy and Commerce subcommittee on March 29, 2012, informing legislators that, while companies that follow the report’s best practices would not be in violation of the FTC Act, those that do not follow them would not necessarily be in breach of the law.  In his words, the report “is not a regulatory document or an enforcement document.”  That said, those elements of the report that focus on transparency and consumer choice build on the Commission’s recent law enforcement experience; it is therefore reasonable to assume that the Commission will continue its pattern of focusing on data practices that are not obvious to consumers in context, that are not disclosed adequately, and, in some instances, where consumers do not have meaningful choice.  Of course, the Commission will continue its aggressive enforcement of companies’ privacy and data security promises.
  • Recommends baseline privacy legislation.  In the Commission’s view, because self-regulation has not yet gone far enough, flexible and technologically neutral baseline privacy legislation is desirable.  While encouraging industry to continue its self-regulatory efforts, the Commission also intends the privacy framework set forth in the report to assist Congress in crafting legislation.  The Commission also reiterates its call for federal information security and data breach notification legislation and for legislation regulating the practices of data brokers.
  • Highlights the Commission’s privacy priorities for the coming year.  The report explains that the Commission will promote implementation of the privacy framework by focusing its efforts on five main areas: (1) cooperation with industry to complete the implementation of an easy-to-use, persistent and effective Do Not Track mechanism (the Commission does not call for Do Not Track legislation in this report); (2) improvement of privacy disclosures and other protections offered by mobile services, including through its May 30, 2012 public workshop on revisions to its Dot Com Disclosures guidance; (3) support for targeted legislation to give consumers access to the information about them held by data brokers and encouragement to data brokers that compile data for marketing purposes to create a centralized website to further increase the transparency of their practices; (4) exploration of the privacy issues associated with the comprehensive tracking of consumers’ online activities by large platform providers, such as ISPs, operating systems, browsers and social media in a workshop later this year; and (5) participation with the Department of Commerce and industry stakeholders to create enforceable self-regulatory codes of conduct.

This final priority reflects the Commission’s support for the report issued by the Obama administration on February 23, 2012.  In its report, entitled Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy, the administration detailed a “Consumer Privacy Bill of Rights” and announced the creation of a multi-stakeholder process to be convened by the Department of Commerce to create voluntary codes of conduct which, if adopted by companies, would be enforceable by the Commission pursuant to its deception authority under Section 5 of the FTC Act.  Importantly, the Commission’s report makes clear that the FTC will participate in the Department of Commerce’s multi-stakeholder process.

The Scope of the Privacy Framework

The privacy framework applies to all commercial entities that collect or use online and/or offline consumer data that can be reasonably linked to a specific consumer or computer or other device.  There is an exception for entities that collect only non-sensitive data from fewer than 5,000 consumers per year and do not share the data with third parties, so as not to unduly burden small businesses.  The Commission did not, however, exempt from the framework’s intended coverage those companies already covered by sector-specific privacy laws, such as the Gramm-Leach-Bliley Act or the Health Insurance Portability and Accountability Act.  Instead, it emphasizes in the final report that the framework is intended to foster best practices but not impose conflicting legal obligations, and it urges Congress not to pass legislation that creates overlapping or contradictory requirements for entities subject to such laws. 

The extension of privacy best practices to data linkable to a computer or other device reflects the Commission’s position that the line between “personally identifiable information” (“PII”) and “non-PII” is increasingly blurred.  The Commission justified this application on the grounds that, not only is re-identification of supposedly “anonymous” data increasingly possible, but, in the Commission’s view, businesses have strong incentives to re-identify such data. 

To provide businesses with certainty with respect to what constitutes “reasonably linkable” data, the Commission has taken the position that data is not “reasonably linkable”—and therefore not within the scope of the privacy framework—if the company possessing it implements the following protections: (1) reasonable measures to ensure that the data is de-identified; (2) a public commitment to using the data in a de-identified way; and (3) contractual prohibitions on downstream entities that use the data from re-identifying it, coupled with reasonable measures to ensure compliance with that prohibition.  Even with this attempt at clarity, questions remain, including what it means to “de-identify” data.  For example, does this mean removing PII or does it mean removing any identifier, such as cookie IDs?  Furthermore, what measures are “reasonable” in terms of monitoring downstream entities?

The Substance of the Privacy Framework

The Commission’s report proposes a privacy framework that calls for companies to incorporate “privacy by design” into their practices, to offer consumers simplified choices about how their data is collected and used, and to provide consumers with greater transparency about their practices.   

Privacy by Design

According to the report, companies should promote consumer privacy throughout their organizations and at every stage of the development and life cycle of their products and services.  As a substantive matter, this means that companies should incorporate the following privacy protections into their practices:

  • Reasonable security for consumer data.  The Commission notes that this obligation is already well settled, as it has a long history of enforcing data security obligations under Section 5 of the FTC Act and other laws.  The Commission commends industry’s efforts to ensure the security of consumers’ data, but, nonetheless, it renews its call for Congress to enact comprehensive data security and breach notification legislation.
  • Reasonable limits on data collection.  According to the Commission, reasonable limits are those that are consistent with the context of a particular transaction or the consumer’s relationship with the business (or as required or specifically permitted by law). 
  • Sound retention and disposal practices.  The Commission states that companies should implement reasonable restrictions on the retention of consumer data and should dispose of it once the data has outlived the legitimate purpose for which it was collected.  What is “reasonable” depends on the type of relationship and the nature and use of the data. 
  • Data accuracy.  According to the Commission, companies should maintain the accuracy of the data they hold about consumers.  As with other elements of the framework, the Commission believes that the best approach to achieving accuracy is through a flexible approach, scaled to the intended use and sensitivity of the data at issue.

The Commission also urges businesses to maintain comprehensive data management procedures throughout the life cycle of their products and services.  It cites its recent settlement orders with Facebook and Google as providing a roadmap for the types of comprehensive procedural protections it envisions: (1) designation of personnel responsible for the privacy program; (2) a risk assessment that covers, at a minimum, employee training, management and product design and development; (3) implementation of controls designed to mitigate identified risks; (4) appropriate oversight; and (5) evaluation and adjustment of the program in light of regular testing and monitoring. 

Simplified Consumer Choice

The report encourages companies to simplify consumer choice, in part by identifying those practices for which choice is not necessary.  Specifically, the report provides that companies do not need to provide consumers with choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer or that are required or specifically authorized by law.  While this standard relies to some degree on consumer expectations, it focuses on objective factors related to the consumer’s relationship with the business.  The following commonly accepted practices are provided as examples of the kinds of practices that do not typically require consumer choice: product and service fulfillment, internal operations, fraud prevention, legal compliance and public purpose, and first-party marketing.

The report goes on to address practices that require choice and states that, when choice is required, it should be offered at a time and in a context in which the consumer is making a decision about his or her data.  (The Commission declines, however, to impose any particular method of providing choice, leaving it to industry to develop the most appropriate choice mechanisms.)  As a general matter, data use and disclosure practices that are inconsistent with the context of the transaction or the company’s relationship with the consumer require consumer choice (unless such practices are required or specifically authorized by law).  Such practices may include, for example, sharing customer data with an affiliate for the affiliate’s own direct marketing use, if the consumer would not have been aware of the affiliate relationship (e.g., because the companies are differently branded).

The Commission identifies two practices that it believes require affirmative express consent:  (1) in connection with material retroactive changes to privacy representations (this is not new, as the Commission has expressed it repeatedly for years and has imposed it in settlement orders); and (2) before collecting sensitive data, such as information about children, health and financial information, geolocation data and Social Security numbers. (The Commission also proposes that social networks and others specifically targeting teens should take extra precautions with respect to teens’ submission of personal information.)  The Commission’s identification of specific practices that require affirmative express consent suggests that, where it otherwise calls for choice, clear and conspicuous notice and opt-out would be sufficient.

Greater Transparency

The Commission states that companies should increase the transparency of their data practices, through privacy notices, access to data and consumer education:

  • Privacy notices should be clearer, shorter and more standardized, to enable better comprehension and comparison of privacy practices.  The Commission calls for the simplification of privacy notices, such as through the use of standardized terminology, format and other elements.  In the Commission’s view, members of various industry sectors should work together to create standards relevant to their industry, possibly through the multi-stakeholder process that the Department of Commerce plans to convene.  According to the Commission, the need for simplification and industry involvement is particularly acute in the mobile realm, given the number of entities that want to collect user data and the limited space for disclosures.  As noted above, the Commission plans to address mobile disclosures in a May 30, 2012 public workshop.
  • Companies should provide reasonable access to the consumer data they maintain.  The extent of access should be proportionate to the sensitivity of the data and the nature of its use.  For example, the Commission urges businesses that maintain data for marketing purposes to, at a minimum, provide consumers with access to such data and permit them to suppress categories they would not like used for targeting.
  • Companies should make efforts to increase the transparency of their data enhancement practices.  The Commission does not suggest that companies obtain consent to such practices; however, it urges industry to rely on the other elements of the privacy framework to address the privacy concerns raised by data enhancement.  In the Commission’s view, this means that companies should, for example, explain to consumers how data enhancement works and how they can contact data enhancement sources directly.  They should also encourage their data sources to increase their own transparency.
  • The Commission encourages companies to continue to engage in consumer education efforts and invites industry to re-brand and use the Commission’s own materials.

Conclusion

The report reflects the Commission’s continued concern that consumers bear too much of the burden of understanding and controlling how their data is collected, used, retained and disclosed, and its desire to see this paradigm reversed so that companies shoulder that burden instead.  The extent to which this concern is translated into enforceable requirements will depend in large part on the support the Commission receives from Congress, as well as on the development and adoption of self-regulatory codes of conduct.

The JOBS Act and Crowdfunding
For several months, various legislative proposals that would ease regulatory and financing burdens on smaller companies have been discussed by legislators, business leaders and commentators. These proposals were brought together under the Jumpstart Our Business Startups (JOBS) Act (H.R. 3606). The JOBS Act was passed by Congress on March 27, 2012 and signed into law by President Obama on April 5, 2012. For a comprehensive overview of the JOBS Act, see our Client Alert.

The JOBS Act tackles a variety of issues relating to the financing of businesses; within the realm of social media, however, “crowdfunding” is a key topic that Congress has chosen to regulate. In this article, we take a close look at the crowdfunding component of the new law.

Background on Crowdfunding

“Crowdfunding” or “crowdsourced funding” is an outgrowth of social media that provides an emerging source of funding for ventures. It works by pooling money from individuals who share a common interest and are willing to make small contributions toward the venture. Crowdfunding can be used to accomplish a variety of goals (e.g., raising money for a charity or other causes of interest to the participants), but when the goal is commercial in nature and crowdfunding participants have an opportunity to share in the venture’s profits, federal and state securities laws will likely apply.

Absent an exemption from SEC registration (or actual registration of the offering with the SEC), crowdfunding efforts that involve sales of securities are in all likelihood illegal. In addition to SEC requirements, those seeking capital through crowdfunding have had to be cognizant of state securities laws, which include varying requirements and exemptions. By crowdfunding through the Internet, a person or venture can be exposed to potential liability at the federal level, in all 50 states, and potentially in foreign jurisdictions.

Existing exemptions present problems for persons seeking to raise capital through crowdfunding. For example, Regulation A requires a filing with the SEC and disclosure in the form of an offering circular, which would make conducting a crowdfunding offering difficult. The Regulation D exemptions generally would prove too cumbersome, and a private offering approach or the intrastate offering exemption is inconsistent with widespread use of the Internet. Section 25102(n) of the California Corporations Code might provide a possible exemption for some California issuers, given that it permits general announcement of an offering without qualification in California (with a corresponding exemption from registration at the federal level provided by SEC Rule 1001, the California limited general solicitation exemption).

Crowdfunding advocates have called on the SEC to consider implementing a new exemption from registration under the federal securities laws for crowdfunding. For more on crowdfunding, see also our prior Client Alert.

When H.R. 3606 was adopted in the House of Representatives, the bill included Title III, titled “Entrepreneur Access to Capital.” That Title provided an exemption from registration under the Securities Act for offerings of up to $1 million (or $2 million in certain cases in which investors were provided with audited financial statements), with individual investments limited to $10,000 or 10 percent of the investor’s annual income. The exemption was conditioned on issuers and intermediaries meeting a number of specific requirements, including notice to the SEC about the offering and the parties involved, which would be shared with state regulatory authorities. The measure would have permitted an unlimited number of investors in a crowdfunding offering and would have preempted state securities regulation of these offerings (except that states could address fraudulent offerings through their existing enforcement mechanisms).

The House measure also contemplated that the issuer state a target offering amount and that a third-party custodian withhold the proceeds of the offering until the issuer had raised 60 percent of that target. It further contemplated certain disclosures and questions for investors, and provided an exemption from broker-dealer registration for intermediaries involved in an exempt crowdfunding offering.

After it was adopted, the House crowdfunding measure drew a significant amount of criticism, with much of that criticism focused on a perceived lack of investor protections. In a letter to the Senate leadership, SEC Chairman Mary Schapiro noted that “an important safeguard that could be considered to better protect investors in crowdfunding offerings would be to provide for oversight of industry professionals that intermediate and facilitate these offerings,” and also noted that additional information about companies seeking to raise capital through crowdfunding offerings would benefit investors.

In the Senate, an amendment to H.R. 3606 submitted by Senator Merkley and incorporated in the final JOBS Act provides additional investor protections in crowdfunding offerings. Title III, titled “Crowdfunding,” amends Section 4 of the Securities Act to add a new paragraph (6) that provides a crowdfunding exemption from registration under the Securities Act. The conditions of the exemption are that:

  • The aggregate amount sold to all investors by the issuer, including any amount sold in reliance on the crowdfunding exemption during the 12-month period preceding the date of the transaction, is not more than $1,000,000;
  • The aggregate amount sold to any investor by the issuer, including any amount sold in reliance on the crowdfunding exemption during the 12-month period preceding the date of the transaction, does not exceed:
    • the greater of $2,000 or 5 percent of the annual income or net worth of the investor, as applicable, if either the annual income or the net worth of the investor is less than $100,000; or
    • 10 percent of the annual income or net worth of an investor, as applicable, not to exceed a maximum aggregate amount sold of $100,000, if either the annual income or net worth of the investor is equal to or more than $100,000;
  • The transaction is conducted through a broker or funding portal that complies with the requirements of the exemption; and
  • The issuer complies with the requirements of the exemption.
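The tiered per-investor cap in the bullets above can be sketched as a small calculation. This is an illustration of the statutory arithmetic only, not legal advice; the function name `crowdfunding_investor_limit` is our own, and because the statute’s “as applicable” language is ambiguous, we assume here that the percentage applies to the larger of annual income and net worth. The SEC’s implementing rules may resolve this differently.

```python
def crowdfunding_investor_limit(annual_income, net_worth):
    """Illustrative 12-month per-investor cap under the JOBS Act
    crowdfunding exemption, per the tiers described above.

    Assumption (ours): the percentage is applied to the greater of
    annual income or net worth; the statutory "as applicable" wording
    is ambiguous on this point.
    """
    basis = max(annual_income, net_worth)
    if annual_income < 100_000 or net_worth < 100_000:
        # Tier 1: the greater of $2,000 or 5 percent
        return max(2_000, 0.05 * basis)
    # Tier 2: 10 percent, capped at $100,000 aggregate
    return min(0.10 * basis, 100_000)

# e.g., an investor with $40,000 income and $50,000 net worth:
# max(2000, 0.05 * 50000) = 2500
```

Note that the statute also caps the issuer’s aggregate sales to all investors at $1,000,000 over any 12-month period, which applies independently of the per-investor limit computed above.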

Among the requirements for exempt crowdfunding offerings would be that an intermediary:

  • Registers with the SEC as a broker or a “funding portal,” as such term is defined in the amendment;
  • Registers with any applicable self-regulatory authority;
  • Provides disclosures to investors, as well as questionnaires, regarding the level of risk involved with the offerings;
  • Conducts background checks of officers, directors, and significant shareholders, and takes such other measures as the SEC may specify;
  • Ensures that all offering proceeds are only provided to issuers when the amount equals or exceeds the target offering amount, and allows for cancellation of commitments to purchase in the offering;
  • Ensures that no investor in a 12-month period has invested in excess of the limit described above in all issuers conducting exempt crowdfunding offerings;
  • Takes steps to protect privacy of information;
  • Does not compensate promoters, finders, or lead generators for providing the personal identifying information of potential investors;
  • Prohibits insiders from having any financial interest in an issuer using that intermediary’s services; and
  • Meets any other requirements that the SEC may prescribe.

Issuers also must meet specific conditions in order to rely on the exemption. Among other things, an issuer must file with the SEC, and provide to investors and intermediaries, information about the issuer (including financial statements, which would be reviewed or audited depending on the size of the target offering amount), its officers, directors, and greater-than-20-percent shareholders, and the risks relating to the issuer and the offering. The issuer must also provide specific offering information, such as the use of proceeds, the target offering amount, the deadline to reach the target, and regular updates regarding progress toward the target.

The provision would prohibit issuers from advertising the terms of the exempt offering, other than to provide notices directing investors to the funding portal or broker, and would require disclosure of amounts paid to compensate solicitors promoting the offering through the channels of the broker or funding portal.

Issuers relying on the exemption would need to file with the SEC and provide to investors, no less than annually, reports of the results of operations and financial statements of the issuers as the SEC may determine is appropriate. The SEC may also impose any other requirements that it determines appropriate.

A purchaser in a crowdfunding offering could bring an action against an issuer for rescission, in the event of material misstatements or omissions in connection with the offering, in accordance with Sections 12(b) and 13 of the Securities Act, as if liability were created under Section 12(a)(2) of the Securities Act.

Securities sold on an exempt basis under this provision would not be transferable by the purchaser for a one-year period beginning on the date of purchase, except in certain limited circumstances. The crowdfunding exemption would be available only to domestic issuers that are not reporting companies under the Exchange Act and that are not investment companies, or as the SEC otherwise determines is appropriate. Bad actor disqualification provisions similar to those required under Regulation A would also apply to exempt crowdfunding offerings.

Funding portals would not be subject to registration as a broker-dealer, but would be subject to an alternative regulatory regime, subject to SEC and SRO authority, to be determined by rulemaking. A funding portal is defined as an intermediary for exempt crowdfunding offerings that does not: (1) offer investment advice or recommendations; (2) solicit purchases, sales, or offers to buy securities offered or displayed on its website or portal; (3) compensate employees, agents, or other persons for such solicitation or based on the sale of securities displayed or referenced on its website or portal; (4) hold, manage, possess, or otherwise handle investor funds or securities; or (5) engage in other activities as the SEC may determine by rulemaking.

The provision would preempt state securities laws by making exempt crowdfunding securities “covered securities”; however, some state enforcement authority and notice filing requirements would be retained. State regulation of funding portals would also be preempted, subject to limited enforcement and examination authority.

The SEC must issue rules to carry out these measures not later than 270 days following enactment. The dollar thresholds applicable under the exemption are subject to adjustment by the SEC at least once every five years.

The provisions of this title of the JOBS Act are not self-effectuating, as indicated above.

Practical Considerations

Issuers: For those issuers who are seeking to raise small amounts of capital from a broad group of investors, the crowdfunding exemption may ultimately provide a viable alternative to current offering exemptions, given the potential that raising capital through crowdfunding over the Internet may be less costly and may provide more sources of funding. At the same time, issuers will need to weigh the ongoing costs that will arise with crowdfunding offerings, in particular the annual reporting requirement that is contemplated by the legislation. Moreover, it is not yet known how much intermediaries such as brokers and funding portals will charge issuers once SEC and SRO regulations apply to their ongoing crowdfunding operations.

Intermediaries: Brokers and potential funding portals will need to consider how their processes can be revamped to comply with regulations applicable to exempt crowdfunding offerings, in particular given the level of information that will need to be provided in connection with crowdfunding offerings and the critical role that intermediaries will play in terms of “self-regulating” these offerings.

Join us for this timely seminar series in New York, Palo Alto, and San Francisco.

The use of social media by public companies and their employees raises many legal issues, such as the application of the federal securities laws to communications made through company websites, blogs, Twitter and Facebook. Those communications implicate laws regulating selective disclosure and prohibited disclosures during public and private securities offerings. In this program we will provide best practices and practical advice on addressing these issues. Topics will include:

  • The latest trends in the use of social media for communicating corporate and investor information
  • The use of social media in the public and private securities offering process, including crowdfunding
    • Social media considerations under Regulations FD and G and proxy solicitation
    • Social media guidelines vs. policies
    • Antifraud considerations with social media communications

Speakers

To register for one of these sessions, please click here.

NY and CA CLE credit are pending.