- Back to the future. Socially Aware readers – and editors – of, uhm, a certain age will fondly recall how, during the early days of the dotcom era, we hung out on message boards and in chat rooms discussing (some might say arguing about) politics, sports, movies, music, you name it – with people we’d never met, and never would meet, in person. Well, Facebook is now trying to recapture that vibe with a new feature called “Rooms” – free-form spaces for text and photos, each built around a niche interest chosen by the room’s creator. Of course, it’s not 1995 anymore, and one question that has to be asked is: can “Rooms” fit into a business’s marketing strategy? It’s easy to imagine how: makers of high-end kitchen equipment could participate in “Rooms” on gourmet cuisine, or athletic-apparel brands in “Rooms” on yoga poses. Old-school feature, meet new-school branding.
- If an ad falls below the fold, does it make an impression? Online ads are big business, of course, as advertising rapidly migrates from print to websites, apps, social media and other online outlets. But how do advertisers even know that their ads are being noticed? In the old days, it was a pretty fair assumption that newspaper ads were actually looked at by readers, but a major 2013 survey showed that more than 50 percent of ads online are not viewed. Advertisers and agencies would, understandably, like to see standards in place to ensure that they’re not paying for ads that a web surfer had no chance of seeing (for example, because the ad was “below the fold” on a site’s home page, yet the site visitor never scrolled down to where the ad could be viewed). A media VP at Unilever has noted, “It’s simple — we want to get what we pay for.” So agencies and clients – led by GroupM, the world’s largest ad-buying firm, and by Unilever – are pushing for standards addressing these concerns. Among the proposed standards: 100% of display ads must be visible to site visitors; 100% of the video player for video ads must be visible to site visitors, and at least 50% of the video must be played while visible; the video player’s sound cannot be turned off while the video is playing; and no use of “auto-start” functionality – rather, the site visitor must initiate playing of the video ad.
- Laundry list. The “Internet of things,” touted for years as a big part of the digital future, seems to be approaching rather more slowly than anticipated. Whirlpool, the nation’s largest appliance maker, is marketing a “smart washer” and “smart dryer” at $1,699 each, but these cutting-edge, fully wired machines are not exactly jumping off the shelves. Many consumers are apparently in no rush to pay that kind of cash just to own a Web-enabled washing machine that will text them when their clothes are ready for the dryer. Even a Whirlpool executive acknowledges the problem, observing that “trying to understand exactly the value proposition that you provide to the consumer has been a little bit of a challenge.” After all, the machine won’t sort and fold your laundry for you, or track down that missing sock – now that’s an innovation worth paying a premium for.
For many companies, the main question about cloud computing is no longer whether to move their data to the “cloud,” but how they can accomplish this transition. Cloud (or Internet-based on-demand) computing involves a shift away from reliance on a company’s own local computing resources, in favor of greater reliance on shared servers and data centers. Well-known examples of cloud computing services include Google Apps, Salesforce.com, and Amazon Web Services. In principle, a company also may maintain its own internal “private cloud” without using a third-party provider. Since many companies choose to use third-party cloud providers, however, this article will focus on that cloud computing model.
Cloud computing offerings range from the provision of IT infrastructure alone (servers, storage, and bandwidth) to the provision of complete software-enabled solutions. Cloud computing can offer significant advantages in cost, efficiency, and accessibility of data. The pooling and harnessing of processing power provides companies with flexible and cost-efficient IT systems. At the same time, however, cloud computing arrangements tend to reduce a company’s direct control over the location, transfer, and handling of its data.
The flexibility and easy flow of data that characterize the cloud can raise challenging issues related to protection of data in the cloud. A company’s legal obligations and risks will be shaped by the nature of the data to be moved to the cloud, whether the data involve personal information, trade secret information, customer data, or other competitively sensitive information. This article describes the special legal considerations that apply when moving personal information to the cloud. It also offers a framework to help companies navigate these issues to arrive at a solution that meets their own legal and business needs.
Determine the categories of personal information to be moved to the cloud
As a general principle, personal information includes any information that identifies or can be associated with a specific individual. Some types of personal information involve much greater legal and business risks than other types of personal information. For example, a database containing health information will involve greater risks than a database containing names and business contact information of prospective business leads. Also, financial regulators in many countries require specific security standards for financial information. Accordingly, a cloud computing service that may be sufficient for the business lead data may fail to provide the legally required level of protection for health, financial, or other sensitive types of information.
A company will want to develop a strategy that provides sufficient protection to the most sensitive personal information to be transmitted to the cloud. In some cases, a company may elect to maintain certain types of personal information internally, in order to take advantage of more cost-efficient cloud computing services for its less-sensitive data.
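The triage described above can be sketched in code. The following is a minimal, illustrative Python sketch – the field names and sensitivity tiers are hypothetical examples, not a legal standard, and any real classification scheme would need to reflect the specific laws discussed below:

```python
# Sketch: triaging data fields by sensitivity before a cloud migration.
# Categories and field names are illustrative only.
SENSITIVE = {"health_record", "ssn", "account_number", "card_number"}
MODERATE = {"home_address", "date_of_birth"}

def tier(field: str) -> str:
    """Assign a handling tier to a data field based on its sensitivity."""
    if field in SENSITIVE:
        return "keep-internal"      # or use a specialized, compliant cloud offering
    if field in MODERATE:
        return "encrypt-first"      # protect before transmission to the provider
    return "general-cloud"          # e.g., business contact details

fields = ["ssn", "work_email", "home_address"]
plan = {f: tier(f) for f in fields}
# {'ssn': 'keep-internal', 'work_email': 'general-cloud', 'home_address': 'encrypt-first'}
```

The point of the sketch is the split itself: routing only less-sensitive data to a cost-efficient general-purpose cloud while keeping the highest-risk categories internal or under stronger protections.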
Identify applicable laws affecting your outsourcing of personal information
Cloud computing, by its nature, can implicate a variety of laws, including privacy laws, data security and breach notification laws, and laws limiting cross-border transfers of personal information.
(a) Privacy Laws
Companies operating in the United States will need to consider whether they are subject to sector-specific privacy laws or regulations, such as the Gramm-Leach-Bliley Act (GLBA) or the Health Insurance Portability and Accountability Act (HIPAA). Such laws impose detailed privacy and data security obligations, and may require more specialized cloud-based offerings.
Europe-based companies, as well as companies working with providers in or with infrastructure in Europe, will need to account for the broad-reaching requirements under local omnibus data protection laws that protect all personal information, even basic details like business contact information. These requirements can include notifying employees, customers, or other individuals about the outsourcing and processing of their data; obligations to consult with works councils before outsourcing employee data; and registering with local data protection authorities. Similar requirements arise under data protection laws of many other countries, including countries throughout Europe, Asia, the Middle East, and the Americas.
(b) Data Security Requirements
Even if a company is not subject to these types of privacy laws, it will want to ensure safeguards for personal information covered by data security and breach notification laws. In the United States, these laws tend to focus on personal information such as social security numbers, driver’s license numbers, and credit or debit card or financial account numbers. One of the key safeguards is encryption because many (although not all) of the U.S. state breach notification laws provide an exception for encrypted data.
In contrast, many other countries require protection of all personal information, and do not necessarily provide an exception for encrypted data. Consequently, companies operating outside of the United States may have broader-reaching obligations to protect all personal information. While data protection obligations vary significantly from law to law, both U.S. and international privacy laws commonly require the following types of safeguards:
i. Conducting appropriate due diligence on providers;
ii. Restricting access, use, and disclosure of personal information;
iii. Establishing technical, organizational, and administrative safeguards;
iv. Executing legally sufficient contracts with providers; and
v. Notifying affected individuals (and potentially regulators) of a security breach compromising personal information.
The topic of data security in the cloud has received significant industry attention. Industry groups, such as the Cloud Security Alliance, have suggested voluntary guidelines for improving data security in the cloud. For example, please refer to the CSA’s Security Guidance for Critical Areas of Focus in Cloud Computing, available at https://cloudsecurityalliance.org/download/security-guidance-for-critical-areas-of-focus-in-cloud-computing-v3/. In Europe, the Cloud Select Industry Group (CSIG), an industry group sponsored by the European Commission, recently issued the Cloud Service Level Agreement Standardization Guidelines, available at http://ec.europa.eu/digital-agenda/en/news/cloud-service-level-agreement-standardisation-guidelines. The Guidelines recommend contractual stipulations covering (1) business continuity, disaster recovery, and data loss prevention controls; (2) authentication/authorization controls, including access provision/revocation, and access storage protection; (3) encryption controls; (4) security incident management and reporting controls and metrics; (5) logging and monitoring parameters and log retention periods; (6) auditing and security certification; (7) vulnerability management metrics; and (8) security governance metrics. Providers also may choose to be certified under standards such as ISO 27001, although such certifications may not address all applicable legal requirements.
(c) Restrictions on Cross-Border Data Transfers
A number of countries—e.g., all the European Economic Area (EEA) Member States and certain neighboring countries (including Albania, the Channel Islands, Croatia, the Faroe Islands, the Isle of Man, Macedonia, Russia, and Switzerland), as well as countries in North Africa (e.g., Morocco), the Middle East (e.g., Israel), Latin America (e.g., Argentina and Uruguay), and Asia (e.g., South Korea)—restrict the transfer or sharing of personal information beyond their borders. These restrictions can present significant challenges for multinational companies seeking to move their data to the cloud. Recognizing these challenges, some providers are starting to offer geographic-specific clouds, in which the data are maintained within a given country or jurisdiction. Some U.S. providers have also certified to the U.S.-European Union Safe Harbor program, in order to accommodate EU-based customers. However, as the Safe Harbor only permits transfers from the EU to the United States, it is not a global solution. Accordingly, a company should assess carefully whether the options offered by a provider are sufficient to meet the company’s own legal obligations in the countries where it operates.
To complicate matters, international data protection authorities, particularly in the EEA, have expressed concerns about use of the cloud model for personal information. The Article 29 Working Party (WP29), the assembly of EEA data protection authorities, and many local EEA authorities have issued guidance about cloud computing, covering purpose and transfer restrictions, notification requirements, mandatory security requirements, and the content of the contract to be concluded with cloud providers. This guidance includes the WP29 Opinion 05/2012 on Cloud Computing, which is discussed further below. The draft Data Protection Regulation currently under discussion among the EEA Member States reflects this guidance and should be taken into account before engaging cloud providers.
Review contractual obligations affecting your outsourcing of personal information
If your company is seeking to outsource to a cloud provider applications that involve third-party data, such as personal information maintained on behalf of customers or business partners, it is important to consider any limitations imposed by contracts with those third parties. Such agreements might require third-party consent to the outsourcing or subcontracting of data processing activities, or may require your company to impose specific contractual obligations on the new provider or subcontractor.
Select an appropriate cloud computing solution
Cloud services tend to be offered on a take-it-or-leave-it basis, with little opportunity to negotiate additional contractual protections or customized terms of service. As a result, companies may find themselves unable to negotiate the types of privacy and data security protections that they typically include in contracts with other service providers. Companies will need to evaluate whether the contract fulfills their applicable legal and contractual obligations, as discussed above. Beyond that, companies will want to evaluate the practical level of risk to their data, and what steps they might take to reduce those risks.
(a) Public vs. Private Cloud
Broadly speaking, a private cloud maintains the data on equipment that is owned, leased, or otherwise controlled by the provider. Private cloud models can be compared with many other well-established forms of IT outsourcing and do not tend to raise the same level of concerns as a public cloud model.
A public cloud model disperses data more broadly across computers and networks of unrelated third parties, which might include business competitors or individual consumers. While offering maximum flexibility and expansion capabilities, the public cloud model raises heightened concerns about the inability to know who holds your company’s data, the lack of oversight over those parties, and the absence of standardized data security practices on the hosting equipment. Given these challenges, companies outsourcing personal information will want to understand whether the proposed service involves a private or public cloud, as well as evaluate what contractual commitments the provider is willing to make about data security.
(b) Securing Data Before Transmission to the Cloud
Companies also may be able to take measures themselves to protect personal information before it is transmitted to the cloud. Some provider agreements instruct or require customers to encrypt their data before uploading the data to the cloud, for example. If it is feasible to encrypt the data prior to transmission to the provider, this may provide substantial additional protections, as long as the encryption keys are not available to the provider.
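One practical way to protect identifiers before they leave the company's own systems is keyed pseudonymization: replacing direct identifiers with irreversible tokens computed under a secret key that stays on-premises. The sketch below uses only the Python standard library; the record fields are hypothetical, and this illustrates the general technique rather than any particular provider's requirements (full-field encryption with a vetted cryptographic library would be the stronger control for data that must be recoverable):

```python
import hmac
import hashlib
import secrets

# On-premises secret key; never shared with the cloud provider.
KEY = secrets.token_bytes(32)

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token (HMAC-SHA256)."""
    return hmac.new(KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "ssn": "123-45-6789", "region": "EU"}

# Tokenize the direct identifiers; non-identifying fields pass through unchanged.
safe_record = {
    **record,
    "name": pseudonymize(record["name"]),
    "ssn": pseudonymize(record["ssn"]),
}
# safe_record can now be uploaded; only the key holder can re-link tokens
# to individuals (by recomputing the HMAC over a candidate value).
```

Because the key never reaches the provider, a breach of the hosted data exposes only tokens, which is the same logic behind the encrypted-data exception in many U.S. state breach notification laws.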
It is also important to account for applicable security requirements. To this effect, several countries in Europe have very specific statutory requirements for security measures, and some regulators have issued detailed security standards for cloud computing providers. Pursuant to the WP29 Opinion 05/2012, all contracts should include security measures in accordance with EU data protection laws, including requirements for cloud providers on technical and organizational security measures, access controls, disclosure of data to third parties, cooperation with the cloud client, details on cross-border transfer of data, logging, and auditing of processing. The recent guidelines from the CSIG recommend the inclusion of the following provisions in processing agreements: (1) standards or certification mechanisms the cloud service provider complies with; (2) precise description of purposes of processing; (3) clear provisions regarding retention and erasure of data; (4) reference to instances of disclosure of personal data to law enforcement and notification to the customer of such disclosures; (5) a full list of subcontractors involved in the processing and inclusion of a right of the customer to object to changes to the list, with special attention to requirements for processing of special or sensitive data; (6) description of data breach policies implemented by the cloud service provider, including relevant documentation suitable to demonstrate compliance with legal requirements; (7) clear description of the geographical location where personal data is stored or processed, for purposes of implementing appropriate cross-border transfer mechanisms; and (8) the time period necessary for a cloud service provider to respond to access, rectification, erasure, blocking, or objection requests by data subjects.
(c) Contract Issues
In the majority of cloud computing services, the client is the data controller and the cloud provider is the data processor. However, in certain scenarios (in particular Platform as a Service (PaaS) and Software as a Service (SaaS) in public computing models), the client and the cloud provider may be joint controllers. Under EU guidance, the responsibilities of joint controllers must be very clearly set out in the contract to avoid any “dilution” of legal responsibility.
The contract with the cloud services provider needs to set out clearly the roles and responsibilities of the parties. Unlike many outsourcing arrangements, cloud service contracts usually do not distinguish between personal information and other types of data. These contracts may still include at least basic data protection concepts, even if they are not expressly identified as such. At a minimum, companies will want to look for provisions preventing the provider from using the information for its own purposes, restricting the provider from sharing the information except in narrowly specified cases, and confirming appropriate data security and breach notification measures. Various European data protection authorities have underscored that access to cloud data by public authorities must comply with national data protection law; they have also indicated that the contract should require notification of any such access requests (unless prohibited under criminal law) and should prohibit any non-mandatory sharing. Given the difficulty of negotiating special arrangements with cloud providers, it is important to select a cloud offering that is appropriately tailored to the nature of the data and the related legal obligations. It is likely that as cloud computing matures, more offerings tailored to specific business requirements, including compliance with privacy and similar laws, will be made available to companies.
While cloud computing can substantially improve the efficiency of IT solutions, particularly for small and medium-sized businesses, the specific offerings need to be examined closely. There is no “one-size-fits-all” solution to cloud computing, especially for companies operating in highly regulated sectors or internationally. By understanding their legal compliance obligations, companies can make informed decisions in selecting cloud computing services or suites of services that best meet their needs.
Editor’s Note: At first glance, drones may seem unrelated to the social media and Internet-related issues that we track on Socially Aware. Upon closer examination, however, many social media and Internet companies are exploring the commercial use of drones; for example, Amazon has publicly announced its intentions to incorporate drones into its package delivery system, and both Facebook and Google have expressed their desire to use drones to facilitate Internet connectivity. With that in mind, we present the following post regarding the upcoming Notice of Proposed Rulemaking related to commercial drone use in the United States.
With drone technology rapidly advancing and the FAA recently starting to open the door to commercial drone use, companies across industries should begin evaluating how drones can add value to their businesses, if they have not already done so.
Drones can benefit a wide range of industries and activities, including (to name only a few): industrial-scale agriculture; energy generation, transmission, production, and pipeline facilities; other conveyances and linear projects (such as water and flood control); transportation infrastructure, including railways, roads, ports, and waterways, and the rolling stock, vehicles, and vessels that use them; private and public emergency response (e.g., fire, flooding); insurance and accident inspection; and resource assessment, monitoring, and compliance. But without input from leaders in these industries, the potential of drones in these sectors may go unrealized for the foreseeable future. Industry leaders need to demand that the FAA’s much-anticipated Notice of Proposed Rulemaking (NPRM) for small UAS—now expected to be issued in the first half of December—be reasonable and practical for this wide range of industries and activities, and foster drone use and innovation while responsibly ensuring public safety.
Background of the Notice of Proposed Rulemaking for Small UAS
FAA rulemaking for drones was mandated by Congress as part of the FAA Modernization and Reform Act of 2012. The law requires the FAA to “provide for the safe integration of civil unmanned aircraft systems into the national airspace system as soon as practicable, but not later than September 30, 2015.”
The NPRM for small UAS (meaning UAS that weigh less than 55 pounds) was expected sooner—with Congress requiring the FAA to issue a final rule by August 2014. But the agency is notably behind this schedule. According to the latest publicly available information regarding the rulemaking, the NPRM for small UAS will issue in November 2014; we believe, however, that the FAA is more likely to issue it in mid-December. Notably, in fall 2013, the DOT set a May 2014 deadline for issuing the small UAS NPRM, only to extend it. That could happen again. The NPRM will initiate what is expected to be a decade of rulemaking to establish the regulatory regime for drones, large and small.
What Will the Proposed Regulations Say?
More important than the timing of the NPRM, however, is its expected content. This rulemaking is going to be comprehensive, designed to adopt specific rules for the operation of small UAS in the national airspace. The proposed regulations are likely to address classification of small UAS, certification and training of pilots and visual observers, registration, approval of operations, and operational limitations. Additionally, there will likely be provisions requiring the FAA to collect safety data from the user community.
The operational limitations and certification requirements that the FAA may impose can be gleaned from the exemptions the agency granted late last month for the commercial use of small UAS in film production. These exemptions—while allowing limited commercial use—remain highly restrictive. They permit the use of specific drone models that must fly at speeds below 50 knots and be equipped with advanced GPS systems. The flights must be conducted below 400 feet and within the visual line of sight of the pilot in command, who must possess at least a private pilot’s certificate. Flight plans of activities are required to be submitted to the local Flight Standards District Offices three days in advance of the operations, and the operators must obtain specific waivers from the relevant air traffic organizations.
If the FAA attempts to impose these types of restrictions on small UAS operations across the board, the utility of drone operations for many industries may be severely limited, if not extinguished. For example, using drones to inspect pipelines and power lines over long distances would prove impossible if the FAA imposes a visual line of sight requirement. Similarly, requiring a private pilot’s certificate for all operations may hinder the ability of farmers to use drones for precision agriculture, or realtors to use drones to obtain aerial footage of properties. Simply put, a one-size-fits-all approach will not work for the small UAS regulations. Given the FAA’s historical caution and agency culture, there is reason for concern.
What Can Be Done Now?
Companies and trade associations interested in obtaining the benefits of small UAS should start formulating plans now to help shape the NPRM and the regulations that will come out of it. They need not wait for the NPRM to issue.
The FAA can be petitioned in advance of the NPRM with broad requests to include or exclude certain provisions. Moreover, comments can be submitted on pending Section 333 exemption requests. These comments can be narrow and limited to why the specific exemption request should or should not be granted; or they can be broad, sweeping commentary on the current status of the FAA’s position on small UAS operations. Several well-known associations have already begun commenting on the exemption requests, including the Aerospace Industries Association, the National Agricultural Aviation Association, the Association for Unmanned Vehicle Systems International, and the Air Line Pilots Association International.
Industry leaders should also plan to comment on the NPRM once it is issued. This will require careful consideration of the current operating environment, as well as a keen eye toward potential future uses for UAS. Industry should seek to ensure that small UAS operations are not unduly restricted, while taking into account the risks associated with, and potential unintended consequences of, expanding UAS operations.
- Unfree speech? In the United States, the First Amendment would likely prevent the prosecution of someone who posted racist or anti-Semitic messages on a social media platform. But social media platforms operate worldwide, and many nations’ laws are much less permissive when it comes to speech of this type. Following a French case in which Twitter was forced to remove certain anti-Semitic content, many operators of social media platforms have updated their terms of service to comply with European laws regarding racist statements, Holocaust denial and other hate speech.
- By invitation only. Google is currently rolling out Inbox, a new email system with added features that may eventually replace Gmail for some users. Interestingly, Google is initially making Inbox available only by invitation. Each person with an Inbox account can invite up to three friends by clicking a “golden ticket” icon. It’s not clear why Google is doing this. According to an article on Techcrunch, Google may be trying to create a “sense of buzz” for the new app so that it can grow the user base inexpensively and virally.
- Not with a bang but a whimper. Last month, we wrote about the long-drawn-out trademark battle between Twitter and Twitpic in which Twitter said it would prevent Twitpic from gaining access to its API if Twitpic did not abandon its trademark. Twitpic decided to shut down instead, and we just heard the last bit of news on this dispute: Twitpic’s archives of photos will remain accessible and available for perusal, but no new additions will be allowed. And Twitpic is ending the availability of its mobile apps. So if you put a photo on Twitpic a year ago, you’ll still be able to find it, but that’s about all.
The pressure on ISPs to take responsibility for the sites accessible through their services has been growing in recent years (e.g., the requirement for certain ISPs to block filesharing sites). On October 17, 2014, the High Court of England and Wales took this one step further by granting a website-blocking order against certain ISPs in a case involving counterfeit goods. This case is notable for the fact that the infringement related to trademarks and not copyright. While English copyright law has a provision under which blocking injunctions may be sought, there is no statutory equivalent under trademark law, yet an injunction was still granted. Has the war on ISPs just gotten tougher?
The ISPs in question were Sky, BT, EE, TalkTalk and Virgin, and the matter centered around six websites that advertise and sell counterfeit goods (such as Cartier and Montblanc). The claimants (trademark owners in the Richemont/Cartier group) sought a blocking injunction from the ISPs for these six sites.
In reaching his decision to grant the blocking injunction, Mr. Justice Arnold focused on (a) whether the court had jurisdiction to grant the injunction; (b) whether such an injunction could be granted where no specific statutory legislation was in place relating to this remedy; and (c) whether the threshold conditions were met for granting such an injunction.
Having established that the court did indeed have jurisdiction, Mr. Justice Arnold noted that, although there is no specific legislation providing for injunctions in cases of trademark infringement, to grant such an injunction against a non-infringing party would nevertheless be consistent with EU law and UK policy. Further, Mr. Justice Arnold noted that “the 1994 [Trade Mark] Act both confers remedies against persons who are not necessarily infringers . . . and yet does not purport to contain a comprehensive code of the remedies available to a trade mark proprietor . . . More generally, there is nothing inconsistent between granting an injunction against intermediaries . . . and the provisions of the 1994 Act.” Thus, in this instance, the court held that an injunction could be granted even where no specific statutory legislation was in place.
Mr. Justice Arnold then focused on whether the threshold conditions for an injunction (in this case a website-blocking order) were met:
- Is the defendant an intermediary within the meaning of Article 11 of the Enforcement Directive (Directive 2004/48/EC)? The court determined that ISPs clearly fall into this category.
- Do the users and/or the operators of the website in question infringe the claimant’s trademarks? The court determined that each of the six websites did infringe because each provided goods bearing signs identical to the trademarks in dispute, and sold these goods in response to orders without consent of the claimants.
- Do users and/or the operators of the websites use the ISP’s services to infringe? Mr. Justice Arnold held that the answer to this question was yes. The ISPs have an essential role, as it is via their services that the advertisements and offers for sale are communicated to users in the UK. Even if UK consumers don’t purchase any goods, the first act of infringement is already complete based just on the advertisements.
- Do the ISPs have actual knowledge? Here again, the court held in the affirmative: If the operators of the websites in question use the ISPs’ services to infringe, then the ISPs have actual knowledge of the infringement, based on the fact that the claimants sent notices to the ISPs and the other evidence produced.
In considering whether the injunction would unduly interfere with the ISPs’ freedom to carry on business and Internet users’ freedom to receive information, Mr. Justice Arnold considered that no new technology would be required to block the sites in question and, although alternative measures such as takedown and de-indexing were available, these measures would not be as effective as an injunction and would not be less burdensome. However, he did adopt certain points made by the Open Rights Group, including requiring that additional information be provided to users when they attempt to access the blocked sites and limiting the order to an initial two-year period.
The Internet is increasingly used in the counterfeit goods trade. A study published in 2008 by the Organisation for Economic Co-operation and Development entitled The Economic Impact of Counterfeiting and Piracy estimated that the value of counterfeited and pirated goods moving through international trade alone in 2005 amounted to US$200 billion. In 2014 the European Commission published its Report on EU Customs Enforcement of Intellectual Property Rights: Results at the EU Border, which recorded that, in 2012, customs authorities at the external borders of the EU seized a total of over 39.9 million articles, representing a market value of almost €900 million, with the UK seizing more articles than any other Member State. It remains to be seen, however, whether this case, acknowledged by Mr. Justice Arnold as a test case, will open the floodgates for trademark owners affected by this widespread issue or, given that domain names can be easily purchased and new sites quickly set up, will have little real impact.
- Clearing the air. Aereo, the startup broadcasting service that lost big in the U.S. Supreme Court last June, just lost another, and possibly its last, court battle. A U.S. district judge in the Southern District of New York, responding to a motion filed by the major broadcasting networks, granted a preliminary injunction barring Aereo from retransmitting programs to its subscribers while the programs are still being broadcast. The ruling by U.S. District Judge Alison Nathan also rejected Aereo’s argument that it should be able to take advantage of the statutory compulsory license applicable to cable systems.
- Let’s be friends. Twitter’s relationship with app developers has been somewhat strained since the microblogging platform tightened its rules on outside apps a couple years back. That’s all changing now, as Twitter convened its first mobile app-developer conference in four years. The event in San Francisco attracted 1,000 developers. At the conference, Twitter introduced Fabric, a set of developer tools that are intended to make it easier for developers to build apps and make money from them. It looks as if Twitter is taking note of the similar steps that Google, Facebook and others are taking to attract app developers.
- Sharing the wealth. A New York-based tech startup called Tsu is trying to establish a whole new business model for a social network. Tsu, which has attracted a $7 million venture capital investment from Sancus Capital, will pay users based on the advertisements that their postings attract. Tsu keeps only 10 percent of the revenue that it receives from ads, sponsorships, and third-party applications. The other 90 percent is divided into two pools of money. Half of it goes to the content creator who posted the content that attracted the ad. The other half goes to the social network that recruited that content creator.
- Time change. Until now, Twitter has made a clear distinction between people you follow and people you don’t follow: You only saw tweets from those whom you followed. Now, the service, in what it calls a “timeline experiment,” will place tweets on your timeline from select users whom you are not following. Twitter is using an algorithm that determines which such tweets you will see based on the users you do follow, the popularity of the users you do not follow, and other factors. You won’t be able to opt out of this feature, and some frequent Twitter users have complained that it removes one of the factors that distinguish Twitter from other social media platforms.
- False flag. We wrote recently about the fake Facebook account that the Drug Enforcement Administration created to gather information for a narcotics investigation. On October 17, Facebook’s chief security officer wrote a letter to DEA Administrator Michele Leonhart calling the agency’s actions a “knowing and serious breach” of Facebook’s policies. Facebook asked the DEA to confirm that it had stopped engaging in this tactic. Facebook’s letter specifically questioned the DEA’s contention that the woman who was the subject of the fake account implicitly consented to use of her personal information for such purposes when she consented to a search of her phone.
- Square deal. Foursquare has been known mostly as a check-in app – a place where you post your location but not much more. The company’s new ad campaign hopes to change that image and to position Foursquare as a food-oriented rating and recommendation network similar to Yelp and Urbanspoon. “Introducing the all-new Foursquare, which learns what you like and leads you to places you’ll love,” is the new slogan on the Foursquare website. The ad campaign will roll out in mass transit in New York and Chicago and in bike-share locations in the Windy City.
From our sister blog, MoFo Tech:
Within a decade, analysts say, the “Internet of Things” will have transformed our lives. Billions of Internet-connected devices will monitor our homes, businesses, cars, and even our bodies, using the data to manage everything from appliances to heart monitors. Companies like Google—which recently paid $3.2 billion for smart-thermostat company Nest Labs—are already racing to build the IoT. But businesses face fundamental questions regarding the ownership of data, protecting customer privacy, liability when devices fail, and more.
The IoT will connect product developers and manufacturers in countless new ways, creating uncertainty about ownership of and rights to customer data. If a company contracts with a big data vendor to store and process consumer information, for instance, each party will need to know that its partner has the legal rights to collect or share data, says Alistair Maughan, a partner in Morrison & Foerster’s London office who is co-chair of the Technology Transactions Group. Then there is the question of who owns the data. “There is a whole supply chain the law is only beginning to grapple with,” Maughan says. “Manufacturers will need to understand the risks when there aren’t clear government standards.”
An area of major interest is how companies will protect customer privacy when so much data is in play. Companies need to make sure that what they say about their use of data collected from connected devices is accurate, complete, and up to date. “There is no one-size-fits-all approach to data security,” says Morrison & Foerster partner D. Reed Freeman Jr., who specializes in privacy matters. “The burden for a company is to consider what kind of data you have and how to protect against reasonably foreseeable, unauthorized access to personal information.”
Liability, of course, is a paramount concern when connected businesses adjust their use of data for new business or consumer products, says Stephanie Sharron, a Morrison & Foerster partner and a member of the firm’s Technology Transactions Group. Using vast sets of data to find patterns and targets will leave open all sorts of possibilities for technical and human mistakes. “There are questions about who should bear responsibility for inaccurate inferences or patterns that give rise to harm,” Sharron says. “Or who is responsible if a pattern comes from inaccurate data from a malfunctioning sensor.”
Then there is the question of who will manage and monitor the electrical systems needed to operate such vast networks—traditional public utility companies, new electricity market participants, or a combination. “Customers will want more choices to accommodate the new technologies and services they get to use” in the IoT, says Robert S. Fleishman, senior of counsel for Morrison & Foerster and an expert on energy regulation law. “Generally it will be up to state public utility commissions to decide who gets to provide the traffic control function and related activities for these things to operate within the system for distributing energy.” Some state utility commissions have already started to look at reforming their regulations and policies.
- Court spanks parents. In a landmark decision, the Georgia Court of Appeals ruled in Boston v. Athearn that parents can be held responsible for the social media activities of their kids. The case involved a seventh-grade boy who, with assistance from a friend, created a fake Facebook profile for a female classmate; then, pretending to be the classmate, the boy made a series of offensive and outrageous posts, some of which falsely claimed that the classmate suffered from mental illness and took illegal drugs. Following complaints from the victim’s parents, the school suspended the boy for several days, and his parents grounded him for a week; the fake profile, however, remained on Facebook for eleven months. The victim, through her parents, ultimately sued the boy and his parents. Reversing a lower court decision to the contrary, the Georgia Court of Appeals determined that a reasonable jury could find that the boy’s parents, after learning of their son’s behavior, failed to exercise due care from that point onward by allowing the fake profile to remain on Facebook, and that such negligence proximately caused some portion of the injury sustained by the girl. With the growth of cyberbullying, and in the wake of the Boston decision, will we see more suits seeking to hold parents liable for their kids’ online misconduct?
- The oversharing economy. An Uber driver in Albuquerque, New Mexico had his driver account for the company cancelled because of what Uber called “hateful statements regarding Uber through Social Media.” Turns out that he had posted a tweet linking to an article about robberies of Uber drivers, and had included the following observation: “Driving for Uber, not much safer than driving a taxi.” The driver, Christopher Ortiz, said he was just sharing a story that was going around. Uber quickly agreed that he had done no real harm and reinstated him with an apology, calling the original decision “an error.” After all, Ortiz had a high rating from customers (4.8 out of a possible 5), and Uber’s own position is that drivers associated with the company are independent contractors, not company employees.
- What’s not to like? Copyblogger, a highly successful social media and online marketing company, has decided to ditch its Facebook presence – even though its Facebook page had 38,000 fans. After a good deal of thought, the company concluded that “Copyblogger’s presence on Facebook has not been beneficial for the brand or its audience.” In a detailed essay, brand marketing consultant Erika Napoletano, whom Copyblogger had brought in to improve its Facebook presence, explained the perhaps surprising decision. One of the main reasons: the 38,000 fans didn’t really interact with the page. “The page had an overwhelming number of junk fans. These are accounts with little to no personal status update activity that just go around ‘Liking’ Facebook pages. They’re essentially accounts tied to ‘click farms’ – ones paid pennies for every Facebook page they Like,” Napoletano wrote. For this reason and several others, Copyblogger decided that, going forward, it would be “on the Web, just not on Facebook.”
- Doctor in the mouse. What if you could input a list of your current symptoms to Google, and quickly be connected with a doctor for a brief consultation? For a limited trial period, Google seems to have set up such a system for people who are looking for medical advice online. A lot of the details aren’t known yet, but a Google spokesperson told a Gizmodo reporter, “When you’re searching for basic health information — from conditions like insomnia or food poisoning — our goal is to provide you with the most helpful information available.” The feature is part of Google’s Helpouts video-chat service.
- Just shoot me. Data mining has reached the world of selfies. Social media users may not know this, but unless they have marked their photos posted on social media sites as private, the photos can be analyzed in bulk by third parties and used for marketing purposes. Privacy advocates say people should assume that their photos, unless clearly marked as private, are being scanned by market researchers. The rules and regulations applicable to this practice, including the privacy policies of the relevant social media platforms, are not always clear. So if you’ve posted a photo of yourself wearing a particular brand of ski gear on the mountain, some company may be making marketing decisions based on your photo and thousands of others. Soon, it may be targeting ads to you on that basis as well. For our own blog post on this subject, please click here.
- Mere threats? In 2010, Anthony Elonis, a man from western Pennsylvania, made a series of rants on Facebook in the form of rap lyrics that threatened to kill his wife, an FBI agent, and children in a kindergarten class. He claimed that he never intended to kill anyone and that he was merely venting. He also claimed that his comments were protected by the First Amendment. Elonis was nonetheless charged and convicted under a federal threat statute and sentenced to 44 months in prison. The U.S. Supreme Court will hear his appeal in December. The case raises important issues, including whether statements on social media should be treated differently from statements made on the phone or in person. Elonis wrote to the Court, for example, “Modern media allow personal reflections intended for a small audience (or no audience) to be viewed widely by people who are unfamiliar with the context in which the statements were made and thus who may interpret the statements much differently than the speakers intended.”