• Blind spots. Self-driving cars are an excellent example of innovation, and the ones with Google technology have already traveled more than 700,000 miles. But what if a self-driving car doesn’t “see” a new traffic light or a previously nonexistent traffic sign? This could result in traffic citations, or worse. But Google says it’s taking steps towards eliminating this type of problem and that the future of self-driving cars is essentially unlimited.
  • Getting personal. As the name suggests, Michigan’s Video Rental Privacy Act limits the ability of companies to disclose information regarding customers’ video rental activities. But does the law cover magazines as well as videos? In a case filed by a consumer who alleged that a magazine company had improperly disclosed her personal information, along with information about the magazines to which she subscribed, the U.S. District Court for the Eastern District of Michigan recently held that the law does in fact apply to magazines. The court noted that the statute is directed to companies “engaged in the business of selling at retail, renting, or lending books or other written materials, sound recordings, or video recordings,” and that magazines constitute “other written materials.”
  • Geotargeting crime. In a new effort to use technology to foil credit-card fraud, a company called BillGuard is testing a system that would monitor the precise whereabouts of mobile devices to detect possible payment issues. The tech firm is tracking mobile-phone locations in an attempt to stay one step ahead of fraudsters. Because smartphones are almost always near their owners, the technology would register and flag those occasions when a phone is not near the owner’s credit card. The technology would only be used with the consumer’s consent.
  • Going mainstream. For the first time, both Twitter and Facebook are seeing significant growth in online advertising placed by major companies for brands such as Heineken, Tide, McDonald’s, and Charmin. Major consumer products companies have long struggled with the question of how to reach consumers on their mobile devices, and right now, this appears to be how they’re doing it.
  • Mismatch? The popular dating site OKCupid conducted some experiments with its user base — changing the type of information available to them about prospective matches and even falsifying it to an extent — in order to see what effect it had on conversations among daters and on how relationships developed. Some observers are critical of this type of experiment on ethical grounds.
  • Loose tweets sink ships? Law enforcement agencies in the Pacific Northwest have launched a “Tweet Smart” program that is intended to discourage people from using social media during emergencies to describe the movements and activities of law enforcement personnel. After a few recent shooting incidents, police are concerned that a tweet, for example, might tip off a perpetrator to police tactics.
  • Judges’ perspective. A recent survey of federal judges found that the vast majority of them do not believe that jurors’ use of social media has posed a problem in their courtrooms. Only 33 of 494 judges responding reported any detectable instances of jurors using social media, and the vast majority of those instances were harmless.
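
The geotargeting approach in the BillGuard item above boils down to a proximity check between two sets of coordinates: the phone's last known location and the point of sale. BillGuard has not published its actual system, so the following is purely an illustrative sketch; the `flag_transaction` helper and the 50 km threshold are invented for the example.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def flag_transaction(phone_loc, sale_loc, threshold_km=50.0):
    """Flag a card transaction when the cardholder's phone is far from the point of sale."""
    return haversine_km(*phone_loc, *sale_loc) > threshold_km

# A phone in San Francisco while the card is swiped in New York gets flagged.
sf = (37.7749, -122.4194)
nyc = (40.7128, -74.0060)
print(flag_transaction(sf, nyc))  # True
print(flag_transaction(sf, sf))   # False
```

In practice such a check would be one signal among many, used only with the consumer's consent as the article notes.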

Google Glass (“Glass”) is the highest-profile of the new wearable technologies that commentators predict will transform how we live and work.

Until now, the Android-powered glasses were available only in the U.S. As of this week, however, Glass has launched in the UK. Now, if you are at least 18 years old and have a UK credit card, a UK address and a spare £1,000, you can purchase your own Glass and see what the fuss is all about.

Google has stated that it selected the UK for its second market because “[the UK] has a history of embracing technology, design and fashion and … there’s a resurgence happening in technology in the UK”. But perhaps it is also because the UK’s data protection regulator, the Information Commissioner’s Office (ICO), has a reputation for being one of the more pragmatic privacy regulators in Europe. For all its exciting technological benefits, Glass raises some thorny legal issues, particularly in relation to privacy. In this alert, we address some of those key issues.


As many readers will already be aware, Glass is a form of wearable technology that gives its users hands-free access to a variety of smartphone features by attaching a highly compact head-mounted display system to a pair of specially designed eyeglass frames. The display system connects to a smartphone via Bluetooth. Glass can run specialised Android apps known as “Glassware”. In its current form, Glass can pull information from the web, take photographs, record videos, make and receive phone calls (via the Bluetooth smartphone connection), send messages via email or SMS, notify its user about messages and upcoming events, and provide navigation directions via GPS. Although Glass is still in the testing stage and boasts only a modest set of features, the prototype device has already caused quite a stir. In particular, it has triggered some significant privacy concerns.


In terms of privacy, Glass throws up a variety of issues. Due to its functionality, Glass is likely to process two types of data relating to individuals: (i) personal data and metadata relating to the wearer of the Glass (“Glass User”) and (ii) personal data and metadata relating to any member of the general public who may be photographed or recorded by the Glass User (“Public”). In June 2013, a group of regulators and the Article 29 Working Party wrote to Google, inviting the company to enter into a dialogue over the privacy issues relating to Glass. The letter pointed out that the authorities have long emphasised the importance of privacy by design, but added that most of the authorities had not been approached by Google to discuss privacy issues in detail. In its response, Google stated that protecting the security and privacy of users is one of its top priorities. Google also identified various steps that it has taken to address privacy concerns, including a ban on facial-recognition Glassware.


As with any smartphone, Google will collect personal data and other metadata relating to each Glass User. Google will need to comply with its obligations under the UK’s Data Protection Act 1998 (DPA). A key element of such compliance will be putting in place an appropriate privacy policy for Glass Users. However, to date, Google has encountered some difficulties in this regard.

Indeed, in July 2013, the ICO wrote to Google confirming that Google’s updated privacy policy raised serious questions about its compliance with the DPA. In particular, the ICO believed that the updated policy did not provide sufficient information to enable UK users of Google’s services to understand how their data will be used across all of the company’s products. It stated that Google must amend its privacy policy, and failure to take necessary action would leave the company open to the possibility of formal enforcement action.

Google has argued consistently that its privacy policy complies with EU data protection law. To date, no formal action has been taken by the UK, although Google has faced action elsewhere in Europe (e.g. in Spain).


From our sister blog, MoFo Tech:

Widely applicable rules regarding consumer privacy disclosures in our increasingly mobile world are only now emerging. Government agencies, individual states, and professional associations are all weighing in on how mobile app developers should disclose how they collect, store, use, and protect the wide range of highly personal data being collected every day.

The Application Privacy, Protection, and Security Act of 2013, better known as the APPS Act, is intended to bring conformity to the unwieldy world of mobile app development. With a divided Congress struggling to pass even mandatory legislation, though, passage of any type of discretionary legislation this year seems unlikely, says D. Reed Freeman Jr., a partner with Morrison & Foerster in Washington, D.C. In the meantime, Freeman says, developers should focus on the Federal Trade Commission, “because even without congressional action, it has broad jurisdiction, and it has already brought cases and issued guidance on mobile privacy and data security.”

Charged with the intentionally broad mandate of guarding consumers from “deceptive” and “unfair” business practices, the FTC has been proactively applying its consumer protection laws across nearly all media, including mobile technology. A recent FTC policy document is especially revealing because it describes how the FTC expects disclosures of material facts to be made on mobile devices, “and privacy disclosures can certainly be material,” Freeman says.

So it’s up to the mobile app company to think carefully about the ways its program could surprise a reasonable user and to disclose those surprises appropriately. Freeman offers this rule of thumb: “Would a reasonable consumer, under the circumstances, understand what information is being collected about her while she’s on a mobile device and what it is being used for?” If not, companies need to disclose those facts clearly and not bury them in EULAs or terms of use.

California’s Online Privacy Protection Act, passed in 2003, has taken consumer privacy one step further than the FTC has. It requires companies that operate commercial websites or online services and that collect personal information of any kind—including usernames and passwords—to prominently post a privacy policy somewhere on their homepage, says Andrew Serwin, a partner in Morrison & Foerster’s San Diego office.

And while California’s jurisdiction ends at the state line, its reach is often national, Serwin adds. “Companies with customers in all 50 states have to ask themselves whether they want to develop state-specific programs or apply standards across the board,” he says. Since the mobile world doesn’t recognize geographic boundaries, Serwin recommends that developers work toward the highest standards and beyond. “Privacy isn’t just a legal issue. It’s a brand issue,” he says.

Apart from knowing the law, businesses need to consider their own reputations and their customer relationships when collecting, using, and protecting personal information, Serwin says. For example, how could losing users’ passwords tarnish the company’s image in the market? “Current law doesn’t specifically cover that possibility, but,” he notes, “it may be in the company’s best interest to address these types of issues.”

Sign offered by Stop the Cyborgs to indicate a ‘no-Glass’ zone. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

What Is Google Glass?

As most Socially Aware readers know, Google Glass (“Glass”) is a form of wearable technology that gives its users hands-free access to a variety of smartphone features by attaching a highly compact head-mounted display system to a pair of specially designed eyeglass frames. The display system connects to a smartphone via Bluetooth. In its current form, Glass can pull information from the web, take photographs, record videos, send messages via email or SMS, notify its user about messages and upcoming events and provide navigation directions via GPS. An embellished demonstration of Glass’s features is available at Google’s Glass web page.

Although Glass is in the testing stage as of the time of this writing and boasts only a modest set of features, the device has caused quite a stir in both the mainstream and social media spheres. Wearable technology, however, has been around for quite a while (for an extensive history of wearable computers, pay a visit to Paul Miller’s article on The Verge), and many of the concerns raised by Google Glass, although serious, are not entirely new. This post will explore some of the more common concerns raised about Glass in the context of evolving legal and social norms—all premised on the assumption that Glass will eventually become a widely used, mainstream product.

Glass and Privacy

When the original Kodak cameras were released in the late 19th century, they caused a huge uproar among both lawmakers and consumers for their ability to do what they were designed to do: that is, take pictures. This led to widespread bans on cameras at beaches, the Washington Monument and other locations. As Samuel Warren and future Supreme Court Justice Louis Brandeis aptly noted in an 1890 Harvard Law Review article:

Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” For years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons[.]

As Kodak cameras became more mainstream, society adapted by creating new laws, one of the most important of which was the development of the “reasonable expectation of privacy” doctrine, which purports to protect individuals from being photographed in certain places recognized as “zones of privacy”—a designation that does not typically extend to public places.

Needless to say, Glass is made of more advanced technology than the original Kodak cameras, and this new technology raises a whole new set of potential concerns. In particular, (1) taking a photograph with a traditional camera is typically more noticeable to subjects and onlookers alike than taking a photograph with a “wearable” device like Glass, and (2) the Bluetooth connection between Glass and its user’s smartphone allows the possibility of real-time facial recognition.

In part due to these concerns, on May 16, 2013, a bipartisan caucus of congressmen sent Google an inquiry regarding a variety of privacy matters. In response to that inquiry, Google announced on June 3, 2013, that it would not allow applications with facial recognition on Google Glass. (Naturally, hackers have thumbed their noses at Google’s announcement, reportedly building their own unauthorized software with facial recognition features.)

Although banning facial recognition apps may address the second concern noted above, the first concern still stands because people being photographed by a Glass wearer, whether in a “zone of privacy” or in a public place in which there is no reasonable expectation of privacy, simply might not even know it. A handful of establishments have responded by preemptively banning the device from their premises. Seattle’s 5 Point Café was, perhaps, the first to issue such a ban, announcing via Facebook back in May 2013, “For the record, The 5 Point is the first Seattle business to ban in advance Google Glasses. And a** kickings will be encouraged for violators.” Colorado’s Press Play Bar followed with its own ban in July 2013. And Guantanamo has banned Google Glass.

Only time will tell whether one-off bans on Glass and similar devices are akin to the overreactions—at least we now perceive them to be overreactions—that inspired bans on Kodak cameras in the late 19th century. And, perhaps preemptively, Glass already limits a user’s ability to take photos to cases in which the user either speaks an audible command or makes a visible swipe on the device’s tactile sensor, and limits video recordings to 10 seconds in length without a user holding onto the tactile sensor. Of course, developers have already created an app that lets users take pictures by simply winking. Glass’s entrance into the mainstream is poised to cause further disruption.

Breaking the Casino

In the 1960s, a group of UCLA and MIT graduate students created a “cigarette pack sized analog device” that increased the expected gain of playing roulette by 44%. The theory behind the device was to feed data concerning the motion of the roulette wheel and ball to a primitive computer that would predict the likely location of the ball’s drop. The premise of such a device was featured more recently in an episode of the popular television show CSI, in which (again) a pair of students created a device that would send video data from the casino back to an off-site computer run by one of the students, who would then relay the predictions back to the player on-site.

The possibility of improving gamblers’ odds over the house’s odds goes further than just roulette. For instance, with the assistance of a computer, even average blackjack players could accomplish feats reserved for the most skilled card counters; this is why Nevada gaming regulators issued an alert to casino operators in February 2009, warning them about the use of a then newly released simple card counter app. Wearable computers at the poker table can even be used to transmit hand information from one player to another, enabling collusion.

Perhaps it’s only natural that casino operators are fearful of Google Glass. The Associated Press reported on June 12, 2013, that the Nevada and New Jersey Gaming Commissions have urged casinos to ban gamblers from wearing Google Glass on their premises. Some casino operators, such as Caesars Palace, have already forbidden their customers from wearing Glass while in their casinos, and Delaware has banned Glass from its own casinos. None of this is surprising, given casinos’ long history of taking strong measures to prevent players from gaining an edge over the house. And given the level of deference that state gaming commissions afford casinos in limiting the use of electronics on their premises, Glass is likely to be unwelcome at gambling houses for the foreseeable future.

Safety While Driving

In February 2013, Sergey Brin, Google co-founder and Glass developer, commented during a segment of TED Talks that one of Project Glass’s goals was to change how people interact with their smartphones. According to Brin, the goal is to “free your hands” and “free your eyes” by limiting the need to look down at a phone screen. One Glass feature that best embodies this goal is turn-by-turn navigation.

In its current iteration, Glass’s turn-by-turn navigation is relatively simple, capable only of providing pop-up notifications of upcoming turns. In the future, Glass may be capable of layering information over a user’s peripheral vision, and even augmenting that information with information from the web. Yet, even in light of the device’s relatively simple set of current navigational features, the possibility of using Glass while driving has stirred plenty of debate.

Daniel Simons and Christopher Chabris, psychology professors at the University of Illinois and Union College, respectively, explored the potential safety concerns arising from using Glass while driving in a May 24, 2013 New York Times op-ed piece. Simons and Chabris argue that people are fundamentally incapable of looking away from where they’re headed for more than a couple of seconds without losing their bearings. Drivers “intuitively grasp” this limitation by only glancing at the car radio or speedometer briefly before returning their eyes to the road. (Meanwhile, other distractions have been shown to be far more dangerous; the op-ed cites a study that demonstrated that drivers who texted with their mobile devices looked away from the road for as long as 4.6 seconds during a given six-second period, more than sufficient time to cause a major accident.)

Glass tries to circumvent this limitation by only displaying turn-by-turn information at relevant times, that is, just before turns that are coming up, as demonstrated in this video. Still, Simons and Chabris believe that it will be a challenge to find the right balance of information that can be safely displayed directly in drivers’ fields of vision.

Safety concerns like these are the motivation behind West Virginia State Rep. Howell’s proposed legislation that would amend driving laws to prohibit “using a wearable computer with head mounted display” while driving. Delaware’s lawmakers have introduced similar legislation. And according to some reports, the UK Department for Transport is considering its own ban on using Google Glass while driving.

It is unclear whether a blanket legal ban on head-mounted displays is the best approach to maximize safety. Arguably, Glass may strike the right balance by providing drivers with the same information they would typically retrieve by glancing down at a GPS system—without making drivers look away from the road. Head-mounted systems like Glass could also be used as a sort of “warning system” that alerts drivers that they are, say, approaching the speed limit, again without their having to look down at a separate speedometer. On the other hand, any guidelines for when and how head-mounted displays like Glass can be used on the road would probably need to be both granular and flexible to accommodate what will undoubtedly be a rapidly evolving technology.

The Future

Can you envision the first time someone uses Glass to surreptitiously record a feature film at the local multiplex? According to Fast Company, a VP at the National Association of Theatre Owners has imagined just such a situation and says that his group anticipates working with its hundreds of members to develop Glass usage policies for their theaters. Can you picture the first time someone uses Glass to record a concert whose producer or venue enforces a strict “no videotaping” policy, or to secretly photograph sensitive documents containing trade secrets? Or the first time someone is wearing Glass while committing a crime? How will workplaces handle Glass, whether worn by visitors or used by their own employees on or off the job?

Countless situations are going to be influenced by Google Glass and similar wearable technologies. And given the range of issues that have already arisen in beta, these technologies’ impact on laws and social norms is bound to be more than just a matter of where you can or can’t wear your Glass.

Article courtesy of Morrison & Foerster’s Mobile Payments Practice

On May 30, 2013, the California Department of Financial Institutions (CADFI) issued a cease and desist letter to Bitcoin Foundation, a not-for-profit organization established to standardize, protect and promote the use and adoption of Bitcoin. CADFI stated in its letter that Bitcoin Foundation “may be engaged in the business of money transmission without having obtained the license or proper authorization required by” California’s Money Transmission Act. CADFI’s issuance of the letter, the Financial Crimes Enforcement Network’s (FinCEN) recent guidance regarding virtual currencies and the subsequent asset seizures of prominent Bitcoin exchanges all reflect increased scrutiny of the use of virtual currencies.


CADFI’s letter notes that Bitcoin Foundation may be in violation of California’s money transmitter licensing law (Cal. Fin. Code § 2030), as well as federal statutes that impose penalties for the failure to have a required state money transmission license and the failure to register as a money transmission business. (18 U.S.C. § 1960, 31 U.S.C. § 5330.)

Section 2030 of the California Financial Code prohibits persons from engaging in “the business of money transmission in California without first obtaining a license from the Commissioner of Financial Institutions.” A person in violation of this statute may be subject to civil money penalties under § 2151, and possibly criminal prosecution under § 2152. The California Attorney General may also sue under §§ 17200, 17205 and 17206 of the California Business and Professions Code.

In addition, CADFI noted that under 18 U.S.C. § 1960, it is a felony to own, control or conduct the business of money transmission without the appropriate state license, or without registering with FinCEN. Violations of this Section are punishable by “up to 5 years in prison and a $250,000 fine.” CADFI stated that this same activity without a license is also a “felony under California law, pursuant to [California] Financial Code § 2152(b).”

CADFI requested that Bitcoin Foundation “advise [it] in writing within [20] days” of the date of the letter regarding the “steps [] taken to comply with [CADFI’s] order.” In addition, CADFI noted that “[n]othing in [its] letter is intended to affect any legal remedies, criminal or civil, which the State of California or the Commissioner might pursue for past or future violation of [the] laws [cited].”


Introduced in 2009, Bitcoin is a virtual currency that is controlled by a software algorithm (“Bitcoin Algorithm”) running on the Internet. Both the creation and transfer of Bitcoins are performed by this algorithm. Bitcoins are created by a procedure called “mining,” in which users provide their computer resources to help the Bitcoin Algorithm process Bitcoin transactions. In exchange, users are compensated with Bitcoins. The Bitcoin Algorithm restricts the total number of Bitcoins that can ever be “mined” to 21 million. Currently, approximately 11 million mined Bitcoins are in circulation.
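
The “mining” procedure described above can be illustrated with a toy proof-of-work loop: miners race to find a nonce whose hash, combined with the pending transaction data, meets a difficulty target. This sketch is a deliberate simplification for illustration only; real Bitcoin mining applies double SHA-256 to a structured block header and compares the result against a numeric target, but the brute-force nonce search is the same idea.

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Toy proof of work: find a nonce such that SHA-256(block_data + nonce)
    begins with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Higher difficulty means exponentially more hashing work, which is how
# the network throttles how quickly new Bitcoins can be created.
nonce = mine(b"example transactions", 4)
digest = hashlib.sha256(b"example transactions" + str(nonce).encode()).hexdigest()
print(digest[:4])  # "0000"
```

The 21 million cap is enforced separately, by halving the per-block mining reward on a fixed schedule rather than by the hash search itself.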

Like other forms of currency, Bitcoins can be exchanged for goods and services. However, the value of a Bitcoin (how many goods or services can be exchanged for a Bitcoin) is volatile. This volatility is attributable to the fact that, unlike currencies like the U.S. dollar or the Euro, which are issued by their respective governing bodies, Bitcoins are not supported by any sovereign entity. As a result, the value of a Bitcoin is driven and determined by public perception.

Since Bitcoin’s inception, the virtual currency has been gaining popularity and acceptance. Currently, several Bitcoin exchanges and payment websites allow users to exchange (buy and sell) Bitcoins for popular currencies, such as the U.S. dollar. In addition, some merchants, both online and in person, are beginning to accept Bitcoins as an alternative to traditional currencies for payment. The value of a Bitcoin has ranged from about $0.0025 shortly after its inception to a high of about $266 on April 10, 2013. The current value is approximately $78 per Bitcoin.

Why Do Individuals Use Bitcoin?

A major reason why individuals may prefer to use Bitcoin transfers instead of traditional electronic transfers is for anonymity and privacy. Bitcoins are transferred from peer to peer without the need for an intermediary financial institution to process payments. The Bitcoin-transmitting party merely needs to know the receiving party’s Bitcoin address to execute a transfer. In contrast, when a traditional payment card is used to make a transaction, there typically are records identifying the transferor, transferee and the amount transferred. Because Bitcoin transfers do not rely on established payment systems to process transactions, Bitcoin transfers allow the transferor and transferee to remain anonymous.

Concerns Driving Regulatory Interest

In issuing the cease and desist letter to Bitcoin Foundation, CADFI was likely concerned about the same aspects of Bitcoin that attract its users—anonymity and privacy. Specifically, the concern is that the anonymous aspects of Bitcoin have the potential to facilitate criminal activity, money laundering and illegal transactions. By classifying Bitcoin Foundation as a money transmitter and, by extension, classifying the transfer of Bitcoins as money transmission, CADFI can subject Bitcoin Foundation to the same requirements as traditional money transmitters. The resulting mandatory record keeping could significantly diminish the anonymous aspects of Bitcoin, and in turn diminish the attractiveness of using Bitcoin for the transfer of funds related to criminal activity.


On March 18, 2013, FinCEN issued interpretive guidance, entitled “Application of FinCEN’s Regulations to Persons Administering, Exchanging, or Using Virtual Currencies.” The guidance is intended by FinCEN to clarify the applicability of the Bank Secrecy Act (BSA) and its implementing regulations to persons creating, obtaining, distributing, exchanging, accepting or transmitting virtual currencies. The guidance addresses “convertible” virtual currency, which is described as a type of virtual currency that either has an equivalent value in real currency, or acts as a substitute for real currency.

The guidance defines “users,” “administrators” and “exchangers” of convertible virtual currency and explains which of these participants in a virtual currency environment is a Money Services Business (MSB) for purposes of the BSA and FinCEN’s implementing regulations. Under the guidance, an administrator or exchanger that accepts and transmits a convertible virtual currency, or that buys or sells convertible virtual currency for any reason, is a money transmitter under FinCEN’s regulations (unless a limitation or exemption from the money transmitter definition applies). The guidance also explains that accepting and transmitting anything of value that substitutes for currency makes a person a money transmitter under the BSA’s implementing regulations.


Since FinCEN’s virtual currency guidance was issued, U.S. regulators have seized assets of several virtual currency exchanges. Notably, on May 14, 2013, one of the world’s largest Bitcoin exchanges, Mt. Gox, had its U.S.-based assets seized by U.S. authorities.

On April 12, 2013, the UK’s Office of Fair Trading (OFT), the UK regulator for consumer affairs and competition, announced that it was launching an investigation into children’s web- and app-based games. In particular, the OFT is looking into whether such games comply with the Consumer Protection from Unfair Trading Regulations 2008 (“Regulations”) and, specifically, whether they are misleading or aggressive (for example, by directly encouraging children to buy something or to pester their parents or other adults to buy something on their behalf). The investigation is expected to take six months to complete, and will take into account views from mobile app platform operators and other businesses operating in this market, together with the views of parents and consumer groups.

The investigation was launched in response to reports of children racking up substantial bills on so-called “free” online and app-based games. For example, in March 2013, it was reported that a 5-year-old boy amassed a bill of £1,700 in just 15 minutes via add-ons while playing the “free” game “Zombies vs Ninja.” There are thousands of games like this that are marketed as being free to download but, once the user starts playing, present advertising encouraging the user to pay to get access to extra levels or to receive in-game extras such as virtual coins, gems or other tokens.

The OFT estimates that, as of April 9, 2013, 80 of the 100 top-grossing Android apps were free to download, yet raised revenue through these kinds of in-app purchases. Although platforms will often enable password protection to restrict in-app purchases, such measures will not prevent purchases by children who know their parents’ password or by parents who, at the request of their children, insert their password without appreciation of the risks.

The Regulations, which implement the Unfair Commercial Practices Directive 2005/29/EC, state that unfair commercial practices are prohibited. A commercial practice is deemed to be unfair if it contravenes the requirements of professional diligence and materially distorts, or is likely to materially distort, the economic behavior of the consumer with regard to the product. Aggressive commercial practices are those that impair the average consumer’s freedom of choice or conduct through the use of harassment, coercion or undue influence, and that thereby cause, or are likely to cause, the consumer “to take a transactional decision he would not have taken otherwise.” Undue influence includes exploiting a position of power in relation to the consumer.

The Regulations clearly provide that it is unfair to include in an advertisement a direct exhortation to children to buy advertised products, or to persuade their parents or other adults to buy advertised products for them. Breach of the Regulations can lead to criminal penalties such as a fine or imprisonment for up to two years.

The OFT has made it clear that no company that is included in its investigation should be assumed to have broken the Regulations. In addition, the OFT has stated that it does not wish to ban in-app purchases, but rather to determine whether they are compliant with the Regulations in order to ensure that children are protected. Nevertheless, the OFT has indicated that it will take enforcement action against offending companies if necessary. The outcome of the OFT’s investigation is expected to be published in October 2013.

In the meantime, a number of guides have appeared providing advice to parents on how to block in-app purchases (including guidance published by the UK communications regulator, Ofcom), and at least one major app distributor has added in-app purchase warnings to its app store listings. Such warnings could point the way to future settings that let users filter out apps with in-app purchases before downloading them.

It will be interesting to see what approach the OFT decides to take as a result of its investigation, and where it believes responsibility should lie. Should parents be expected to take more control over their children’s gaming activities, or should providers be required to do more, e.g., by giving users more prominent warnings about the nature of the “freemium” apps in their stores?

Lastly, note that this investigation may have consequences for game providers operating elsewhere in Europe. Because the Regulations are based on EU law, other EU regulators will be watching the progress of the OFT investigation closely to consider whether they, too, need to scrutinize games providers’ compliance with the equivalent laws in their territories.