|Alex van der Wolk, Marijn Storm, and Ronan Tigner authored an article for the IAPP covering the Belgian Data Protection Authority’s challenge to the “tell-a-friend” function on social media websites that enables users to share content with their personal contacts.
The DPA’s decision to fine social media platform Twoo for privacy violations arising from its tell-a-friend feature is important “because it makes non-users’ consent necessary for a platform to send emails or other communications to non-users, which puts the platform in a near-impossible situation,” according to the authors.
“The challenge for the outcome of the DPA’s decision is that it renders most tell-a-friend systems impracticable,” the authors wrote. “One of the main purposes of a tell-a-friend system is to alert and invite people who are not on the platform. From the platform’s perspective, it will be nearly impossible to validly obtain their consent to receive communications.”
Read the full article.
In the wake of the COVID-19 pandemic, children are spending more of their lives in the digital realm, both for education and entertainment purposes—but that doesn’t mean the Federal Trade Commission (FTC) is cutting online operators slack for not complying with the Children’s Online Privacy Protection Act (COPPA). Last week, the FTC levied a $4 million penalty against HyperBeard, Inc., a popular mobile app developer, to settle allegations that HyperBeard integrated third-party ad networks into its child-directed apps in violation of COPPA. (Due to HyperBeard’s inability to pay the full amount, the $4 million penalty will be suspended upon payment of $150,000 by HyperBeard.)
The complaint is notable in that the FTC did not allege that HyperBeard itself collected any personal information from children—rather, the alleged violations centered on the company enabling third parties to collect personal information from children through its service. The fine serves as a warning to online operators that they are strictly responsible for their third-party integrations, even if they themselves do not collect personal information from children. Andrew Smith, Director of the FTC’s Bureau of Consumer Protection, emphasized, “If your app or website is directed to kids, you’ve got to make sure parents are in the loop before you collect children’s personal information. This includes allowing someone else, such as an ad network, to collect persistent identifiers, like advertising IDs or cookies, in order to serve behavioral advertising.” Continue Reading It’s 10 p.m. Do You Know What Your Third-Party Integrations Are Doing?
Despite the coronavirus pandemic, the process of implementing Brexit continues. One of the key Brexit issues for the tech sector is the extent to which the UK will either align or diverge its digital regulations with the EU.
Both the UK and EU have set out their intentions for their post-Brexit relationship in matters relating to technology, digital, and telecoms issues. There are signs that both the UK and EU will seek early alignment in key tech/digital compliance areas: is this a sign of things to come?
The UK government recently published its draft working text for a comprehensive free trade agreement between the UK and the EU (the “draft text”). As we explain below, the draft text covers (1) digital trade, recognising the importance of adopting frameworks that promote consumer confidence in digital services, (2) telecoms, and (3) audiovisual services. Intriguingly, the draft text aims to maintain regulatory alignment between the UK and the EU, insofar as possible, which could be a sign of the extent to which the UK plans to align with the EU (rather than forge its own path), both in these three areas and elsewhere, once the Brexit transition period is over. Continue Reading Digital Compliance in Europe: Regulatory Alignment Post-Brexit
Is scraping data from a publicly available website trade secret misappropriation? Based on a new opinion from the Eleventh Circuit, it might be.
In Compulife Software, Inc. v. Newman, Compulife Software, a life insurance quote database service, alleged that one of its competitors scraped millions of insurance quotes from its database and then sold the proprietary data itself. Compulife brought a number of claims against its competitors, including misappropriation of trade secrets under Florida’s version of the Uniform Trade Secrets Act (FUTSA) and under the federal Defend Trade Secrets Act (DTSA).
Following a bench trial, Magistrate Judge James Hopkins found that, while Compulife’s underlying database merits trade secret protection, the individual quotes generated through public Internet queries to the database do not. So using a bot to take those individual quotes one by one did not constitute a misappropriation of trade secrets. On appeal, however, the Eleventh Circuit disagreed, vacated, and remanded the case.
Facts of the Case
Compulife’s main product is its “Transformative Database,” which contains many different premium-rate tables that it receives from life insurance companies. While these rate tables are available to the public, Compulife often receives these tables before they are released for general use. In addition, Compulife applies a special formula to these rate tables to calculate its personalized life insurance quotes. Continue Reading Webscraping a Publicly Available Database May Constitute Trade Secret Misappropriation
As we have noted many times in prior articles, courts often refuse to enforce “browsewrap” agreements where terms are presented to users merely by including a link on a page or screen without requiring affirmative acceptance. Courts typically look more favorably on “clickwrap” agreements where users agree to be bound by, for example, checking a box or clicking an “I accept” button.
The problem is that many implementations of online contracts do not fit neatly into one category or the other. The result is that courts, seemingly unable to resist the siren song of the “-wrap” terminology, find themselves struggling to shoehorn real-life cases into the binary clickwrap/browsewrap rubric, and often resort to inventing new terminology such as the dreaded “hybridwrap.”
HealthplanCRM, LLC v. Avmed, Inc., a case out of the Western District of Pennsylvania, illustrates this phenomenon. Plaintiff Cavulus licensed certain CRM software to defendant AvMed. AvMed decided to replace Cavulus software with a different CRM product and engaged defendant NTT to assist AvMed in transitioning its data to the successor product. Cavulus alleged, among other things, that NTT misappropriated its trade secrets in the course of doing this work. Cavulus sought to compel NTT to arbitrate these claims based on an arbitration clause contained in an “End-User Agreement” that was referenced in a link on the log-in page of the Cavulus software. Continue Reading Court Discovers Rare and Elusive “Enforceable Browsewrap”
A new report from the U.S. Copyright Office suggests that Congress should fine-tune the Digital Millennium Copyright Act (DMCA) to, among other things, alter the takedown system that platforms must adhere to in order to be eligible for the safe harbor the DMCA affords to online platforms when third parties post infringing content. Read about the Copyright Office’s issues with the current takedown system.
Malwarebytes, an online filtering company, has asked the U.S. Supreme Court to grant certiorari in a case brought by one of Malwarebytes’ competitors, Enigma Software, alleging that—when Malwarebytes flagged one of Enigma’s most popular offerings as a potential threat—Malwarebytes, among other things, committed a deceptive business practice. The Ninth Circuit refused to dismiss the case, holding that Section 230(c)(2) of the Communications Decency Act did not insulate Malwarebytes from liability.
A new law in France would impose fines of up to $1.36 million on technology platforms that fail to take down terrorist and child pornography content within one hour of that content being flagged, or fail to remove hateful comments that concern topics including gender or disability within 24 hours of being flagged.
Members of the Florida Bar who deliver targeted ads through social media must comply with that state bar’s more restrictive direct solicitation rules rather than its general advertising rules.
A law firm fired one of its Dallas-based employees after learning he had posted to his personal social media account a rant about businesses requesting him to wear a mask to thwart the possibility of spreading or catching the COVID-19 virus. The rant included a threat to “show [his] Glock 21” handgun shooting range results to “the lame security guard outside of a ghetto store.”
The New Jersey Supreme Court’s Disciplinary Review Board (DRB) decided that John Robertelli, a Rivkin Radler lawyer, violated the state’s Rules of Professional Conduct when—in order to gather evidence while acting as counsel for defendants in a personal injury case—Robertelli surreptitiously accessed the private Facebook account of the plaintiff, whom Robertelli knew was represented by opposing counsel. The DRB also recommended that the New Jersey Supreme Court adopt a policy on using social media for discovery purposes. Read the guidelines the DRB suggested.
Big changes are afoot at Facebook, which has recently introduced Shops, allowing users to purchase products directly from businesses’ Facebook pages, and announced the addition of new features to Workplace, the company’s “enterprise-focused chat and video platform.”
China’s “internet police,” who coordinate online censorship, have become especially busy since the coronavirus outbreak.
Prompted by homicides precipitated by social media posts that one group of teenagers created to incite another, a Florida bill would allow law enforcement to charge juveniles with a misdemeanor for posting photos of themselves with firearms online.
In an effort to control the proliferation of “a broad range of online harms”—from cyberbullying to child exploitation—the UK government has chosen the communications watchdog Ofcom as its first pick for enforcing its plan requiring platforms to take “reasonable” measures to protect their users from those harms.
Two-and-a-half years after the EU initiated an optional code of conduct on online hate speech, the percentage of flagged content that gets reviewed within 24 hours by the platforms that have opted in has risen considerably.
Unlike the rest of the European Union, which has adopted an opt-in code of conduct to address the online hate-speech problem, Germany is proposing legislation that would impose hefty fines on social media platforms that fail to report illegal content such as posts that are related to terrorism or qualify as racial incitement. Read how much they risk having to pay.
As the demand for “aspirational” influencers gives way to a desire for “authenticity,” influencers who chronicled their COVID-19 coping efforts drew ire for privileged behaviors, including fleeing to vacation towns to sit out the quarantine, where they risked spreading the virus.
Mashable initially sought a license from the plaintiff, a professional photographer named Stephanie Sinclair, to display a photograph in connection with an article the company planned to post on its website, mashable.com. The plaintiff refused Mashable’s offer, but Mashable nevertheless embedded the photograph on its website through the use of Instagram’s API.
Often lauded as the most important law for online speech, Section 230 of the Communications Decency Act (CDA) does not just protect popular websites like Facebook, YouTube and Google from defamation and other claims based on third-party content. It is also critically important to spyware and malware protection services that offer online filtration tools.
Section 230(c)(2) grants broad immunity to any interactive computer service that blocks content it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Under a plain reading of the statute, Section 230(c)(2) clearly offers broad protection. With respect to what the phrase “otherwise objectionable” was intended to capture, however, the protections are less clear. Continue Reading Computer Service Providers Face Implied Limits on CDA Immunity
A federal district court in Illinois allowed claims for vicarious and direct copyright infringement to proceed against an employee of the Chicago Cubs Baseball Club for retweeting a third-party tweet containing the plaintiff’s copyrighted material. Read the opinion.
In an opinion important to platforms that monetize user-generated content, the U.S. Court of Appeals for the Ninth Circuit held the safe harbor provisions in §512(c) of the Digital Millennium Copyright Act (DMCA) did not exempt Zazzle—a company that creates and sells items such as t-shirts and mugs bearing images uploaded by users—from copyright liability for willfully infringing 35 separate copyrighted works. In one of this blog’s most popular posts ever, I explain how platforms that commercialize their user-generated content can reduce their risk.
The advertising industry and the U.S. Chamber of Commerce are encouraging California Attorney General Xavier Becerra to postpone the anticipated July enforcement of the California Consumer Privacy Act, citing the law’s complexity.
The estate of the late musician Prince successfully brought a copyright infringement claim against an individual who unofficially recorded and uploaded videos containing performances of copyrighted songs. According to a federal district court in Massachusetts, the videos do not qualify for the fair use exception to copyright infringement because the uploader/defendant “did not imbue Prince’s musical compositions with new meaning or add any of his own expression to the underlying works.” Read more of the court’s reasoning.
In a controversy as old as the Internet itself, Germans are debating whether social media users should be permitted to remain anonymous.
Actor Steven Seagal will pay more than $300,000 to resolve U.S. Securities and Exchange Commission claims that he failed to tell Twitter and Facebook followers he was being paid to promote an initial coin offering.
Twitter has a special process for reviewing tweets by public figures—including President Trump—that have been flagged for potentially violating the platform’s rules. This profile of Twitter’s top lawyer, Vijaya Gadde, describes it.