The Law and Business of Social Media
February 02, 2024 - Artificial Intelligence, Section 230 Safe Harbor, Copyright, Online Promotions, Advertising

Social Links: AI Continues to Make the Headlines in 2024

Welcome to 2024 from Socially Aware! We’ve been tracking developments in the law and business of social media and related topics from the end of last year into the beginning of the new one.

Here are some of the trending developments that have caught our attention.

Michael Cohen, Donald Trump’s former lawyer, used Google’s AI tool, Bard, to find cases in support of his release from post-prison supervision. Cohen’s lawyer, David Schwartz, neglected to verify the cases before including them in his brief and, as it turned out, some of the cases were fake. The use of AI in legal research has judges and courts on alert, as the legal system grapples with governing, and in some cases banning, AI for such purposes.

The U.S. Copyright Office plans to implement a “group registration option” that would allow online publishers to register their news websites, most of which are updated frequently, as collective works with a deposit of identifying material, instead of the entire website. Typically, copyright registration requires the publisher to submit two complete copies of the “best edition of the work,” but this is often not practical for constantly changing online news sites.

The LGBTQ+ dating app Grindr defeated a lawsuit brought against it by an underage user who claimed that, due to a design defect, the app did not verify his age and connected him with four men who sexually abused him. U.S. District Judge Otis Wright in the Central District of California dismissed the case, holding that Section 230 barred the plaintiff’s claims. The court rejected the plaintiff’s invocation of the Ninth Circuit’s decision in Lemmon v. Snap, which held that Section 230 did not apply to negligent design claims that did not treat the defendant website as a “publisher or speaker” of “information provided by another information content provider.” Unlike in Lemmon, the court held, the plaintiff’s claims here were, in fact, based on Grindr’s publication of third-party content, namely geolocation data and user profiles. The court also rejected the plaintiff’s argument that FOSTA’s sex trafficking exception to Section 230 should apply, noting that the plaintiff did not allege that Grindr’s own conduct violated the underlying sex trafficking laws and that, even if Grindr “turned a blind eye” to sex trafficking on its app, as the plaintiff alleged, that was not sufficient to come within the FOSTA exception to the Section 230 safe harbor.

The No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act, or No AI FRAUD Act, is a bill proposed in the House of Representatives that would create a new federal framework to regulate the use of AI technology to clone people’s voices and likenesses. Introduced by Representatives María Elvira Salazar (R, Florida) and Madeleine Dean (D, Pennsylvania), the bill aims to unify the current patchwork of state laws surrounding the right of publicity. According to Dean, “Not only does our bill protect artists and performers, but it gives all Americans the tools to protect their digital personas. By shielding individuals’ images and voices from manipulation, the No AI FRAUD Act prevents artificial intelligence from being used for harassment, bullying, or abuse.”

As of late December 2023, several states were considering laws for 2024 that would curtail or restrict the use of AI in political campaigns, reflecting growing concerns about the new technology’s use in creating and disseminating misinformation about candidates and issues, often in the form of “deepfakes.” According to Bloomberg Law, “Deepfakes are images or videos of a person’s likeness or other related things that have been digitally altered in a bid to misrepresent what happened in reality.” These proposed bills would require disclosures if AI technology was used to generate images, video, or audio of candidates. Approximately 25 states have introduced or passed legislation regarding AI in political campaigns.

Google is implementing a Chrome browser feature that cuts off websites’ access to third-party cookies. The initial implementation of the feature, referred to as Tracking Protection, rolled out to 1% of Chrome users in early January. When activated, the feature blocks a website’s access to third-party cookies, which websites use to track user behavior and target users with ads and other marketing content.

The new year also saw news organizations ask Congress for clarification that the use of copyrighted journalistic content to train generative AI large language models is not fair use. Condé Nast CEO Roger Lynch commented, “if Congress could clarify that the use of our content for training and output of AI models is not fair use, then the free market will take care of the rest.” Senator Josh Hawley, one of the lawmakers who is considering introducing AI legislation, had this to offer: “If the AI companies—which are really just the big tech companies—if their reading of fair use prevails, fair use is going to be the exception that swallows the rule. . . . We’re not going to have any copyright law left.”