The Internet Movie Database (IMDb) has filed suit to overturn a law that requires the popular entertainment website to remove the ages or birth dates of people in the entertainment industry upon request.

Vine might not be history after all.

Twitter users posted more than one billion election-related tweets between the first presidential debate and Election Day.

Facebook is testing a feature that allows company Page administrators to post job ads and receive applications from candidates.

People who create or encourage others to use “derogatory hashtags” on social media could be prosecuted in England and Wales.

A new “tried it” checkmark on pins will allow Pinterest users to share the products and projects they’ve purchased or attempted.

Did social media ads allow political campaigns to circumvent state laws prohibiting the visible promotion of candidates within a certain distance of polling places?

The Eighth Circuit held that a college has the right to expel a student from its nursing program for inappropriate social media posts about his classmates, including the suggestion that he would inflict on one of them a “hemopneumothorax”—a lung puncture.

Law enforcement officials are increasing their use of social media to locate missing persons.

An unemployed single mother in California is facing several misdemeanor charges for selling her ceviche over social media.

Coming soon to a vending machine near you: Snapchat Spectacles (but only if you live in a densely populated area like New York or Los Angeles).

Social media analytics firms claim that social media did a better job at predicting Trump’s win than the polls.

Instagram now allows users to hide offensive comments posted to their feeds. Take that, trolls!

Soon you’ll be able to watch Twitter content like NFL Thursday Night Football on a Twitter app on Apple TV, Xbox One and Amazon Fire TV.

“Ballot selfie” laws—laws that prohibit posting online photos of completed election ballots—are being challenged in Michigan and New Hampshire.

Google may be recording you regularly.

YouTube content creators can now communicate with their followers in real time.

AdBlock Plus has launched a service that allows website operators to display “acceptable” ads to visitors using the popular ad blocking software. Irony, anyone?

The EU might soon require the same things of chat apps like Skype that it requires of telecom businesses.

A controversial proposal aims to give the EU’s 500 million consumers more digital streaming content choices.

An Austrian teen whose parents overshared on social media looks to the law for recourse.

Baltimore County officials warned government employees to watch what they say on social media.

With so many alternative content providers around these days, why do we still watch so much TV?

Here’s a list of 50 Snapchat marketing influencers who Mashable says are worth following.

Instagram now allows users to zoom in on photos in their feeds and at least 11 brands are already capitalizing on the new feature.

Pinterest acquired Instapaper, a tool that allows you to cache webpages for reading at a later time.

A social-media celebrity with 500,000 followers and a lot of people interacting with his or her content could bring in how much for a single post?!

Snapchat’s first investor shares his secret for identifying the next big app.

SEC steps up scrutiny of investment advisers’ use of social media.

As younger audiences’ primary source of news, social media has understandably affected photojournalism.

Should social media companies establish guidelines for when they will—and will not—heed police officers’ requests to suspend suspects’ accounts?

Meet the officer behind the viral Facebook page of a small New England city’s police department.

Wondering whether you should hit “reply all” when someone has mistakenly included you on an email chain? The New York Times has one word for you.

Twitter took steps to remedy its harassment problem.

In addition, over the last six months, Twitter suspended 235,000 accounts that promoted terrorism.

The Washington Post is using language-generation technology to automatically produce stories on the Olympics and the election.

Video ads are going to start popping up on Pinterest.

Does it make sense for big brands to invest in expensive, highly targeted social media advertising? Procter & Gamble doesn’t think so.

These brands are using Facebook in particularly effective ways during the Olympic games.

Since we first reported on the phenomenon nearly two years ago, Facebook has become an increasingly common vehicle for serving divorce papers.

Across the country, states are grappling with the conflict between existing laws that prohibit disclosing ballot information or images and the growing phenomenon of “ballot selfies”—photos posted to social media of people at the polls casting their ballots or of the ballots themselves.

Creating dozens of Facebook pages for a single brand can help marketers increase social media engagement and please the Facebook algorithm gods, according to Contently.

Here’s how Snapchat makes money from disappearing videos.

A Harvard Business Review article advises marketers to start listening to (as opposed to managing) conversations about their brands on social media.

For intel on what it can do to keep teens’ attention, Instagram goes straight to the source.

Facebook Messenger joins the elite “one billion monthly users” club just four years after its release as a standalone app.

A Canadian judge ordered a couple convicted of child neglect to post to all their social media accounts his decision describing their crime.

Leslie Jones of Ghostbusters highlights Twitter’s trolling problem. One tech columnist says the platform needs to rethink its application programming interface strategy to enable users and communities to insulate themselves from abuse.

Don’t drive and Facebook Live.

Google erased Dennis Cooper’s 14-year-old blog without warning or explanation. We recently examined the outcome of lawsuits challenging a platform’s right to remove user content (spoiler alert: the platforms usually win).

Twitter now lets anyone apply to get verified.

Researchers say there’s a correlation between an increase in the psychological stress that teens suffer and the amount of time they’re spending on social media.

A Playboy model who “fat-shamed” a woman by photographing her and posting the photo to Snapchat risks prosecution.

Forensic psychologists explain why people post evidence of their crimes to social media.

We may soon have a federal law making revenge porn illegal. Our blog post from 2014 took a look at some of the legal issues raised by revenge porn.

There’s now a dating app that sets people up on Pokémon Go dates. Want to know more about the most popular mobile game of all time? Read our Pokémon Go Business and Legal Primer.

Earlier this year, I helped moderate a lively panel discussion on social media business and legal trends. The panelists, who represented well-known brands, didn’t agree on anything. One panelist would make an observation, only to be immediately challenged by another panelist. Hoping to generate even more sparks, I asked each panelist to identify the issue that most frustrated him or her about social media marketing. To my surprise, the panelists all agreed that online trolls were among the biggest sources of headaches.

This contentious group proceeded to unanimously bemoan the fact that the comments sections on their companies’ social media pages often devolve into depressing cesspools of invective and hate speech, scaring off customers who otherwise would be interested in engaging with brands online.

And it isn’t just our panelists who feel this way. Many online publishers have eliminated the comments sections on their websites as, over time, those sections became rife with off-topic, inflammatory and even downright scary messages.

For example, Above the Law, perhaps the most widely read website within the legal profession, recently canned its comments section, citing a change in the comments’ “number and quality.”

The technology news website Wired even put together a timeline chronicling other media companies’ moves to make the same decision, saying the change was possibly a result of the fact that, “as online audiences have grown, the pain of moderating conversations on the web has grown, too.”

Both brands and publishers are right to be concerned. Unlike consumers who visit an online branded community to voice a legitimate concern or share an invaluable insight, trolls “aren’t interested in a productive outcome.” Their main goal is harassment, and, as a columnist at The Daily Dot has observed, “People are generally less likely to use a service if harassment is part of the experience.” That’s especially true of online branded customer communities, which consumers mainly visit to get information about a brand (50%) and to engage with consumers like themselves (21%).

Of course, it’s easy for a brand to eliminate the comments section on its own website or blog. But, increasingly, brands are not engaging with consumers on their own online properties; they’re doing it on Facebook, Instagram, Twitter and other third-party social media platforms, where they typically do not have an ability to shut down user comments. Some of these platforms, however, are taking steps to rein in trolls or eliminate their opportunities to post disruptive comments altogether.

The blog comment hosting service Disqus, for example, recently unveiled a new platform feature that will allow users to “block profiles of commenters that are distracting from their online discussion experience.” The live video streaming app Periscope also recently took measures to rein in trolls, enabling users to flag what they consider to be inappropriate comments during a broadcast. If a majority of randomly selected viewers vote that the flagged comment is spam or abusive, the commenter’s ability to post is temporarily disabled. And even Facebook, Instagram and Twitter have stepped up their efforts to help users deal with harassment and unwanted messages.
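To make Periscope’s jury mechanic concrete, here is a minimal Python sketch of how such a flag-and-vote system might work. Everything in it, from the class and method names to the sample size and mute duration, is our own hypothetical construction, not Periscope’s actual implementation.

```python
import random
import time
from dataclasses import dataclass, field

@dataclass
class CommentJury:
    """Toy model of a Periscope-style comment jury (all names hypothetical)."""
    jury_size: int = 5          # viewers sampled per flagged comment
    mute_seconds: int = 60      # length of the temporary posting ban
    muted_until: dict = field(default_factory=dict)  # commenter_id -> unmute time

    def review(self, commenter_id, viewers, get_vote):
        """Poll a random sample of viewers about a flagged comment.

        `get_vote(viewer)` returns True if that viewer deems the comment
        spam or abuse. A strict majority temporarily mutes the commenter.
        """
        jurors = random.sample(viewers, min(self.jury_size, len(viewers)))
        abuse_votes = sum(1 for v in jurors if get_vote(v))
        if abuse_votes > len(jurors) / 2:
            self.muted_until[commenter_id] = time.time() + self.mute_seconds
            return True
        return False

    def can_post(self, commenter_id):
        """A commenter may post again once the mute window has elapsed."""
        return time.time() >= self.muted_until.get(commenter_id, 0.0)
```

The design choice mirrored here is worth noting: moderation is delegated to a random sample of the live audience rather than to platform staff, which keeps decisions fast and makes the vote harder for any coordinated group to game.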

Brands, however, are seeking a greater degree of control over user comments than even Disqus and Periscope currently offer. Given that branded content and advertising are crucial components of many social media platforms’ business models, we can expect to see platforms becoming more willing to provide brands with tools to address their troll concerns.

In fact, the user-generated content site Reddit has already taken steps in this direction. Because of its notorious trolling problem, Reddit has had trouble leveraging its large and passionate user base. Last year, in an effort to capitalize on the platform’s ability to identify trending content and create a space where brands wouldn’t be afraid to advertise, Reddit launched Upvoted, a site that culls news stories from Reddit’s popular subgroups and doesn’t allow comments.

Other platforms will presumably follow Reddit’s lead in creating comment-free spaces for brands. Although this may prove to be good news for many brands, one can’t help but feel that this inevitable development undermines—just as trolls have undermined—the single most exciting and revolutionary aspect of social media for companies: the ability to truly engage one-on-one with customers across the entire customer base.

*    *    *

This post is a version of an op-ed piece that originally appeared in MarketWatch.

For other Socially Aware posts addressing online marketing issues, please see the following:  Influencer Marketing: Tips for a Successful (and Legal) Advertising Campaign; Innovative Social Media Marketing Cannot Overlook Old-Fashioned Compliance; and Will Ad Blockers Kill Online Publishing?  Also, check out our Social Media Marketing infographic.


The Great Instagram Logo Freakout of 2016.

A UK council policy reportedly grants its members power to spy on residents by setting up fake Facebook profiles.

Guess who spends more of their workday on social media, women or men?

Lessons from one of YouTube’s first (and most successful) stars.

Should sharing tragic images on social media be against the law?

A team of Google employees proposed adding new emojis to represent women in professional situations.

Has social media forced everyone to brand themselves?

Nearly 500 startup companies offer tech solutions for the legal industry.

Using social media to speed up organ donation.

Suicide on Periscope prompts France to open inquiry.

Now an online service helps people to prepare their “digital legacy.” Will the “social media assets after death” issue never die? (Sorry, we couldn’t resist.)

Study finds social media manipulation is the most common form of sextortion.

We’re trying something new here at Socially Aware: In addition to our usual social-media and tech-law analyses and updates, we’re going to end each work week with a list of links to interesting social media stories around the Web, primarily things that caught our eye during the week that we may or may not ultimately write about in a future blog post.

Here’s our first list – enjoy!

Should prisoners be allowed to have Facebook pages?

Why do older people love Facebook? A New York Times writer asked her 61-year-old dad.

Judge upholds ex-cop’s murder conviction despite defense’s claim that juror’s Facebook posts evidenced a dislike for police.

The CIA’s venture capital arm is investing in companies that develop artificial intelligence to sift through enormous numbers of social media postings and decipher patterns.

Another cringe-worthy social media marketing campaign gaffe, this time by KFC Australia.

Facebook will now allow businesses to deliver automated customer support through chatbots.

The U.S. District Court for the District of New Hampshire recently struck down on First Amendment grounds a 2014 amendment to New Hampshire Revised Statute 659:35 that made it illegal for New Hampshire voters to post pictures of their completed ballots to social media. While several states have laws that disallow ballot sharing, RSA 659:35 was the first “to explicitly ban voters from sharing their marked ballots on social media.”

The case, Rideout v. Gardner, was filed by the New Hampshire Civil Liberties Union on behalf of three voters who were being investigated by the New Hampshire Attorney General’s Office for violating the law banning ballot selfies during the September 2014 Republican primary elections.

The court first determined that RSA 659:35 was a content-based restriction on speech because it necessarily required regulators to “examine the content of the speech to determine whether it includes impermissible subject matter”—i.e., photographs of completed ballots.

The court then held that the statute could not meet the strict scrutiny standard that applies to content-based speech, “which requires the Government to prove that the restriction furthers a compelling interest and is narrowly tailored to achieve that interest.”

Paraphrasing a 2011 Supreme Court case, the Rideout court noted that, “[f]or an interest to be sufficiently compelling, the state must demonstrate that it addresses an actual problem.” The state had argued that the law was needed to prevent vote buying and voter intimidation, but the court was not convinced. In fact, the plaintiffs had produced evidence that vote buying had not been the subject of a single prosecution or complaint in New Hampshire since 1976.

The court also noted that the state had “failed to identify a single instance anywhere in the United States in which a credible claim has been made that digital or photographic images of completed ballots have been used to facilitate vote buying or voter coercion.”

Finally, the court held that the law was not sufficiently narrowly tailored, because the “few who might be drawn into efforts to buy or coerce their votes are highly unlikely to broadcast their intentions via social media.” Thus, investigations for violation of RSA 659:35 “will naturally tend to focus on the low-hanging fruit of innocent voters who simply want the world to know how they have voted for entirely legitimate reasons” and will likely “punish only the innocent while leaving actual participants in vote buying and voter coercion schemes unscathed.”

Moreover, the state had an obvious, less-restrictive alternative: “[I]t can simply make it unlawful to use an image of a completed ballot in connection with vote buying and voter coercion schemes.”

Social networking platforms have long faced the difficult task of balancing the desire to promote freedom of expression with the need to prevent abuse and harassment on their sites. One of social media’s greatest challenges is to make platforms safe enough so users are not constantly bombarded with offensive content and threats (a recent Pew Research Center study reported that 40% of Internet users have experienced harassment), yet open enough to foster discussion of complex, and sometimes controversial, topics.

This past year, certain companies have made some noteworthy changes. Perhaps most notably, Twitter, long known for its relatively permissive stance regarding content regulation, introduced automatic filtering and stricter language in its policies regarding threatening language. Also, Reddit, long known as the “wild wild west” of the Internet, released a controversial new anti‑harassment policy and took unprecedented proactive steps to regulate content by shutting down some of the site’s more controversial forums.

According to some, such changes came as a result of several recent, highly publicized instances of targeted threat campaigns on such platforms, such as “Gamergate,” a campaign against female gaming journalists organized and perpetrated over Twitter, Reddit and other social media platforms. Below we summarize how some of the major social networking platforms are addressing these difficult issues.

Facebook

Facebook’s anti-harassment policy and community standards have remained relatively stable over time. However, in March 2015, Facebook released a redesign of its Community Standards page in order to better explain its policies and make it easier to navigate. This was largely a cosmetic change.

According to Monika Bickert, Facebook’s head of global policy management, “We’re just trying to explain what we do more clearly.”

The rules of conduct are now grouped into the following four categories:

  1. “Helping to keep you safe” details the prohibition of bullying and harassment, direct threats, criminal activity, etc.
  2. “Encouraging respectful behavior” discusses the prohibition of nudity, hate speech and graphic content.
  3. “Keeping your account and personal information secure” lays out Facebook’s policy on fraud and spam.
  4. “Protecting your intellectual property” encourages users to only post content to which they own the rights.

Instagram

After a series of highly publicized censorship battles, Instagram updated its community standards page in April 2015 to clarify its policies. These more-detailed standards for appropriate images posted to the site are aimed at curbing nudity, pornography and harassment.

According to Nicky Jackson Colaco, director of public policy, “In the old guidelines, we would say ‘don’t be mean.’ Now we’re actively saying you can’t harass people. The language is just stronger.”

The old guidelines comprised a relatively simple list of do’s and don’ts—for example, the policy regarding abuse and harassment fell under Don’t #5: “Don’t be rude.” By contrast, the new guidelines are much more fleshed out. They clearly state, “By using Instagram, you agree to these guidelines and our Terms of Use. We’re committed to these guidelines and we hope you are too. Overstepping these boundaries may result in a disabled account.”

According to Jackson Colaco, there was no one incident that triggered Instagram’s decision. Rather, the changes were catalyzed by continuous user complaints and confusion regarding the lack of clarity in content regulation. In policing content, Instagram has always relied on users to flag inappropriate content rather than actively patrolling the site for offensive material.

The language of the new guidelines now details several explicit rules, including the following:

  1. Nudity. Images of nudity and of an explicitly sexual nature are prohibited. However, Instagram makes an exception for “photos of post‑mastectomy scarring and women actively breastfeeding.”
  2. Illegal activity. Offering sexual services, buying or selling drugs (as well as promoting recreational use) is prohibited. There is a zero-tolerance policy for sexual images of minors and revenge porn (including threats of posting revenge porn).
  3. Harassment. “We remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages…We carefully review reports of threats and consider many things when determining whether a threat is credible.”

Twitter

Twitter has made two major rounds of changes to its content regulation policies in the past year. These changes are especially notable given Twitter’s historically permissive approach to content regulation.

In December 2014, Twitter announced a set of new tools to help users deal with harassment and unwanted messages. These tools allow users to more easily flag abuse and describe their reasons for blocking or reporting a Twitter account in more specific terms. While in the past Twitter had allowed users to report spam, the new tools allow users to report harassment, impersonations, self‑harm, suicide and, perhaps most interestingly, harassment on behalf of others.

Within “harassment,” Twitter allows the user to report multiple categories: “being disrespectful or offensive,” “harassing me” or “threatening violence or physical harm.” The new tools have also been designed to be more mobile-friendly.

Twitter also released a new blocked accounts page during this round of changes. This feature allows users to more easily manage the list of Twitter accounts they have blocked (rather than relying on third-party apps, as many did before). The company also changed how the blocking system operates. Before, blocked users could still tweet and respond to the blocker; they simply could not follow the blocker. Now, blocked accounts will not be able to view the profile of the blocker at all.
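For illustration only, the before-and-after block semantics described above might be modeled like this in Python; the class and method names are invented for the example and do not reflect Twitter’s actual code.

```python
class BlockList:
    """Toy model of Twitter's blocking rules, old versus new (hypothetical API)."""

    def __init__(self, use_new_rules=True):
        self.use_new_rules = use_new_rules
        self._blocks = set()  # (blocker, blocked) pairs

    def block(self, blocker, blocked):
        self._blocks.add((blocker, blocked))

    def can_follow(self, viewer, owner):
        # Under both the old and new rules, a blocked user cannot follow.
        return (owner, viewer) not in self._blocks

    def can_view_profile(self, viewer, owner):
        # Old rules: blocked users could still see (and tweet at) the blocker.
        # New rules: blocked users cannot view the blocker's profile at all.
        if not self.use_new_rules:
            return True
        return (owner, viewer) not in self._blocks
```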

In April 2015, Twitter further cracked down on abuse and unveiled a new filter designed to automatically prevent users from seeing harassing and violent messages. For the first time, all users’ notifications will be filtered for abusive content. This change came shortly after an internal memo from CEO Dick Costolo leaked, in which he remarked, “We suck at dealing with abuse and trolls on the platform, and we’ve sucked at it for years.”

The new filter will be automatically turned on for all users and cannot be turned off. According to Shreyas Doshi, head of product management, “This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of the Tweet to other content that our safety team has in the past independently determined to be abusive.”
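As a rough sketch of what such signal-based filtering could look like, the following Python combines two of the signals Doshi mentions: account age and similarity to content previously judged abusive. The weights, the threshold, and the function names are assumptions made up for this example, not Twitter’s actual model.

```python
def jaccard(a, b):
    """Jaccard similarity between two token sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def abuse_score(tweet_text, account_age_days, known_abusive_texts):
    """Score a tweet using two illustrative signals: similarity to content
    previously flagged as abusive, and the newness of the posting account."""
    tokens = set(tweet_text.lower().split())
    similarity = max(
        (jaccard(tokens, set(t.lower().split())) for t in known_abusive_texts),
        default=0.0,
    )
    # Treat newer accounts as riskier; the effect fades after ~30 days.
    newness = max(0.0, 1.0 - account_age_days / 30.0)
    return 0.7 * similarity + 0.3 * newness  # weights are invented

def should_filter(tweet_text, account_age_days, known_abusive_texts, threshold=0.5):
    """Hide the tweet from notifications if its score crosses the threshold."""
    return abuse_score(tweet_text, account_age_days, known_abusive_texts) >= threshold
```

A production system would of course use far richer signals and learned weights; the point is only that the filter Twitter describes amounts to a scoring function over account and content features.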

Beyond the filter, Twitter also made two changes to its harassment policies. First, the rules against threatening language have been strengthened. While “direct, specific threats of violence against others” were always banned, that prohibition is now much broader and includes “threats of violence against others or promot[ing] violence against others.”

Second, users who breach the policies will now face heavier sanctions. Previously, the only options were to either ban an account completely or take no action (resulting in much of the threatening language not being sanctioned at all). Now, Twitter will begin to impose temporary suspensions for users who violate the rules but whose violation does not warrant a full ban.

Moreover, since Costolo’s statements, Twitter has tripled the size of its team handling abuse reports and added rules prohibiting revenge porn.

Reddit

In March 2015, Reddit prohibited the posting of several types of content, including anything copyrighted or confidential, violent personalized images and unauthorized photos or videos of nude or sexually excited subjects.

Two months later, Reddit unveiled a controversial new anti-harassment policy that represented a significant shift from Reddit’s long‑time reputation as an online free-for-all. The company announced that it was updating its policies to explicitly ban harassment against users. Some found this move surprising, given Reddit’s laissez-faire reputation and the wide range of subject matter and tone it had previously allowed to proliferate on its site (for example, Reddit only expressly banned sexually explicit content involving minors three years ago after much negative PR).

In a blog post titled “promote ideas, protect people,” Reddit announced it would be prohibiting “attacks and harassment of individuals” through the platform. According to Reddit’s former CEO Ellen Pao, “We’ve heard a lot of complaints and found that even our existing users were unhappy with the content on the site.”

In March 2015, Reddit also moved to ban the posting of nude photos without the subjects’ consent (i.e., revenge porn). In discussing the changes in content regulation, Alexis Ohanian, executive chairman, said, “Revenge porn didn’t exist in 2005. Smartphones didn’t really exist in 2005…we’re taking the standards we had 10 years ago and bringing them up to speed for 2015.” Interestingly, rather than actively policing the site, Reddit will rely on members to report offensive material to moderators.

Reddit’s new policy defines harassment as: “systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that Reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them.”

As a result of the new policies, Reddit permanently removed five subreddits (forums) from the site: two dedicated to fat-shaming, one to racism, one to transphobia and one to harassing members of a progressive website. Apart from the expected criticisms of censorship, some commentators have condemned Reddit for the seemingly random selection of these specific subreddits. Even though these subreddits have been removed, many other offensive subreddits remain, including a violently anti-black subreddit and one dedicated to suggestive pictures of minors.

Google

In June 2015, Google took a major step in the battle against revenge porn, a form of online harassment that involves publishing private, sexually explicit photos of someone without that person’s consent. Adding to the damage, such photos may appear in Google search results for the person’s name. Google has now announced that it will remove such images from search results when the subject of the photo requests it.

Amit Singhal, senior vice president of Google Search, stated, “This is a narrow and limited policy, similar to how we treat removal requests for other highly sensitive personal information, such as bank account numbers and signatures, that may surface in our search results.” Some have questioned, though, why it took so long for Google to treat private sexual information similarly to other private information.

As social media grows up and becomes firmly ensconced in the mainstream, it is not surprising to see the major players striving to make their platforms safer and more comfortable for the majority of users. It will be interesting, though, to watch as the industry continues to wrestle with the challenge of instituting these new standards without overly restricting the free flow of content and ideas that made social media so appealing in the first place.