Cover Story

Free speech or censorship? Social media litigation is a hot legal battleground



Photo illustration by Brenan Sharp/Shutterstock

The now-retired U.S. Supreme Court Justice Anthony Kennedy, in an opinion in a 2017 First Amendment case, called the cyber age a revolution of historic proportions, noting that “we cannot appreciate yet its full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be.”

Kennedy said cyberspace, and social media in particular, was among the “most important places ... for the exchange of views.” He compared the internet to a public forum, akin to a public street or park. Although Justice Samuel A. Alito concurred in the judgment, he chastised Kennedy for his “undisciplined dicta” and “unnecessary rhetoric.”

But Kennedy’s lofty language in Packingham v. North Carolina accurately identified where the greatest battleground for free expression, both nationally and globally, now lies: online, on social media. “That language reflects Justice Kennedy’s long-standing view that the public forum doctrine should not remain frozen in time, limited to protecting public squares and public parks, while new forums for public debate go unprotected,” explains free-speech expert Kevin O’Neill, a professor at Cleveland-Marshall College of Law. “It will be interesting to see whether today’s judges respond to his call.”

This hot battleground raises serious concerns about the future of free speech, including attempts by government actors to censor critical comments on social media, the shifting standards private platforms use to police online expression, and the rise of hate and extremist speech in the digital world.


The ABA, recognizing the urgency and timeliness of these issues, has chosen “Free Speech, Free Press, Free Society” for this year’s Law Day theme. ABA President Bob Carlson notes that these themes have dominated public discourse and debate recently.

Government blocking

One issue involves government officials blocking or removing critical comments online. Such actions cut against the core First Amendment principle that individuals have the right to criticize government officials. In the landmark free-press/libel decision New York Times Co. v. Sullivan (1964), Justice William Brennan wrote that there is a “profound national commitment that debate on public issues should be uninhibited, robust, and wide-open, and that it may well include vehement, caustic, and sometimes unpleasantly sharp attacks on government and public officials.”

More recently, the case of Knight First Amendment Institute at Columbia University v. Trump presents these issues in pristine form. President Donald Trump and White House staffer Daniel Scavino were accused of violating the First Amendment by blocking several people from Trump’s engine of self-expression, his personal Twitter account, @realDonaldTrump. The plaintiffs’ tweets were not vulgar, but they criticized the president and his policies. For example, one of the plaintiffs was blocked after tweeting: “To be fair, you didn’t win the WH: Russia won it for you.”

In May 2018, Judge Naomi Reice Buchwald of the U.S. District Court for the Southern District of New York ruled that the president violated the blocked users’ First Amendment rights by engaging in impermissible viewpoint discrimination. She reasoned that while Twitter is a private company, Trump and his staffer exercised government control over the content of the tweets by blocking users who criticized the president in the interactive space on Twitter. The judge determined that this interactive space was a designated public forum and that the president could not discriminate against speakers because of their viewpoints.

The government appealed the decision to the New York City-based 2nd U.S. Circuit Court of Appeals. In its appellate brief, the government argues that the district court decision is “fundamentally misconceived” in part because “the @realDonaldTrump account belongs to Donald Trump in his personal capacity and is subject to his personal control, not the control of the government.” In other words, the government contends that Trump’s Twitter feed is not the speech of the government and thus not subject to First Amendment dictates.

On the other hand, the Knight First Amendment Institute at Columbia contends that the interactive space on Twitter, where individuals can tweet responses to the president’s expression, represents a designated public forum—a space the government has intentionally opened up for the expression of views. The Knight Institute argues that Trump and Scavino violated the most fundamental of all free-speech principles: that the government cannot engage in viewpoint discrimination against private speakers.

“The case is a game-changer for both free speech and the right to petition the government,” says Clay Calvert, director of the Marion B. Brechner First Amendment Project in the University of Florida College of Journalism and Communications. “The district court’s ruling highlights not only the importance of online social media platforms’ forums for interacting with government officials, but also confirms that when government officials use nongovernment entities like Twitter to comment on policy and personnel matters, the First Amendment comes into play.”

“I think it is potentially very important,” agrees constitutional law expert Erwin Chemerinsky, dean of the University of California at Berkeley School of Law and a contributor to the ABA Journal. “It is not just about Trump, but ultimately about government officials at all levels to exclude those who disagree with them from important media of communications.”

The decision is also important because there are countless disputes involving state and local government officials who have blocked users or removed comments critical of them. In April 2018, Maryland Gov. Larry Hogan agreed to a settlement with the American Civil Liberties Union of Maryland in a federal lawsuit over his blocking of critics from his Facebook page.

Under the settlement, the government admitted no liability but did agree to a new social media policy and the creation of a new “Constituent Message Page” that allows individuals to post their political expression, even if critical.

The blocking of critical speakers from Twitter feeds or comment sections on government pages is far from the only First Amendment issue on social media. The internet has led to a cottage industry of defamation lawsuits arising from intemperate online expression.

For example, a federal district court in California recently reasoned that the president did not defame Stormy Daniels, an adult film actress who claimed she engaged in an intimate relationship with Trump in 2006. Daniels, whose real name is Stephanie Clifford, says that in 2011 she faced threats from an unknown man who told her to leave Trump alone. After Trump was elected president, Daniels worked with a sketch artist to produce a picture of the man. Trump tweeted: “A sketch years later about a nonexistent man. A total con job, playing the Fake News Media for Fools (but they know)!” Daniels sued the president for defaming her, but the U.S. District Court for the Central District of California in Clifford v. Trump (2018) dismissed the suit, explaining that Trump had engaged in protected rhetorical hyperbole rather than making an actionable statement of fact.

Private censorship

Much of the censorship on social media does not emanate directly from the government. Often, the censorship comes from social media companies that police content pursuant to their own terms-of-service agreements. Outcries of political censorship abound. Recent controversies include radio show provocateur Alex Jones being removed from Facebook, YouTube and Apple for engaging in hateful speech; Facebook at least temporarily removing a newspaper’s online postings of sections of the Declaration of Independence; and uneven or inconsistent application of hate speech removal policies.

President Trump entered the arena—via Twitter, as he usually does—accusing Google of suppressing conservative speech. He tweeted: “Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation-will be addressed!”

Evelyn Aswad, a law professor at the University of Oklahoma, notes that “American social media giants have chosen to move away from their initial First Amendment inclinations for a variety of reasons, including pressure from European authorities as well as advertisers and others to ‘clean up’ their platforms.”

Positions taken by the European Union are influential because social media companies operate on a global scale, reaching millions of users living in EU countries. University of Maryland law professor Danielle Keats Citron explains that it’s difficult to determine empirically the extent of censorship there. “What I can say is that the more companies use algorithms to filter hate speech and the more the European Union pressures companies to produce results in 24 hours or less, the more censorship creep is likely,” explains Citron, who wrote the 2018 Notre Dame Law Review article “Extremist Speech, Compelled Conformity, and Censorship Creep.”


Photo of Danielle Citron courtesy of University of Maryland Carey School of Law

“Given what we do know about the bluntness of algorithms coupled with vague definitions, the more speech we will see filtered and removed,” Citron adds. “That speech will likely include critiques of hate speech and dissenting speech. That is my worry.”

Yale Law School professor Jack M. Balkin explains that free speech law in the 21st century is no longer “dualist,” consisting of a territorial government as the censor and a private individual or group of individuals as the speaker. In the 21st century, free speech is what he terms a “triangle” in his 2018 Columbia Law Review essay “Free Speech is a Triangle.” This world consists of at least three categories of speakers: nation-states, internet infrastructure companies and a variety of individual speakers.

A key concern among many is that social media companies, because they operate on a global scale, will censor material based on the requirements of the most censorious countries, nations that do not protect freedom of speech as the United States does. “As companies alter speech rules and speech operations in a wholesale way (rather than retail via country), then the strictest regime prevails,” Citron explains. “This is a considerable threat to free expression.”

“What concerns me is that we entrust a few unaccountable and self-interested tech companies to govern online discourse,” says University of Detroit Mercy law professor Kyle Langvardt, who wrote “Regulating Online Content Moderation” for the Georgetown Law Journal in 2018. “It seems obvious to me that this is an unacceptable way for a liberal society to do business.”

One possible response in the United States, with its tradition of protecting more speech than most other countries, is to hold these online platforms to First Amendment standards. Given Justice Kennedy’s language about the importance of cyberspace as a vast public forum, the question becomes whether the First Amendment could be applied to limit the censorial actions of private companies.

A significant hurdle to this is the state action doctrine, a key concept in constitutional law. The U.S. Supreme Court explained in the Civil Rights Cases (1883) that the 14th Amendment limits “state action” and not “individual invasion of individual rights.” In other words, the Constitution and the Bill of Rights limit the actions of governmental actors, not private actors.

Last year, a federal district court in Texas articulated the traditional view and ruled in Nyabwa v. Facebook that a private individual could not maintain a free-speech lawsuit against Facebook, writing: “the First Amendment governs only governmental limitations on speech.”

However, at times, the U.S. Supreme Court has stretched the state action doctrine. Perhaps most famously, the court ruled in Marsh v. Alabama (1946) that a privately owned company town was subject to First Amendment principles even though it was technically private. “Ownership does not always mean absolute dominion,” wrote Justice Hugo Black in recognizing the free-speech rights of a Jehovah’s Witness to distribute literature on the company-owned streets. “The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it.”

Could a court expand on the Marsh v. Alabama ruling and modify the state action doctrine to hold a social media entity like Facebook to First Amendment constraints? Most legal experts view this as unlikely.

“At least as things currently stand, it is unlikely any court would see platforms as being fully state actors, at least for the purposes of the First Amendment,” says St. John’s University law professor Kate Klonick, author of the 2018 Harvard Law Review article “The New Governors: The People, Rules, and Processes Governing Online Speech.”

Klonick says that Marsh v. Alabama is the high-water mark. “While the court recently found in dicta that social media spaces were akin to the ‘public square,’ and access to them could not be blocked by the state, that’s still a far cry from saying Facebook or Twitter or Google are stepping fully into the role of the state and thus should be held to a First Amendment standard,” she says.

O’Neill says that though the days of “company towns” are long gone, it might be possible, as social media becomes ever more pervasive, to invoke the old “governmental function” doctrine to establish the social media giants as state actors. “But courts have long been reluctant to expand the exceptions to the state action doctrine, so I’m doubtful about this,” he says.

Langvardt explains that even if constitutional doctrine were modified to reach such a result, it would not be practical given the realities of online speech. “In any case, I’m not sure that calling Facebook a state actor would help all that much,” he says. “Content moderation moves much faster than traditional governmental censorship, and I doubt that courts are constitutionally equipped to keep up.”

Social media companies should base their speech policies on the protections in Article 19 of the International Covenant on Civil and Political Rights, Aswad argues. “For companies to avoid infringing on international freedom of expression protections, a three-part test should be met: (1) companies should make sure their speech codes are not vague and (2) companies should select the least-intrusive means of (3) achieving legitimate public interest objectives when infringing on speech.”

Policing hate-filled content

If such private entities are not subject to First Amendment constraints, what should be the obligation of social media platforms when it comes to regulating private expression, particularly expression that advocates hate or includes calls for violence?

These issues are becoming more pressing as hate and extremist speech increases on the internet.

“While I don’t believe any hard numbers exist, with an ever-increasing number of available online platforms, it seems very likely that hate speech has risen significantly over the past decade,” says Shannon Martinez, program manager for Free Radicals Project, a group that provides support for those seeking to leave hate groups. “The internet is the main recruiting ground for most of today’s violence and hate-based groups.”

While comprehensive data may be difficult to nail down, some groups have documented rises in internet hate speech during the last presidential campaign and in the month following. The Anti-Defamation League reported that from Aug. 1, 2015, through July 31, 2016, there were more than 2.6 million tweets it considered anti-Semitic, with nearly 20,000 of them aimed at journalists. And after the 2016 election, the Southern Poverty Law Center compiled data from more than 1,800 extremist Twitter accounts and noted a rise in anti-Muslim images and memes between Nov. 8 and Dec. 8. Twitter later suspended some of those accounts.

Martinez believes that more should be done to address hate speech. “In America, however, we do not have laws which govern or define hate speech per se,” Martinez explains. “I believe that we should challenge ourselves to re-examine what we classify as harm or violence-inducing speech. Currently, we rely heavily on private tech companies to implement and uphold terms-of-service agreements to take down hate speech; however, this leaves vast swaths of the web completely unfiltered and unrestricted.”

Under the First Amendment, hate speech is a form of protected speech unless it crosses the line into narrow unprotected categories of speech, such as true threats, incitement to imminent lawless action, or fighting words. Controversy abounds over what actually constitutes hate speech. In her recent book, Hate: Why We Should Resist It with Free Speech, Not Censorship, Nadine Strossen writes that hate speech “has no single legal definition, and in our popular discourse it has been used loosely to demonize a wide array of disfavored views.”

The classic First Amendment response is the counterspeech doctrine, traced to Justice Louis Brandeis’ concurring opinion in Whitney v. California (1927), in which he wrote that the preferred remedy to harmful expression is “more speech, not enforced silence.” But many worry that the counterspeech doctrine is sometimes inadequate to address online hate. “I think counter-speech is important, but often not enough,” Chemerinsky says. “If the speech crosses the line of true threats or harassment or incitement, then action can be taken and sometimes must be taken.”

However, others believe that having the government intervene would lead to more problems. “Policing hate speech online should be left to private entities like Facebook and Twitter to better enforce their own terms of use and service,” Calvert says. “They need to do more.”

“Ideally from a pro-free speech perspective, social media companies—much like private universities—would aspire to comport with First Amendment principles, and at a minimum not discriminate against political speech based on viewpoint,” Calvert explains. “Realistically, however, these are for-profit businesses that privilege profits and their own financial gains above constitutional goals.”

But this raises the question of whether such private entities will do more to respect freedom of expression and regulate the type of speech that perhaps does need to be removed. “There is no one-size-fits-all answer to this question because these platforms operate differently and with different commitments to a transparent process for users,” says Suzanne Nossel, CEO of PEN America, a human rights and First Amendment advocacy organization for writers. “But we are concerned about the discretion that exists at the hands of these platforms, and we advocate for greater transparency for the public to understand how Facebook, Twitter, Instagram, etc., make decisions that affect their individual online expression.”

Langvardt says that the most likely path would be for Congress to step in and create some sort of administrative system to handle online censorship issues and complaints. But even that would present a First Amendment problem of another sort—that private online platforms have their own First Amendment rights of editorial discretion. “These platforms see themselves as the New York Times, and content moderation as a form of editing,” explains Langvardt. “I think this is a perverse position for the country’s pre-eminent censors to take, but there it is.”

“The queasy answer is that the very largest platforms should be subject to some kind of ongoing administrative oversight of their censorship practices,” Langvardt says. “It’s an ugly solution, but I think our society will eventually wind up there if we continue to care about free speech.”

What is clear is that Kennedy was correct when he talked about the importance of the cyber age on free expression. “The online world and social media have drastically changed the way we engage with each other and how we consume information,” Nossel says. “The law will necessarily begin setting boundaries to define acceptable forms of online expression, and it’s already doing that to some degree.”

See also:

How 2 Supreme Court cases from 1919 shaped the next century of First Amendment law

Student free speech case ‘chipped away’ at after 50 years, but ‘overall idea’ remains

 


Correction

In print and initial online versions of "Social Clashes," April, page 40, Kate Klonick's name was misspelled.

The Journal regrets the error.

David L. Hudson Jr., who teaches at Belmont University College of Law, is a regular contributor to the ABA Journal. This article originally appeared in the April 2019 ABA Journal with the headline "Social Clashes: Digital free speech is a hot legal battleground."
