
Technology
9th U.S. Circuit Court of Appeals

Jul. 1, 2021

9th Circuit panel embraces mandating online filters, censoring users’ speech

The majority opinion encourages Congress to amend Section 230 to legally mandate that platforms adopt automated content filters for “dangerous” content, notwithstanding the fact that the First Amendment contains no exception for dangerous speech.

Aaron Mackey

Senior Staff Attorney, Electronic Frontier Foundation

The 9th U.S. Circuit Court of Appeals' decision last week in three consolidated cases seeking to hold online services liable for terrorist attacks -- coupled with the panel's endorsement of mandating automated filters to remove extremist content -- will result in the removal of lawful speech. Worse, expanded use of automated filters will likely take down even more protected expression than those filters already do, sweeping up human rights work, journalism, and a host of other important speech in their overbroad net.

In Gonzalez v. Google, 2021 DJDAR 6167 (9th Cir. June 22, 2021), the court held that online services hosting user-generated speech could be liable for damages based on claims that terrorists used their platforms to spread their hateful messages online, recruit members or otherwise inspire perpetrators of attacks, and claim credit for attacks after they occurred. The services are potentially liable even though the complaints contain no allegations that they played any direct role in the attacks or otherwise directly assisted the individuals who perpetrated the attacks.

The panel held that the federal law immunizing online intermediaries from civil claims based on their users' content, 47 U.S.C. Section 230, does not apply to allegations that a platform shared advertising revenue with users who created the alleged terrorist content. That relationship, the 9th Circuit held, can serve as the basis for civil liability under the Anti-Terrorism Act's aiding and abetting prohibition, and thus allowed claims based on these allegations to proceed in one of the cases, Taamneh v. Twitter.

Of particular concern, two members of the panel, Judges Ronald M. Gould and Marsha S. Berzon, called on the full 9th Circuit to take the cases en banc to allow for even greater liability for hosting speech related to terrorism by overturning the court's previous decisions interpreting Section 230. That outcome would dramatically limit one of the most important laws protecting internet users' speech.

To be certain, the terrorist attacks at the heart of the complaints in Gonzalez and the related cases were criminal acts of violence that caused deaths, injuries, and trauma. And the plaintiffs can and should pursue federal civil claims against the individuals and organizations responsible for that violence.

But in holding that online intermediaries could be liable for those attacks based on hosting user-generated speech far removed from the violent acts themselves, the Gonzalez decision is likely to result in the removal of protected expression about terrorism, international politics, conflicts abroad, and human rights abuses.

Filters Already Over-Censor Legitimate Speech

As Abdul Rahman Al Jaloud, Hadi Al Khatib, Jeff Deutch, Dia Kayyali, and Jillian C. York write in "Caught in the Net: The Impact of 'Extremist' Speech Regulations on Human Rights Content," large online platforms such as Facebook, YouTube and Twitter already employ faulty and blunt automated systems to flag and remove content they deem to be extremist, ironically resulting in the removal of valuable content like documentation of human rights violations.

In light of Gonzalez, these platforms are likely to double down on their automated filtering efforts, compounding errors and removing even more lawful speech out of concern that some user content may later be used in lawsuits like Gonzalez. The decision will also likely increase wrongful takedowns based on claims that another's content endorses extremism or terrorism, a heckler's veto used to target the speech of politically disempowered groups in the United States and abroad.

Beyond Section 230, the 9th Circuit's decision also creates a serious First Amendment problem: it exposes intermediaries to liability even when they do not know about, much less intend to distribute, the user-generated content that could later serve as grounds for liability under Gonzalez.

This legal regime chills speech because it rewards overzealous censorship: services will mitigate their legal risks by removing anything arguably close to the line. When hosting users' speech tangentially related to terrorism or terrorists is grounds for potential liability, intermediaries have zero incentive to host that speech, even when it does not incite violence, constitute a true threat, or otherwise fall within one of the First Amendment's narrow exceptions.

The organization I work for, the Electronic Frontier Foundation, submitted an amicus curiae brief in Gonzalez arguing that expanding intermediary liability under these circumstances would harm users' speech and violate the First Amendment. The court accepted the brief but expressly stated that it would not consider the First Amendment argument because it was not raised by the parties.

Court Misunderstands Technology's Limits

Rather than recognizing this threat to online speech, the 9th Circuit panel embraces it. The majority opinion encourages Congress to amend Section 230 to legally mandate that platforms adopt automated content filters for "dangerous" content, notwithstanding the fact that the First Amendment contains no exception for dangerous speech.

The court's suggestion betrays a startling lack of knowledge about the current state of technology and the practical realities of moderating user-generated content at scale -- that is, content produced by millions or billions of users. The court is mistaken to assume the technology used to remove child sexual abuse material can be easily used to "detect and isolate at least some dangerous content."

The automated filtering tools used to identify child sexual abuse material rely on algorithms that create unique "hash values" of known abusive images, which function like digital fingerprints. These tools are largely limited to identifying copies of images they have already hashed, in part because automated technology's ability to do more than find exact matches is notoriously error prone. As Emma Llansó, Joris van Hoboken, Paddy Leerssen, and Jaron Harambam report in "Artificial Intelligence, Content Moderation, and Freedom of Expression," tools designed to detect nudity often flag images of deserts because they confuse sand with skin tones. And the blogging platform Tumblr's automated nudity filter was comically bad, flagging a wide range of innocuous content, from pictures of a patent application to a comic featuring a cat.
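
To make that limitation concrete, below is a minimal sketch in Python of how hash-matching filters operate. The names are my own, purely illustrative; real systems such as Microsoft's PhotoDNA use perceptual hashes that tolerate small alterations, rather than the cryptographic hash shown here, which matches only exact copies. The basic constraint is the same: the filter flags only media that has already been reviewed and fingerprinted, and it cannot recognize anything new.

    import hashlib

    # Illustrative database of fingerprints for media that human reviewers
    # have already identified. (Hypothetical names; real systems use
    # perceptual hashing so minor edits still match.)
    known_hashes = set()

    def fingerprint(data):
        """Hash the raw bytes of an uploaded image or video."""
        return hashlib.sha256(data).hexdigest()

    def add_known_media(data):
        """Record an item after human review has identified it."""
        known_hashes.add(fingerprint(data))

    def is_known(data):
        """Flag an upload only if it exactly matches already-hashed media.

        The filter never evaluates what the content depicts or why it was
        posted; it cannot recognize new material it has never seen before.
        """
        return fingerprint(data) in known_hashes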

The law also plays a role in the success of child sexual abuse media filters. It's easier to target images reflecting child sexual abuse because under the First Amendment, child pornography enjoys no protection and is illegal to create, possess, sell, or otherwise distribute. New York v. Ferber, 458 U.S. 747 (1982). Because the content is itself illegal, filters do not have to understand anything about the content beyond the fact that it matches a known, illegal piece of media.

As the "Artificial Intelligence" authors report, tools designed to flag images and other media cannot adequately recognize context within and surrounding images. That context, however, is crucial to factual and legal determinations regarding whether any piece of expressive material is protected by the First Amendment. For example, automated filters cannot identify nuance, satire, and other legally significant context required for a fair use analysis. EFF maintains a Hall of Shame that documents these overbroad takedowns by automated tools used to identify purportedly copyrighted material, which have ensnared people publishing recordings of static and public domain government works like the Mueller Report.

Mandating automated filters that censor both legitimate and toxic speech will not solve the deeply rooted social, economic, and political problems that give rise to extremism in the first place. Instead, these filters will only undermine the internet's role as the forum for democratic debate that the First Amendment envisions, hampering our ability to learn directly about terrorist violence, to self-govern, and to identify solutions that address extremism and acts of terrorism.
