This is the property of the Daily Journal Corporation and fully protected by copyright.

U.S. Supreme Court

Feb. 8, 2024

Will SCOTUS reconsider virtual child porn laws in light of deepfake culture?

Deepfake culture, the First Amendment and child pornography legislation are still at odds, but the proliferation of AI content in this arena suggests a renewed need for federal regulation.

Krista L. Baughman

Partner, Dhillon Law Group Inc.

Phone: (415) 433-1700

Email: kbaughman@dhillonlaw.com

Brooklyn Law School; Brooklyn NY

Krista's practice focuses on First Amendment and anti-SLAPP law and litigation.

When it comes to sexually explicit images generated by AI, Taylor Swift is just the tip of the iceberg. There is profound national concern about the proliferation of deepfake images, especially those involving children. While pornography made with real children is illegal nationwide, what about content that looks like child pornography but was not created using minors? There is currently no federal law regulating this content – and the last time Congress passed such a law, it was struck down on First Amendment grounds in 2002.

Would a similar law suffer the same fate in 2024?

Child pornography, meaning sexually explicit content created using real children, is outside the protections of the First Amendment and has long been prohibited by federal law. Almost 30 years ago, Congress sought to expand that protection by passing the Child Pornography Prevention Act of 1996 (CPPA), which banned content that “appears to be” child pornography but was produced by means other than using real children, such as through the use of youthful-looking adult actors or computer-imaging technology. The CPPA also banned content that was advertised, promoted, presented, described, or distributed in such a manner that “conveys the impression” that it depicts child pornography.

In 2002, the Supreme Court decided the case of Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002), which addressed a facial challenge to the CPPA brought by an adult-entertainment trade association and other producers of sexually explicit content. The challengers argued that the “appears to be” and “conveys the impression” language was so vague and overbroad that the law would chill the creation of First Amendment-protected works.

Challengers pointed to the Academy Award-winning film American Beauty, which uses young-looking adult actors to depict a teenage girl engaged in sexual relations with a middle-aged man, and certain remakes of the Shakespearean tragedy Romeo and Juliet, which depict the sexual lives of teenagers, as examples of artistic works that technically violate the CPPA. Should the law remain intact, challengers argued, these types of works may not be created for fear of liability.

In defending the law, the Government argued that virtual child pornography needed to be regulated because, although it does not directly harm children, it threatens children in indirect ways, including by “whet[ting] the appetites” of child molesters and drumming up demand for creation of such materials. The Government also argued that as imaging technology improved, it would become increasingly difficult to prosecute those who create content depicting real children, as experts would encounter more and more difficulty in determining what was real and what was fake.

The Supreme Court sided with the challengers and struck down the CPPA on First Amendment grounds. The Court noted that although Miller v. California, 413 U.S. 15 (1973) permits the banning of “obscene” material, the CPPA was not limited to obscenity and would ban content (like American Beauty and Romeo and Juliet) that had redeeming artistic value. Nor did the law implicate the concerns addressed in New York v. Ferber, 458 U.S. 747 (1982), which recognized the State’s interest in stamping out content produced by child sexual abuse regardless of its artistic value, because no real children were involved in the production of virtual child pornography.

The Supreme Court rejected the Government’s concerns that virtual child pornography might lead indirectly to abuse of minors, stating “the causal link is contingent and indirect.” The Court also found unavailing the Government’s contention that without the CPPA it would be unable to effectively prosecute those who produce real child porn, holding “the argument … [that] protected speech may be banned as a means to ban unprotected speech … turns the First Amendment upside down.”

In a concurring opinion, Justice Thomas reasoned that “technology may evolve to the point where it becomes impossible to enforce actual child pornography laws,” in which case “the Government should not be foreclosed from enacting a regulation of virtual child pornography that contains an appropriate affirmative defense or some other narrowly drawn restriction.” Three other justices similarly noted that “rapidly advancing technology” may change the Court’s ultimate holding and that Congress was not required to “wait for the harm to occur before it can legislate against it.”

We may have reached that point today. The proliferation of AI content virtually indistinguishable from real content suggests a renewed need for federal regulation. And the next time SCOTUS considers these issues, the result may be different.

For instance, Congress could pass a law narrowly tailored to banning only the “hard core of child pornography” rather than works with arguable artistic value like Romeo and Juliet, as Chief Justice Rehnquist’s dissent suggested. Alternatively, a law could shift the burden to the accused to prove that the speech is lawful by showing the materials were not produced using actual children – an idea that the Free Speech Coalition majority suggested might have merit if the legislation were drafted appropriately.

Requiring creators of solely AI-generated content to provide proof that no children were involved, and to clearly label their products as such (e.g., “made exclusively with virtual images”), seems neither problematic under the First Amendment nor a particularly heavy lift for producers and distributors. Given the compelling interest in protecting our nation’s children, and the available options for crafting a constitutional version of the CPPA, we should expect to see regulation in this field in the very near future.


