This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

Technology

Apr. 24, 2025

The ugly freedom of 'feeding speech'

How big tech defends algorithmic addiction in the name of the First Amendment.

Victor S. Dorokhin

Email: victordorokhin1@gmail.com

Victor S. Dorokhin, PhD, is a legal expert, attorney, lecturer, and the author of the book "Law, Morality, and Economics."



In October of last year, the Daily Journal published my article titled "Why big tech regulation can't wait," in which I outlined the urgent need for legal regulation of major online platforms and the profound harm their services can inflict on minors. I argued that platforms like Facebook, Instagram, TikTok and Snapchat are not just vehicles of communication or entertainment - they have become environments where vulnerable adolescents are exposed to a constant stream of psychologically damaging content. These platforms' algorithms are designed to maximize engagement, often at the cost of users' mental health, and particularly at the expense of children who are not equipped to navigate such manipulative systems.

The topic of Big Tech regulation has once again come to my attention with the recent passage of California's SB 976, around which a legal battle is now unfolding in the federal courts of the Ninth Circuit.

This bill was signed into law by Gov. Gavin Newsom in September 2024 and is vividly titled the "Protecting Our Children from Social Media Addiction Act."

In this legislation, the authors proposed several groundbreaking provisions that could fundamentally reshape the way social media platforms interact with minors.

The key point is that the law introduced the concept of an "addictive feed" - any part of a website or application that displays media (such as videos or posts) recommended or ranked based on users' personal data or activity. This refers to any content stream tailored to users based on information such as their clicks, likes, or device data.

The bill prohibits operators of addictive services from providing such channels to minors unless certain conditions are met, such as obtaining parental consent. It also restricts notifications to minors between midnight and 6 a.m. and during school hours, unless parental consent is granted, addressing manipulative push notifications. A parent-controlled mechanism is introduced to let parents manage their child's platform interactions, giving them real tools to protect their children online.

Lastly, the bill ensures that parental consent doesn't waive future claims for harm, holding platforms accountable for any damage to minors' mental health or well-being, even if parental controls are used.

Together, I think, these four provisions mark a bold legal shift toward prioritizing the mental health of minors over the engagement-driven design of today's digital platforms. Thus, SB 976 is a direct response to the growing understanding of the negative role of social media in the deterioration of minors' mental health. It targets the addictive architecture of these platforms and seeks to curb their most manipulative elements. In doing so, it shows that meaningful reform is possible, even in the face of intense lobbying pressure.

However, as I suggested in my October article, such legislation would inevitably face legal pressure from the Big Tech lobby - SB 976 was immediately challenged by NetChoice, a trade association representing five companies: Google (YouTube); Meta (Facebook and Instagram); Nextdoor; Pinterest; and X.

In its lawsuit, NetChoice high-mindedly describes its mission as promoting online commerce and speech, increasing access and opportunity for consumers over the Internet, and minimizing burdens that can prevent businesses from making the Internet more accessible and useful. However, in reality, NetChoice essentially acts as a political influence tool for the largest digital corporations. The organization operates under the banner of protecting "freedom of speech" and "innovation," but its mission ultimately boils down to blocking any regulation that could limit the profitable but often socially questionable business models of its members.

Whether it's laws aimed at protecting children from algorithmic addiction or attempts to curb other manipulative practices in online systems, NetChoice almost always finds itself in the role of the opponent (NetChoice & CCIA v. Paxton (Texas, 2021); Moody v. NetChoice, LLC (Florida, 2021); NetChoice v. Reyes (Utah, 2023), etc.). Using courts, lobbying mechanisms, and public policy pressure, the organization shields the interests of Big Tech under the rhetoric of "digital freedoms," which, upon closer inspection, often turns out to be a defense of corporate arbitrariness.

In this case - NetChoice v. Bonta - this lobbying group has once again employed its usual tactic of invoking the First Amendment to argue that SB 976 is unconstitutional.

It is important to note from the outset that NetChoice chose a dual strategy in challenging the law, filing both a facial challenge - arguing that the challenged provisions are unconstitutional in all their applications - and an as-applied challenge, claiming that these provisions violate the rights of specific association members and their digital services in particular circumstances. NetChoice also argues that the law is unconstitutionally vague, particularly as it bears on freedom of speech, an area that demands stricter standards of clarity.

In particular, NetChoice argues that the editorial and curatorial decisions made by social media platforms - such as how to structure user feeds or moderate content - constitute protected speech. The core demand, then, is to have algorithmically personalized feeds - like those Facebook and Instagram constantly offer - recognized as a form of direct expression protected by the First Amendment, just like the news coverage that print media provide every day.

Let me coin a new term that I believe is quite fitting - NetChoice's lawyers are fighting to secure for their Big Tech clients what might be called a freedom of "feeding speech." That is, an unrestricted right to "feed" minors a round-the-clock, algorithmically curated stream of personalized content. According to their view, this so-called "feeding speech" deserves the same protection under the First Amendment as traditional forms of expression.

To support this claim, NetChoice's lawyers rely on the opinion in Moody v. NetChoice, which they interpret as confirming their position that algorithmic content processing is protected by the First Amendment. Consequently, they argue, any restriction on access to algorithmic feeds amounts to a limitation on free speech. Moody v. NetChoice concerns the challenge to the constitutionality of laws passed by the states of Florida and Texas, which sought to prohibit major internet platforms (such as Facebook, X, and YouTube) from moderating content or "censoring" users based on political or ideological grounds. NetChoice filed suit, arguing that these laws violate the First Amendment because algorithmic feeds and content moderation constitute a form of editorial expression. Like NetChoice v. Bonta, this lawsuit was a facial challenge - an attempt to strike down the law entirely.

This was a landmark case for NetChoice and its stakeholders, as it had the potential to establish an important precedent, given that it reached the Supreme Court. I believe the lawyers for NetChoice were eager to secure a ruling similar to the landmark case Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), in which the Court unanimously (9-0) held that a Florida law was unconstitutional because it violated the freedom of the press guaranteed by the First Amendment.

However, while it may not seem obvious at first glance, NetChoice has essentially lost this case.

First, they failed to have the Florida and Texas laws struck down as facially unconstitutional - that is, declared inherently incompatible with the First Amendment regardless of how they are applied. The Supreme Court remanded the case to the lower courts for further consideration on an as-applied basis, and crucially, the Court did not endorse the idea that content moderation on platforms is unequivocally a form of editorial speech fully protected by the First Amendment. Moreover, the decision revealed a split among the Justices, as reflected in the numerous concurring and dissenting opinions.

For example, Justice Barrett noted that a function qualifies for First Amendment protection only if it is inherently expressive (Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 568 (1995)). However, in her opinion, with the advancement of technologies such as recommendation algorithms and artificial intelligence, it is becoming increasingly difficult to determine whether platform actions fall under the scope of the First Amendment. If the system merely suggests content to users based on their preferences, or if AI makes decisions independently without human input, the link between such actions and the expression of human will becomes less clear. As a result, First Amendment protection in such cases may be called into question.

Justice Jackson also noted that not every action taken by social media platforms will be classified as speech protected by the First Amendment. In her opinion, courts must carefully analyze not only which entities fall under regulation, but also how the regulated actions function, before deciding whether they constitute a form of speech protected by the First Amendment.

Justice Thomas questioned the very possibility of the court considering such cases under a facial challenge, stating, "It is high time the Court reconsiders its facial challenge doctrine." Furthermore, Thomas believes that lower courts should consider the common carrier doctrine when handling such cases. This means that if a law concerns platforms providing public services or acting as intermediaries for the exchange of information, courts may apply additional standards to assess how such platforms should operate under regulation - specifically, they should not be equated with editorial boards and editorial policies.

Finally, Justice Alito stated (and was joined by Gorsuch and Thomas) that platforms largely rely on opaque, automated algorithms to moderate content, without human involvement or understanding of how decisions are made. He challenges the assumption that such algorithmic actions are inherently expressive in the same constitutional sense as a human editor's choices. He also argued that modern social media platforms are the 21st-century equivalent of the old "public square" and should be considered common carriers.

In sum, as this analysis shows, NetChoice's position in its direct challenge to laws that limit the ability of social media platforms to algorithmically structure user feeds does not stand up to serious scrutiny. While the Supreme Court still holds traditional views on editorial speech, it is becoming increasingly open to considering alternative doctrinal approaches. Of course, it takes time to fully recognize the emergence of a new problem in the regulation of algorithmic feeds, a problem that requires a new legal framework and new legal reasoning. In my view, however, the U.S. Supreme Court is moving in the right direction and will sooner or later come to the realization that the daily algorithmic processing of petabytes of data by artificial intelligence bears no meaningful resemblance to the expressive work involved in creating traditional media.

Now, let us take a closer look at what NetChoice is doing in the present lawsuit, where they attempt to defend what can only be described as a claimed "freedom of feeding speech." Have they drawn any lessons from the Supreme Court's decision in Moody v. NetChoice? From what I see in the complaint - the answer is no.

I have not found a single direct substantive argument in support of their challenge to the California law as applied, even though such a challenge is explicitly stated in their filing. Instead, they continue to pursue their untenable facial challenge, showing no hesitation in misleading the court as to the nature and strength of their claims. Such stubbornness and one-sidedness in the claims, coming after the Supreme Court's guidance on facial challenges in Moody v. NetChoice, may not amount to an abuse of discretion on the part of the plaintiff, but it should tell the court deciding this case a great deal about the good faith of NetChoice and its members.

Finally, let's briefly address the second part of the plaintiff's argument - the claim that SB 976 violates the right of minors to access information. NetChoice cites here the well-known case Brown v. Entertainment Merchants Association, 564 U.S. 786 (2011), where the U.S. Supreme Court ruled that a California law prohibiting the sale or rental of "violent video games" to minors without parental involvement was unconstitutional. In this case, the Court held that video games, like books, movies, and music, are a form of expression protected by the First Amendment. The state's attempt to restrict minors' access to certain types of content violated free speech rights.

In many ways, however, this decision came about because the state of California, which relied on studies purportedly proving the harms of violence in video games, failed to prove these material facts. While much of the research was conducted by Prof. Craig Anderson, Justice Scalia noted that no court had ever found these studies persuasive - and with good reason. Furthermore, even the authors of these studies acknowledged the methodological weaknesses of their work. In the end, considering that video games are inherently expressive, meaning they meet the criteria for protected expression under the First Amendment, the state's attempts to restrict minors' access to them without convincing evidence of actual harm were deemed unconstitutional.

In the case of NetChoice v. Bonta, it seems to me that the situation is different - regarding the harm of social media to minors, there are official documents from the American Psychological Association. The APA highlights that adolescents are particularly susceptible to negative online experiences, such as cyberbullying and exposure to harmful content, which can lead to increased anxiety, depression, and poor body image. The organization recommends that adolescents be routinely screened for signs of problematic social media use that may impair their daily functioning. Furthermore, the APA calls on social media companies to take responsibility for protecting youth by implementing design improvements that reduce inherent dangers and prioritize the well-being of young users.

I wouldn't want to delve into this topic in the context of a journal article, but I believe that such data from the APA is serious enough not to be ignored.

To summarize all of the above: considering that social media undoubtedly has a negative impact on minors, and that addictive feeds lack any expressive characteristics - being instead the product of AI algorithms designed primarily to capture and hold minors' attention - I believe that no facial challenge can succeed in this case, and the NetChoice lawsuit, in its current form, should be dismissed in full.

Given that serious research has also established an undeniable link between the effects of addictive feeds and mental harm in minors, the claim should be dismissed as applied as well - assuming, of course, that the plaintiffs ever seriously develop such arguments, rather than offering abstract and vague assertions of First Amendment inconsistencies.

Otherwise, what we will get is not a fair judgment where the court takes into account the interests of all groups, including the most vulnerable, but a judgment where the interests of minors are sacrificed to the interests of large corporations focused solely on profit and nothing else.

In other words, we will end up with a world of ugly freedom of "feeding speech."


