Feb. 6, 2026
Weakening Section 230 would chill online speech
Enacted 30 years ago through careful legislative deliberation, Section 230 remains the internet's strongest bulwark for free expression, protecting the services--and users--that make online speech possible.
David A. Greene
Senior Counsel
Electronic Frontier Foundation
Email: davidg@eff.org
David is civil liberties director at EFF. He teaches News Media Law in the Journalism Department at San Francisco State University and First Amendment at University of San Francisco Law School.
Section 230, "the 26 words that created the internet," was
enacted 30 years ago this week. It was no rush job--rather, it was the result of
wise legislative deliberation and foresight, and it remains the best bulwark to
protect free expression online.
The internet lets people everywhere connect, share ideas
and advocate for change without needing immense resources or technical
expertise. Our unprecedented ability to communicate online--on blogs, social
media platforms, and educational and cultural platforms like Wikipedia and the
Internet Archive--is not an accident. In writing Section 230, Congress
recognized that for free expression to thrive on the internet, it had to
protect the services that power users' speech. Section 230 does this by
preventing most civil suits against online services that are based on what
users say. The law also protects users who act like intermediaries when
they, for example, forward an email, retweet another user or host a comment
section on their blog.
The merits of immunity--both for internet users who rely
on intermediaries, from ISPs to email providers to social media platforms, and
for internet users who are themselves intermediaries--are readily apparent when
compared with the alternatives.
One alternative would be to provide no protection at all
for intermediaries, leaving them liable for anything and everything anyone says
using their service. This legal risk would essentially require every
intermediary to review and legally assess every word, sound or image before
it's published--an impossibility at scale, and a death knell for real-time
user-generated content.
Another option: giving protection to intermediaries only
if they exercise a specified duty of care--for example, holding an intermediary
liable if it fails to act reasonably in publishing a user's post. But
negligence and other objective standards are almost always insufficient to
protect freedom of expression because they introduce significant uncertainty
into the process and create real chilling effects for intermediaries. That is,
intermediaries will choose not to publish anything remotely provocative--even if
it's clearly protected speech--for fear of having to
defend themselves in court, even if they are likely to ultimately prevail. Many
Section 230 critics bemoan the fact that it prevented
courts from developing a common law duty of care for online intermediaries. But
the criticism rarely acknowledges the experience of
common law courts around the world, few of which adopted an objective standard,
and many of which adopted immunity or something very close to it.
Another alternative is a knowledge-based system in which
an intermediary is liable only after being notified of the presence of harmful
content and failing to remove it within a certain amount of time. This
notice-and-takedown system invites tremendous abuse, as seen under the Digital
Millennium Copyright Act's approach: It's too easy for someone to notify an
intermediary that content is illegal or tortious simply to get something they
dislike depublished. Rather than spending the time
and money required to adequately review such claims, intermediaries would
simply take the content down.
All these alternatives would lead to massive depublication
in many, if not most, cases, not because the content deserves to be taken down,
nor because the intermediaries want to do so, but because it's not worth
assessing the risk of liability or defending the user's speech. No intermediary
can be expected to champion someone else's free speech at its own considerable
expense.
Nor is the United States alone in eschewing "upload
filtering," a requirement that someone review content before publication.
European Union rules avoid it as well, recognizing how costly and burdensome it
is. Free societies recognize that this kind of pre-publication
review will lead risk-averse platforms to nix anything that anyone anywhere
could deem controversial, leading us to the most vanilla, anodyne internet
imaginable.
The advent of artificial intelligence doesn't change this.
Perhaps there's a tool that can detect a specific word or image, but no AI can
make legal determinations or be prompted to identify all defamation or
harassment. Human expression is simply too contextual for AI to vet; even if a
mechanism could flag things for human review, the scale is so massive that such
human review would still be overwhelmingly burdensome.
Congress' purposeful choice of Section 230's immunity is
the best way to preserve the ability of millions of people in the U.S. to
publish their thoughts, photos and jokes online, to blog and vlog, post, and
send emails and messages. Each of those acts requires numerous layers of online
services, all of which face potential liability without immunity.
This law isn't
a shield for "big tech." Its ultimate beneficiaries are all of us who want
to post things online without having to build the tools ourselves, and to
read and watch content that others create. If Congress eliminated Section 230
immunity, for example, we would be asking email providers and messaging
platforms to read and legally assess everything a user writes before agreeing
to send it.
For many critics of Section 230, the chilling effect is
the point: They want a system that will discourage online services from
publishing protected speech that some find undesirable. They want platforms to publish
less than what they would otherwise choose to publish, even when that speech is
protected and nonactionable.
When Section 230 was passed in 1996, about 40 million
people used the internet worldwide; by 2025, estimates ranged from five billion
to north of six billion. In 1996, there were fewer than 300,000 websites; by
last year, estimates ranged up to 1.3 billion. There is no workforce and no
technology that can police the sheer volume of everything that everyone says.
Internet intermediaries--whether social media platforms,
email providers or users themselves--are protected by Section 230 so that speech
can flourish online.