
Technology, Corporate

Oct. 17, 2024

Why big tech regulation can't wait

Regulation of Big Tech is urgently needed due to the harmful impact of social media on mental health, particularly among minors, and the inadequacy of current laws like Section 230 of the Communications Decency Act.


Almost a quarter of a century ago, Malcolm Gladwell published "The Tipping Point," his famous book on societal tipping points. Its central idea is that small, seemingly insignificant changes can lead to significant shifts in social behavior and trends.

I believe that we are now at a tipping point regarding Big Tech regulation, and here's why. The growing concerns about inadequate regulation, the impact of social media on mental health, and the spread of misinformation create a ripe environment for change. As public awareness and criticism of Big Tech practices intensify, small shifts in policy or public opinion could lead to significant transformations in how these companies operate and how they are regulated. This moment presents an opportunity to address the pressing issues of privacy, user safety, and ethical responsibility in the digital age.

Why do I think so? Because publications on this subject keep multiplying. They vary in depth and appear in both tabloids and respected magazines, but the general idea is the same: apart from their obvious benefits, networks such as Facebook, Snapchat, Instagram, and TikTok can do significant harm. This is a clear trend.

One notable example is an article published in The New Yorker a couple of weeks ago by Andrew Solomon, a well-known writer and psychologist.

In the article "Has Social Media Fuelled a Teen-Suicide Crisis?" he recounts several devastating stories of teenagers who tragically took their own lives, highlighting the role of social media in intensifying their mental health struggles.

One of the stories follows Anna, a teenager from Colorado, who began suffering from deep insecurities, primarily fueled by her exposure to Instagram. Social media became a lens through which she harshly judged herself, leading to isolation and anxiety. Despite her parents' interventions and attempts to restrict her phone use, Anna increasingly turned to online platforms for validation, which only worsened her condition. After her death, her mother discovered that Anna had been viewing disturbing content related to self-harm and suicide, which had deeply affected her mental state.

Similarly, the story of Englyn shows how a seemingly happy and confident young girl began to spiral after exposure to troubling content on platforms like Instagram and TikTok. After being grounded and having her phone taken away, she tragically ended her life. Her parents later discovered that her feeds had been flooded with suicide-related videos, which may have played a role in her decision.

Another case tells the story of C.J., a bright and well-liked boy who, like Anna, became deeply affected by his interactions on platforms like Facebook and Instagram. He thrived on online attention but struggled internally with negative comments and pressures from the virtual world. His emotional descent was further reflected in his reliance on his phone as a source of connection and validation. Sadly, this led him to take his life, leaving a note that revealed his internal battle between wanting to be a good person and feeling overwhelmed by the darkness in his mind.

These stories collectively paint a bleak picture of how social media algorithms can push vulnerable teens deeper into despair, exposing them to harmful content at a time when they are most impressionable and emotionally fragile. While the exact link between social media use and rising suicide rates remains complex, the narratives of these teens underscore the urgent need for better safeguards and more responsible content regulation.

As I have already noted, Solomon is not alone in raising the issue of social media's impact on the psyche of minors. The American Psychological Association states that while social media can provide teens with vital opportunities for social connection, it also presents risks. Adolescents often seek validation and feedback from peers online, but their developing brains may be especially vulnerable to the negative effects of excessive social media use. Research shows that during critical developmental periods (particularly ages 11 to 13 for girls and 14 to 15 for boys), high social media use can decrease life satisfaction, while reduced use correlates with greater life satisfaction.

Dangerous content on social media, such as material promoting disordered eating or self-harm, also presents a growing concern. Algorithms designed to keep users engaged can pull teenagers into harmful echo chambers, exacerbating issues like poor body image or depression. Cyberbullying and exposure to racism and hate speech are additional risks that can severely impact mental health.
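
To see the mechanism, here is a toy simulation in Python. This is a deliberately crude sketch with made-up engagement numbers and category names, not any platform's actual algorithm; it only illustrates the feedback loop in which a recommender that maximizes engagement drifts toward whatever holds a user's attention, including harmful niches.

```python
import random

# Toy engagement-maximizing feed. All numbers are hypothetical; the
# "distressing" category is assumed to hold attention slightly longer,
# which is the only ingredient the feedback loop needs.
CATEGORIES = ["sports", "music", "fitness", "distressing"]

def engagement(category: str) -> float:
    """Hypothetical engagement signal per item shown."""
    base = {"sports": 0.35, "music": 0.35, "fitness": 0.35, "distressing": 0.55}
    return base[category] + random.uniform(-0.1, 0.1)

totals = {c: 0.0 for c in CATEGORIES}   # cumulative engagement per category
counts = {c: 0 for c in CATEGORIES}     # times each category was shown

for step in range(500):
    if step < 40 or random.random() < 0.05:
        choice = random.choice(CATEGORIES)   # brief exploration phase
    else:
        # exploit: repeat whatever engaged most on average so far
        choice = max(CATEGORIES, key=lambda c: totals[c] / max(counts[c], 1))
    totals[choice] += engagement(choice)
    counts[choice] += 1

print({c: round(counts[c] / 500, 2) for c in CATEGORIES})
# Typical output: the "distressing" share dominates the simulated feed.
```

No safety objective appears anywhere in this loop, and that is the point: nothing in a pure engagement metric distinguishes a healthy niche from a harmful one.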

All of these issues led to a Senate hearing in January of this year involving executives from several Big Tech companies, with an entirely predictable outcome.

During the Senate hearings, where the focus was on the role of social media companies in the rise of teenage suicides, the reactions from Big Tech executives, including Mark Zuckerberg of Meta, Shou Zi Chew of TikTok, and others, were notably defensive and detached from the emotional weight of the families' tragedies. The executives gave standard corporate responses, emphasizing the steps their companies were already taking to protect young users. Zuckerberg, for instance, acknowledged that technology could complicate parenting but insisted that Meta was working alongside parents to safeguard children. He mentioned that Meta's AI systems were automatically removing harmful content, claiming a "99% success rate."

Well, of course.

Taking all of this seriously, I believe the problem certainly exists and can and should be solved through legal regulation. But is that possible in today's America? Let's try to figure it out.

Right now, the main regulation of content on social networks is based on Section 230 of the Communications Decency Act (CDA) of 1996. It grants tech companies and online platforms immunity from liability for user-generated content while also allowing them to moderate content without the risk of legal repercussions. As social media and interactive services like Facebook, Instagram, TikTok, and others have grown rapidly, this legal provision has become a powerful shield, protecting them from a myriad of lawsuits related to the content shared through their platforms.

One of the central features of Section 230 is the distinction it creates between platforms and publishers. It ensures that tech companies are treated as "intermediaries" rather than publishers, which means they are not held responsible for the statements, posts, images, or other materials that users generate. Unlike traditional media--such as newspapers or television networks--which are accountable for the content they publish, online platforms are shielded from liability for the vast array of content shared by their users.

Additionally, Section 230 encourages the voluntary moderation of content by platforms. The result of this legal rule is that platforms like Twitter or YouTube can host massive amounts of user content with minimal risk. This flexibility has been central to the explosive growth of social media, where millions of users interact daily, creating a flood of content that no platform could vet or oversee entirely.

So, it is clear that Section 230 gives tech companies too much power, allowing them to avoid accountability for harmful or dangerous content spread across their platforms, such as hate speech, disinformation, or material that can lead to mental health crises. Moreover, while companies are allowed to remove harmful content, the law does not require them to do so.

The problem is that this law is long outdated: its main provisions have not changed significantly since 1996. Yet any new law to replace it may well never come into force, and here is why: technology companies have learned to lobby their interests very successfully.

Consider, for example, the much-cited success story of Chris Lehane, who has lobbied, and still lobbies, for the Big Tech sector. A former political operative known for his work in the Clinton White House and on Al Gore's presidential campaign, Lehane played a critical role in shaping the aggressive lobbying tactics of companies like Airbnb and Coinbase. His approach, described as the "dark arts" of politics, focuses on mobilizing user bases, intimidating politicians with large-scale spending, and crafting public narratives that align Big Tech's interests with broader political goals.

His efforts have usually succeeded: Airbnb resolved its tax issues first in San Francisco and then across the country, and Coinbase won favorable treatment of its operations and improved the perception of cryptocurrency in general.

In addition to his work with Airbnb and cryptocurrency companies, Chris Lehane has also recently been hired by OpenAI to help with lobbying efforts as the company seeks to navigate the complex political landscape surrounding artificial intelligence (AI) regulation. OpenAI, recognizing Lehane's effectiveness in shaping public narratives and political strategies, brought him on as their Vice President of Global Affairs to help the company influence AI-related policy discussions.

As you'd expect, such efforts don't go to waste in the AI regulatory arena either: we all know what Gavin Newsom did (or rather didn't do) at the end of September. He vetoed the AI safety bill, SB 1047, which aimed to establish strict safety guidelines for the development of AI technologies. Newsom emphasized that the legislation might hinder California's AI industry and its economic competitiveness; the bill had drawn significant opposition from major tech companies like Google and OpenAI.

Thus, the state of California, where practically all Big Tech companies reside, is not capable of adopting any serious regulation in this area, and I believe it will not become capable of it. All hope rests on federal regulation, but even that is in doubt while people like Chris Lehane do their job (brilliantly, I must say) lobbying for the interests of the new robber barons of Big Tech.

Ironically, the situation in Europe in this area of legislation is much better than in the US. The EU recently passed the Artificial Intelligence Act (Regulation 2024/1689). It is not just a forward-looking regulatory framework; it also has teeth when it comes to enforcing compliance with its provisions. The regulation enables the European Union to impose significant penalties on companies, including those based in the U.S., that fail to meet the required standards for transparency, safety, and ethics in their AI operations.

One of the most powerful tools in the EU's arsenal is the ability to levy fines of up to 7% of a company's global annual turnover (or €35 million, whichever is higher) for the most serious violations. For Big Tech companies like Meta (Facebook), Google, or OpenAI, which generate billions in annual revenue, these fines could be staggering. If Google or Meta were found to violate the AI Act, the potential fines could reach billions of dollars, given their massive revenue streams. This kind of financial penalty is not theoretical; the EU has already demonstrated its willingness to fine tech giants through the General Data Protection Regulation (GDPR). Under the GDPR, Meta was fined €1.2 billion in 2023 for data privacy violations related to the transfer of user data to the U.S., one of the largest fines ever imposed under EU regulations.
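
To make the scale concrete, here is a minimal sketch in Python of how the AI Act's turnover-based cap translates into exposure. The cap logic follows Article 99 of the Act; the company names and revenue figures are illustrative assumptions, not reported financials.

```python
# Fine cap for the most serious AI Act violations (Art. 99):
# EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
FLAT_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of a fine for a given worldwide annual turnover."""
    return max(FLAT_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

# Hypothetical turnover figures, for illustration only.
for company, turnover in [("BigSocialCo", 120e9), ("SearchGiantCo", 280e9)]:
    print(f"{company}: up to EUR {max_fine_eur(turnover) / 1e9:.1f} billion")
```

For a company with EUR 120 billion in annual turnover, the ceiling is EUR 8.4 billion; the flat EUR 35 million floor only binds for small firms.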

Moreover, the penalties under the AI Act are not limited to financial punishments. The Act mandates that companies take full responsibility for any harm caused by their AI systems, including psychological, physical, or economic damage. In cases where an AI system has caused real harm, companies could face legal action from affected individuals or groups, further compounding their liability. In addition to fines, the EU could impose restrictions on the use of AI systems, including bans or mandatory changes to the functioning of the AI systems.

These examples make clear that the AI Act has real consequences for companies that fail to adhere to its strict rules. By creating a regulatory environment where the potential financial and legal penalties are immense, the EU ensures that Big Tech companies must take their responsibilities seriously when operating within its borders.

Thus, it is quite obvious that consumers of technology products in the U.S. are in a much worse position than those in the European Union. This is not surprising, given that all these companies are based in the US and lobby for their interests here. Only the Chinese-owned TikTok has been unlucky: it was nearly banned in the US under the pretext of protecting national interests, which of course is just a pretext and not the real reason. The real reason is competition for young users.

In general, I believe that the root of the problem lies in what modern legal and economic theory calls 'opportunism.' Oliver Williamson wrote beautifully about this in his book "The Economic Institutions of Capitalism." This remarkable economist advanced the simple idea that transaction costs will always and everywhere exist because man is by nature an opportunist, forever trying to get his hands on as many resources as possible; he treated this as an axiom. It is therefore ridiculous to believe that Big Tech is free from this problem. On the contrary, it is precisely because there is no proper legal regulation that Big Tech is the most opportunistic actor in the modern American economy.

I have suggested in my book "Law, Morality and Economics" that transaction costs are not just the costs of gathering and processing information, of negotiation and decision-making, and of controlling and legally enforcing contract performance, but are also moral in nature. I call these moral costs. Who bears the moral costs in all of the situations described above? Minors, of course: their mental distress is, to some extent, the price of Big Tech's profits. These costs are imperceptible (a defining feature of moral costs) until something terrible happens, such as suicide or self-harm. Yet they exist, and we must recognize that this is not only a moral problem but also an economic one.
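
Schematically, and only as an illustrative formalization of this idea rather than a formula taken from the book, total transaction costs can be written as

$$C_{transaction} = C_{information} + C_{negotiation} + C_{enforcement} + C_{moral},$$

where the first three terms are the classical components and the last term, borne here largely by minors, registers as zero in market prices until a tragedy makes it visible.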

Thus, there is an urgent need for legislation at both the federal and state levels that seeks to eliminate the moral costs borne by Big Tech's users (especially minors). We cannot turn a blind eye to this problem and reassure ourselves that the companies will take care of it themselves; from a law and economics perspective, that is simply not plausible.

Moreover, algorithms are increasingly created not by humans but by artificial intelligence systems. We might at least believe that human-created algorithms can take users' interests into account, since they were written by homo sapiens who occasionally have a conscience; AI has no such constraint and will consider no one's interests other than those of the company's owners.

Let's face it: the whole Big Tech business is about money. It's no coincidence that OpenAI has declared itself a commercial company. And if Facebook knows how we live and relax, OpenAI knows what we work on and how. Soon we will feel it.
