
self-study / Torts

Sep. 11, 2024

Legal liability concerns in artificial intelligence: What you need to know

Timothy Spangler

Partner, Practus, LLP

Artificial intelligence (AI) is becoming ever more interwoven into the fabric of our daily lives. The explosive rise of these technologies, ranging from large language models to autonomous vehicles, poses novel challenges that our pre-existing legal frameworks may be ill-equipped to handle. Enter California's proposed AI law, Senate Bill 1047 (SB 1047), which admirably seeks to create new guardrails around AI deployment and governance. Unsurprisingly, SB 1047 is a potential double-edged sword, laying the groundwork for future innovation but also running the risk of stifling it.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (to use its full name) has been a topic of fervent debate within Silicon Valley over the last several months, dividing opinion within the highest echelons of the tech community. Elon Musk threw his support behind SB 1047, saying it was a "tough call" but that the bill should be passed, while Marc Andreessen argued that the new regulations would have a chilling effect on AI development, especially on open-source models.

In short, SB 1047 attempts to construct a comprehensive framework for AI regulation, establishing a series of guidelines aimed at protecting consumers, ensuring transparency and mitigating the societal risks of AI deployment. Most will readily agree that accountability and fairness in AI decision-making are laudable goals. We would all likely benefit from requirements that companies disclose when we are interacting with AI systems, offer greater clarity on how AI-based decisions are made and perform regular audits of the AI systems we interact with. With AI increasingly being used in critical areas--such as loan approvals, hiring processes and criminal sentencing--understanding how an algorithm arrives at its decision is crucial. The black-box nature of many AI systems has long been a challenge, with deep learning models, for example, offering little insight into their decision-making processes.

Any new legislation, including SB 1047, inevitably runs the risk of stifling innovation, particularly for startups that lack the resources of tech giants like Google, Microsoft or Meta. Imposing stringent requirements could inadvertently create a regulatory moat that only the wealthiest companies can afford to navigate.

Forcing companies to provide detailed explanations to regulators, for example, could slow down the pace of innovation or even deter some businesses from exploring AI altogether.

AI is also a global field, so unilateral efforts by California to regulate it could have ripple effects across the United States and beyond. Silicon Valley is the world's preeminent tech hub, but Sacramento's regulatory approach could prompt local companies to relocate to jurisdictions with more lenient laws.

Further, technologies such as reinforcement learning, generative adversarial networks and quantum computing are progressing at breakneck speeds. Can any law crafted in 2024 adequately govern technologies that may look entirely different by 2025 or 2030? SB 1047 could rapidly become outdated or require frequent revisions, creating uncertainty for businesses trying to navigate an already complex regulatory landscape. Many observers also argue that ad hoc state-based legislation such as SB 1047 could contribute to a fragmented regulatory environment for AI within the United States, and that Washington should take the lead here.

Ultimately, the future of AI regulation will require a delicate balancing act--one that promotes responsible innovation while protecting the rights and well-being of individuals. SB 1047 may not be perfect, but it could represent a crucial focal point in the ongoing debate over how best to regulate the transformative power of AI.

It is important, however, to bear in mind that under current U.S. law, creators of an AI tool that causes damage can already be held legally responsible under a range of legal theories and regulations, depending on the nature of the harm, the use of the tool and the relationship between the AI creator and the user.

If the AI tool is considered a "product," creators could be held responsible under product liability law, which allows for liability when a product has a defect that makes it unreasonably dangerous. AI creators could also be liable under a traditional theory of negligence, which requires proving that the creators owed a duty of care to the plaintiff, that they breached that duty by failing to act with reasonable care in the design, development or deployment of the AI, that the breach caused harm to the plaintiff, and that the harm was a foreseeable consequence of the creators' actions. In some cases, where AI is seen as an inherently dangerous product or activity, courts might even impose strict liability, holding AI creators responsible for the harm caused by the AI regardless of their intentions or how careful they were.

In addition, a variety of existing statutes and regulations governing technology, such as data privacy and anti-discrimination laws, already apply to AI creators and could currently be a source of liability if their tools violate them. For example, the Federal Trade Commission (FTC) and other governmental agencies may investigate or bring enforcement actions if AI tools engage in deceptive practices or violate consumer protection laws. The Securities and Exchange Commission (SEC) has been conducting targeted examinations of the use of AI and machine learning by financial services firms and has also revised its rules to better cover the use of predictive data analytics by investment advisers and broker-dealers.

Critics might point out, however, that laws implemented before the current generation of AI arrived on the scene are not fully equipped to address the consequences of this technology being deployed at scale today. One difficulty in holding AI creators responsible could be establishing clear causation. AI systems can act autonomously and unpredictably, making it hard to pinpoint whether the creators' actions directly caused the harm. Current law also does not recognize AI systems as legal entities, so liability must necessarily be directed at their human creators and operators, and at the companies that employ them.

The fundamental issue, therefore, in applying existing law to AI is the question of "who" is responsible for the AI's behavior. In traditional product liability law, for example, liability usually lies with the manufacturer or seller. But AI systems, particularly those capable of learning and adapting over time, blur the lines of responsibility. Who is liable when an autonomous drone using AI malfunctions during a delivery, causing injury or property damage? The drone manufacturer, the software developer or the company that operates the drone fleet? What if the AI system, over time, changes its behavior based on data inputs from its environment, leading to unforeseen harm? Should the original developer be held responsible for the AI's self-learned decisions? AI systems result from the contributions of multiple parties--software developers, data scientists, hardware manufacturers and service providers--creating a complex web of potential liability.

As AI systems continue to evolve, our existing legal frameworks will need to be adapted or supplemented to address the unique challenges they present. One possibility is to establish a new category of liability specifically for AI systems capable of acting autonomously, which would hold developers or operators to higher standards of transparency and accountability.

Another approach could involve a shift toward risk-based regulation, similar to the European Union's AI Act, which classifies AI systems based on the level of risk they pose to society. This would allow regulators to apply more stringent liability rules to high-risk AI applications, such as those used in healthcare or criminal justice, while giving more leeway to lower-risk systems.

There is also the potential for AI insurance markets to evolve, with companies deploying AI systems required to purchase liability insurance to cover potential harms caused by their products. This could distribute the financial risk while also incentivizing companies to develop safer, more reliable AI systems.

AI is a transformative technology with enormous potential benefits, but it also presents unique legal and regulatory challenges. As courts and lawmakers grapple with these issues, the legal landscape for AI will need to evolve in step with the technology. Whether through adapting existing liability standards or creating entirely new regulatory frameworks, the challenge will be finding a balance that holds AI creators accountable without stifling innovation.



