
Technology, Criminal

May 1, 2026

Who's guilty when AI helps kill?

As AI increasingly provides actionable assistance in dangerous contexts, the law must confront whether existing concepts of criminal liability are adequate when technology functionally aids violent crime without possessing human intent.

Laura Sheppard

Laura Sheppard, J.D., Ph.D., is a post-conviction criminal defense specialist, expert witness, and criminal law instructor at Thomas Jefferson School of Law and Grossmont-Cuyamaca Community Colleges.



Last year, a young man committed a mass shooting at Florida State University in Tallahassee, killing two people and injuring many more. Earlier that day, he had asked ChatGPT how the country would react to a shooting at his school specifically, and when the student union would be busiest. He had also asked ChatGPT how to operate his gun and which ammunition to use.

It is not hard to see how, if it were a person, ChatGPT's responses would make it an accomplice to murder. The young shooter had made his intentions clear, and ChatGPT's advice aided and encouraged him. According to the New York Times, Florida's attorney general has now opened a criminal investigation into the AI. Of course, the cops can't arrest a disembodied chatbot, but the AG is considering criminally prosecuting ChatGPT's parent corporation, OpenAI, or its human developers.

As a defense attorney who has represented a mass shooter here in California, I've wrestled with the ethical and psychological questions surrounding mass gun violence, not least of which is: who else is responsible besides the gunman? When an adolescent boy with an underdeveloped prefrontal cortex emulates prior shooters, reacting to pressure and pain he can no longer cope with by aiming a gun at his peers, is the manosphere at fault for toxically brainwashing him? Are his parents responsible if their actions include abuse, neglect or gun exposure? Is his school if it fails to prevent or address bullying? Are his peers if they bully, isolate or egg him on? And today, can his AI chatbot be liable for giving him helpful tips and failing to talk him out of his crime or raise an alarm?

Under California law, an accomplice to murder must share the principal actor's criminal purpose, either express or implied malice. Without the actual desire to facilitate a criminal end result, a person's assistance, no matter how substantial, is not morally blameworthy. We can see an example of this in the foundational case of People v. Lauria (1967) 251 Cal.App.2d 471. Lauria ran a telephone answering service that call girls used to receive calls from their customers. Lauria was indicted for conspiracy and aiding and abetting the crime of prostitution. He admitted he knew some of his clients were prostitutes, and "tolerated" them "as long as they paid their bills," but he gave them no special advantage over his law-abiding clients. The 2nd District held that without the actual intent to further a crime, and without at least a stake in it (such as charging them a higher price), Lauria's knowing assistance was not criminal.

Similarly, a computer program--no matter how easy it is to anthropomorphize it--provides assistance without any stake or intent to further the user's goals. It "acts" only by generating ever more complex responses to queries, with no accompanying desire for the human user to succeed. At an existential level, software cannot "want" to help, despite being programmed to do so, and in the case of a chatbot, despite its autonomously offering increasingly fine-tuned advice beyond what the user even asks for. As ChatGPT itself helpfully explained when I asked, "when I 'offer to help,' I'm not forming goals or adopting objectives, I simulate helping behavior that best fits 'be helpful to this query' based on training and constraints."

But legally, what is the difference between wanting to help and expressly helping with every appearance of being goal-directed? The Enmund-Tison line of SCOTUS precedents, which has guided California's revision of the Felony Murder Rule (with retroactive relief now codified at Penal Code section 1172.6), is instructive. Enmund v. Florida (1982) 458 U.S. 782 and Tison v. Arizona (1987) 481 U.S. 137 held that an accomplice to an underlying felony may be held liable for a resulting murder when their aiding and abetting is accompanied by "reckless indifference to human life." Gee, that fits--ChatGPT itself admits it is indifferent to the outcome of its actions, even when it knows it is giving helpful advice to aid a mass shooter. But this indifference standard applies only to the Felony Murder Rule, inapplicable here because AI still lacks the very human "guilty mind" required for the underlying felony offense.

Ultimately, we must admit that without human will, no matter how much a computer's actions functionally aid in a crime, our current conception of criminal liability cannot extend to a chatbot that can't form a morally blameworthy desire, just as it can't extend to an animal or a young child. As a result, prosecutors in such cases can only look for humans behind the AI to blame--like its developers or corporate owners. But these people may be able to challenge proximate causation and, inevitably, would have a defense similar to Lauria's: when AI is made to serve lawful functions, and its developers have no stake or desire to aid in criminal outcomes, we can't prove the mens rea required of an accomplice, no matter how foreseeable occasional tragedies may be.

Reining in AI must therefore be a regulatory responsibility. But is that sufficient when it can be used to provide such capable deadly assistance? From my perspective, as an attorney, a criminal law professor and an academic researcher of criminal policy, it is not. I'm not ready to cede to AI the right to freely aid and abet crimes with impunity. How, then, should our conception of criminal liability shift? Why not prosecute the functional equivalent of a guilty mens rea where, as with the Florida student, the chatbot has full knowledge of a highly foreseeable criminal act and nonetheless "generates responses" that unequivocally aid in a serious crime? This approach could give the government sharper teeth to limit the deadly power of AI to embolden the worst human impulses.

But it raises many questions, best explored in another column. For example, whom would we punish and how? And in the winner-take-all race to develop AGI (artificial general intelligence) that can functionally run the world, would criminal prosecutions impede our developers, giving the advantage to possibly less-ethical developers in rival nations?
