Evidence, Ethics/Professional Responsibility

Feb. 4, 2026

AI-generated evidence in California courts: Authentication, hearsay and professional competence

Marshall R. Cole

Trial Attorney
Nemecek & Cole

Email: mcole@nemecek-cole.com

In May 2025, an Arizona judge permitted what appears to be the first AI-generated victim impact statement in American legal history. Christopher Pelkey, murdered three years earlier in a road rage incident, "spoke" to the court through an AI recreation of his face and voice. Judge Todd Lang praised the presentation, stating "I loved that AI," and sentenced defendant Gabriel Horcasitas to 10.5 years for manslaughter--18 months beyond the prosecution's request. Defense counsel has since filed an appeal arguing that the AI-generated video unduly influenced sentencing.

This case represents more than a novel application of technology. It signals the arrival of AI-generated evidence in American courtrooms and presents fundamental challenges to established evidentiary frameworks. California practitioners must now navigate authentication requirements designed for an analog era, understand new transparency obligations under state law, and fulfill evolving professional competence standards--all while the technology continues to advance at an accelerating pace.

I. Authentication challenges under traditional evidentiary rules

California Evidence Code section 1400 defines authentication of a writing: the proponent must introduce "evidence sufficient to sustain a finding that it is the writing that the proponent claims it is." Section 1401 makes that authentication a prerequisite to admission. These provisions function adequately for traditional documents with identifiable human authors and verifiable chains of custody. They prove far more problematic when applied to content generated by artificial intelligence systems.

The traditional authentication methods enumerated in Evidence Code sections 1410 through 1421 strain under the weight of AI-generated evidence. Who can lay the foundation for an AI-generated document? The individual who crafted the prompt? The data scientist who trained the underlying model? Neither may have direct knowledge of the specific output's accuracy or provenance.

Federal rulemakers have begun addressing these challenges. The Advisory Committee on the Federal Rules of Evidence has proposed amendments specifically targeting AI-generated outputs. The draft would expand Rule 901(b)(9) to require proponents to demonstrate that AI outputs are "reliable" rather than merely "accurate," describe the training data and software employed, and prove the system produced reliable results. For suspected deepfakes, the committee has proposed a burden-shifting framework: The objecting party must first establish that a jury could reasonably find the evidence was manipulated; if successful, the burden shifts to the proponent to prove authenticity by a preponderance of the evidence.

Additionally, the committee has proposed new Rule 707, which would subject AI-generated evidence to standards similar to expert testimony under Rule 702. This approach would require validation of inputs, ensure opposing parties can examine the AI system's functionality, and determine whether the process has been validated under sufficiently similar circumstances.

California has not yet adopted corresponding amendments to its evidence code. Practitioners must therefore attempt to fit AI-generated evidence into authentication frameworks developed decades before such technology existed.

II. California's legislative response: Assembly Bill 2013

California is making it easier for potential litigants to understand and test the reliability of generative AI systems through Assembly Bill 2013, the Generative Artificial Intelligence Training Data Transparency Act, which takes effect Jan. 1, 2026. AB 2013 mandates transparency regarding the datasets used to train generative AI systems.

The statute applies to any developer who "designs, codes, produces or substantially modifies" a generative AI system made available to Californians, whether through free or commercial channels. Covered developers must publicly post detailed information about their training data, including data sources and ownership, the types of data employed, intellectual property status (copyrighted, trademarked, patented or public domain), whether datasets contain personal information as defined by the California Consumer Privacy Act, the presence of synthetic data, and data processing methodologies. These disclosure requirements apply retroactively to systems released or substantially modified on or after Jan. 1, 2022.

For litigators, AB 2013 creates valuable discovery opportunities. When opposing counsel offers evidence allegedly generated by a particular AI system, that system's developer must have publicly documented its training data, methodologies and known limitations. This documentation may reveal biases in training data, demonstrate the system's unsuitability for the claimed purpose, or provide grounds for authentication challenges. The statute's enforcement mechanism operates through the California Attorney General's consumer protection authority, with potential penalties for non-compliance.

III. The hearsay paradox: Machine-generated outputs as non-statements

While AI-generated evidence faces significant authentication hurdles, it often bypasses hearsay objections entirely through a counterintuitive doctrinal pathway. The hearsay rule, as defined in Evidence Code section 1200, applies only to statements made by a person, and case law has consistently held that machines cannot be declarants. Because machine-generated outputs lack a human declarant, they fall outside the rule's scope.

This framework creates a clear path for AI evidence. The more autonomous the AI system--the less human intervention in generating outputs--the more readily it avoids hearsay scrutiny. Pure AI outputs may be offered for the truth of the matter asserted without satisfying any hearsay exception, while human-drafted evidence must navigate the complexities of exceptions and, in criminal cases, Confrontation Clause requirements.

IV. Professional competence in the AI evidence era

Rule 1.1 of the California Rules of Professional Conduct requires attorneys to perform legal services with competence, applying the learning and skill reasonably necessary for the matter. In 2026, this competence requirement necessarily encompasses basic literacy regarding AI capabilities and limitations.

Attorneys need not become data scientists or machine-learning experts. However, competent practice requires the ability to recognize circumstances suggesting potential AI involvement in evidence creation, understand the general capabilities and limitations of current generative AI systems, know when to retain forensic or AI experts for authentication challenges, and understand AB 2013's disclosure requirements and their discovery implications.

The dead can now speak in courtrooms. Our responsibility as advocates and officers of the court is to ensure we can distinguish authentic voices from sophisticated simulations--and that our evidentiary system maintains the capacity to make that distinction as technology continues advancing. The Pelkey case will not be the last instance of AI-generated evidence appearing in litigation. It may not even prove the most controversial. As generative AI systems become more sophisticated and accessible, such evidence will appear across the full spectrum of legal proceedings. The attorneys who navigate this landscape successfully will be those who combine technological understanding with a commitment to professional integrity.

Marshall R. Cole is a trial attorney at Nemecek & Cole.
