
Ethics/Professional Responsibility

Nov. 24, 2025

How lawyers can navigate AI and cybersecurity

As generative AI and digital systems reshape legal practice--particularly in bankruptcy courts--they raise pressing cybersecurity risks and compel a broader reevaluation of attorneys' professional and ethical obligations.

Selwyn D. Whitehead

Founder
The Law Offices of Selwyn D. Whitehead



The legal profession is undergoing a significant transformation as advanced technologies are integrated into the practice of law, particularly generative artificial intelligence (GenAI), an advanced form of AI that uses sophisticated predictive tools to create new content, as opposed to earlier AI, which used algorithms to find already existing content. GenAI's output is then carried over digital transmission media, through system access and security protocols, and into the case management systems courts use to receive, store and govern the public's, parties' and practitioners' access to case-related documents filed and stored on the courts' websites. Each step along the content creation, data transmission and access chain requires enhanced cybersecurity protocols to protect the parties' data, their counsel's work product, the courts' integrity and the public's trust. These issues are no less important in the bankruptcy courts, where I spend the majority of my time and energy.

This evolution necessitates a reevaluation of the duty of technological competence for attorneys, as outlined in Rule 1.1 of the Model Rules of Professional Conduct. This article explores recent developments in the use of GenAI, the associated cybersecurity threats and the implications for legal practice, with a focus on case law from California state and bankruptcy courts, BAPs and district courts in the 9th Circuit and beyond, along with admonitions in the form of standing orders and court rules issued by members of California's benches.

Issue 1.0: The use of generative AI in legal practice mandates technological competence

Rule 1.1 mandates that attorneys perform legal services with competence, including keeping abreast of changes in the law and its practice, as well as the benefits and risks associated with relevant technology. GenAI tools such as ChatGPT and CoCounsel have become prevalent, introducing new capabilities for legal research, document analysis, document drafting and case management geared toward increasing law firm productivity while reducing operational costs. However, the growing use of these tools has raised significant concerns about their accuracy and reliability, including their inherent potential for generating fictitious legal citations known as "AI hallucinations." As a result, more and more courts are issuing standing orders, standard operating procedures and rules governing the use of GenAI in pleadings and other documents that are filed in -- and therefore come under the jurisdiction of -- the affected courts.

Case law analysis

The seminal California State Court case showing the perils of unfettered use of AI

A recent case in California illustrates the potential pitfalls of breaching our duty of technological competence. In Noland v. Land of the Free, L.P., the California Court of Appeal sanctioned plaintiff's counsel $10,000 for filing briefs containing fabricated case quotations generated by AI tools, which went undetected because counsel did not read the cases the AI tools cited. Noting that, as of the opinion's publication date of Sept. 12, 2025, "no California court has addressed this issue," the court continued: "We therefore publish this opinion as a warning. Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations--whether provided by generative AI or any other source--that the attorney responsible for submitting the pleading has not personally read and verified. Because plaintiff's counsel's conduct in this case violated a basic duty counsel owed to his client and the court, we impose a monetary sanction on counsel, direct him to serve a copy of this opinion on his client, and direct the clerk of the court to serve a copy of this opinion on the State Bar." (See Noland v. Land of the Free, L.P., --- Cal.Rptr.3d --- (2025).)

9th Circuit precedents

The 9th Circuit has addressed attorney sanctions and disciplinary matters related to fabricated cases and fictitious citations, although not specifically in the context of AI. In Grant v. City of Long Beach, the court struck an appellant's brief for containing fabricated case law and fictitious citations, emphasizing the importance of accuracy and integrity in legal filings. (See Grant v. City of Long Beach, 96 F.4th 1255, 1256 (9th Cir. 2024), https://www.courtlistener.com/docket/68367399/larry-grant-v-city-of-long-beach/.) Similarly, in Caputo v. Tungsten Heavy Powder, Inc., the court addressed the attorney sanctions and discipline frameworks applicable to misconduct cases. (See Caputo v. Tungsten Heavy Powder, Inc., No. 22-55142 (9th Cir. 2024), https://law.justia.com/cases/federal/appellate-courts/ca9/22-55142/22-55142-2024-03-14.html.)

Federal and bankruptcy cases inside and outside the 9th Circuit

Federal courts inside and outside the 9th Circuit have also dealt with AI-related citation issues. In United States v. Hayes, the Eastern District of California sanctioned an attorney $1,500 for citing a fictitious case generated by AI, highlighting the duty of candor to the court and the irrelevance of the source of the fictitious citation. The court also ordered the clerk of the court to serve a copy of the sanctions order on "the District of Columbia Bar, of which...[defense counsel]... is a member, ... and the State Bar of California," as well as on "all the district judges and magistrate judges in the district." (See United States v. Hayes, 763 F.Supp.3d 1054 (E.D. Cal. 2025).)

In the bankruptcy context, In re Martin involved a $5,500 sanction against debtor's counsel under Federal Rule of Bankruptcy Procedure 9011 for citing AI-generated fictitious cases, underscoring the need for attorneys to verify all legal authorities. The court also ordered the offending attorney and another of the firm's senior lawyers to attend an in-person course on the dangers of AI. (See In re Martin, 670 B.R. 636 (2025).)

These cases highlight the fundamental duty of attorneys to verify the legal authorities they cite as well as the potential costs for failing to meet that duty. The California State Bar's guidance on AI use emphasizes the need for critical review and validation of AI-generated outputs to prevent the dissemination of false information. Attorneys must ensure that AI tools are used as aids, not substitutes, for thorough legal research and analysis.

California professional conduct standards

The California State Bar issued its Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law in November 2023, which specifically addresses the obligations of attorneys using AI tools. The guidance notes that generative AI outputs may include information that is false, inaccurate or biased, and requires lawyers who use these outputs to critically review, validate and correct both the input and the output of generative AI to "detect[] and eliminat[e] ... false AI-generated results." This guidance has been cited by California courts as establishing the professional standard of care for AI use in legal practice.

Arguments and rebuttals supporting harsh sanctions for AI citation misuse

Argument 1: Fundamental attorney duty violation: Attorneys have a fundamental professional obligation to read and verify all legal authorities cited in court filings, regardless of the source or method used to locate those authorities. Further, the duty of candor to the tribunal requires attorneys to ensure the accuracy of all factual and legal representations made to the court.

  a. Anticipated rebuttals: Courts consistently reject arguments that unintentional reliance on AI tools excuses the citation of fictitious cases, emphasizing that intent is irrelevant when false statements are made to the court.

Argument 2: Harm to judicial system: Fictitious citations waste significant judicial resources as courts must spend time attempting to locate non-existent cases and opposing parties must expend resources exposing the deception. Further, such conduct undermines confidence in the legal profession and judicial system, promoting cynicism about the integrity of legal proceedings.

a. Anticipated rebuttals: Some attorneys argue for leniency based on the newness of AI technology, but courts note that AI limitations have been widely publicized since 2023, and attorneys should be aware of these risks.

Argument 3: Professional Conduct rule violations: Citation of fictitious cases violates specific California Rules of Professional Conduct regarding duty of candor (3.3) and frivolous positions (3.1). Further, Rule 1.1 requires attorneys to stay current with technology benefits and risks, making ignorance of AI limitations inexcusable.

a. Anticipated rebuttals: Arguments that sanctions should be educational rather than punitive are generally rejected, as courts emphasize the need for effective deterrence given the widespread nature of the problem.

Arguments and rebuttals against harsh sanctions for AI citation misuse

Argument 1: Intent-based analysis: Some courts consider whether counsel acted in bad faith versus making inadvertent errors when determining appropriate sanctions. Further, first-time offenders who demonstrate genuine remorse and implement corrective measures may warrant more lenient treatment.

a. Anticipated rebuttals: Most courts reject intent-based defenses, holding that the harm to the judicial system occurs regardless of the attorney's subjective intent or awareness.

Argument 2: Proportionality in sanctions: Sanctions should be proportionate to the misconduct and serve as an effective deterrent without being purely punitive. Further, educational remedies such as mandatory AI training may be more appropriate than monetary sanctions for inadvertent violations.

a. Anticipated rebuttals: Courts generally find that monetary sanctions are necessary for effective deterrence, as warnings alone have proven insufficient given the increasing frequency of AI citation problems.

Argument 3: Evolving technology considerations: AI technology is rapidly developing, and attorney awareness of limitations may lag behind technological changes. Further, professional standards should evolve gradually as the legal profession adapts to new technologies.

a. Anticipated rebuttals: Courts emphasize that AI limitations have been widely publicized since 2023 and expect attorneys to exercise reasonable care in adopting new technologies.

Issue 1.1: Judges are also caught in the AI citation mess, highlighting the need for technological competence

The issue statement

As reported by multiple mainstream media outlets, two federal judges -- Judge Julien Neals of the U.S. District Court for the District of New Jersey and Judge Henry Wingate of the U.S. District Court for the Southern District of Mississippi -- admitted that their staff used AI tools such as ChatGPT and Perplexity in preparing judicial rulings, resulting in opinions that referenced non-existent quotations and allegations. Judge Neals's intern used ChatGPT for legal research, which led to a judicial order with fabricated case quotes. Judge Wingate's law clerk used Perplexity, producing draft opinions with factual inaccuracies and fictitious parties or quotes. Both judges clarified that confidential or non-public information was not entered into the AI tools. They acknowledged that these AI-assisted drafts contained serious mistakes and were replaced, but only after lawyers raised concerns and the errors drew the courts' review and public scrutiny. The incidents highlight the risks of AI use -- unverified content, "hallucinated" legal citations and the absence of robust review processes.

These incidents triggered an inquiry by the Senate Judiciary Committee -- led by its chairman, Sen. Charles E. Grassley -- into AI use and oversight in the judiciary, reflecting the broader challenge of technical competence and cybersecurity risks for the judiciary and legal professionals using emerging technologies. (See https://www.judiciary.senate.gov/press/rep/releases/grassley-scrutinizes-federal-judges-apparent-ai-use-in-drafting-error-ridden-rulings.) Upon receiving them, Grassley released Judge Neals', Judge Wingate's and the Administrative Office of the U.S. Courts' responses to his request here: https://www.judiciary.senate.gov/press/rep/releases/grassley-releases-judges-responses-owning-up-to-ai-use-calls-for-continued-oversight-and-regulation.

The missing technological competency links

The incidents involving Judges Neals and Wingate, and Grassley's lengthy oversight communications, make clear that judicial officers are now being scrutinized under the same, if not stricter, technical competence lens as attorneys:

  • Accountability: Judicial rulings with AI-induced errors call for a transparent correction process, record retention and internal review improvements.

  • Review protocols: Post-incident, chambers implemented new procedures: no draft can be docketed without independent review and direct verification, mandatory review by multiple staff and attachment of proof for all cited authorities.

  • Ethics and codes: Wingate explicitly refers to the Code of Conduct for United States Judges, suggesting that technical diligence and competence must be part of what it means to "perform judicial and administrative duties competently and diligently."

  • Policy development: Both judges cited the absence of firm rules guiding either their own or litigants' AI use, but noted that, pending national guidance, they have locally adopted interim practices. At the federal and state level, there is a move toward uniform policy, but at present, guidelines remain fragmented and varied.

  • Institutional gaps: The courts have sanctioned attorneys for similar failures, but only now is the judiciary's internal use of AI facing equivalent scrutiny.

Key takeaways for California attorneys and judicial officers

  • Technical competence for attorneys and judges now clearly includes the duty to fully verify, supervise and audit the use of generative AI and related technologies in all professional work. This extends to ensuring data security and avoiding disclosure of privileged or confidential information.

  • Risks from generative AI -- hallucinatory outputs, unverified citations and misattributed facts -- require explicit, systematic oversight and verification, with failures resulting in professional sanctions and reputational damage.

  • Cybersecurity is directly implicated in AI adoption; responsible practice means banning the entry of non-public data into unvetted tools and developing robust security protocols for AI-related workflows (a minimal example of such a safeguard appears after this list).

  • Judicial officers are now being called to account for their own technical competence, with remedial policy writing, procedural reforms and ethics investigations mirroring the standards increasingly imposed on the bar.
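
To make the third takeaway concrete, here is a deliberately simple illustration, in Python, of the kind of pre-submission safeguard a firm might build: a screen that flags obvious confidentiality red flags in a draft prompt before anything is sent to an external AI tool. The patterns and keywords below are hypothetical placeholders, not a vetted policy; any real implementation would be tailored to the firm's matters and its list of approved tools.

```python
import re

# Hypothetical red-flag patterns; a real firm policy would be far more thorough.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "federal case number": re.compile(r"\b\d{1,2}:\d{2}-[a-z]{2}-\d{3,6}\b", re.I),
}
KEYWORDS = ("privileged", "confidential", "sealed", "work product")

def screen_prompt(text: str) -> list[str]:
    """Return reasons this text should NOT go to an unvetted AI tool."""
    findings = [f"possible {name}: {m.group()!r}"
                for name, pattern in PATTERNS.items()
                if (m := pattern.search(text))]
    findings += [f"sensitive keyword: {kw!r}"
                 for kw in KEYWORDS if kw in text.lower()]
    return findings

if __name__ == "__main__":
    draft = "Summarize the privileged memo for debtor John Doe, SSN 123-45-6789."
    for reason in screen_prompt(draft) or ["no red flags found"]:
        print(reason)
```

A screen like this is a backstop, not a substitute for judgment; the attorney remains responsible for deciding what may leave the firm's systems.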

Collectively, these incidents reflect a transition period in which both attorneys and judges must now adhere to ever-stricter standards of technical competence, and must respond with contemporary policies, layered verification and transparent correction protocols.

Issue 2.0: Cybersecurity threats to the judiciary, the court's response and the practitioner's duties

Summary of the issues

The federal judiciary has faced escalating cyberattacks targeting its case management systems, prompting enhanced cybersecurity measures. These attacks threaten the confidentiality and integrity of sensitive case documents, necessitating rigorous access controls and continuous system upgrades. The judiciary's commitment to cybersecurity is evident in its collaboration with federal agencies and its implementation of advanced security protocols, including the rollout of mandatory multi-factor authentication (MFA) requirements for anyone accessing the electronic case filing systems and court documents of all U.S. federal courts, no later than Dec. 31, 2025.

Background and history

Summary

The U.S. federal courts implemented multi-factor authentication (MFA) requirements for electronic case filing systems in response to a series of significant cybersecurity incidents affecting government systems between 2014 and 2021. The most critical incident occurred on Jan. 6, 2021, when the Judicial Conference of the United States notified federal courts of widespread cybersecurity breaches of government computer systems, prompting immediate security measures including special filing procedures for highly sensitive documents outside the CM/ECF system (In re Administrative Orders of Chief Judge, Not Reported in Fed. Supp. (2021)). This notification was connected to the broader SolarWinds cyberattack discovered in December 2020, which had previously affected federal systems including the U.S. Trustee Program in June 2020 (Securities and Exchange Commission v. SolarWinds Corp., 741 F.Supp.3d 37 (2024)). The Administrative Office of the U.S. Courts began implementing MFA for the Case Management/Electronic Case Filing (CM/ECF) system on May 11, 2025, with mandatory enrollment required by Dec. 31, 2025, to enhance system security and provide an added layer of protection against cyberattacks and unauthorized access.

Timeline of critical cybersecurity incidents

The federal judiciary's decision to implement MFA requirements was driven by a series of escalating cybersecurity threats targeting government systems. The Office of Personnel Management (OPM) experienced major data breaches in 2014 that affected more than 21 million past, present and prospective government workers, exposing significant vulnerabilities in federal information systems (In re U.S. Office of Personnel Management Data Security Breach Litigation, 928 F.3d 42 (2019)).

The SolarWinds cyberattack, discovered in December 2020, represented a particularly sophisticated supply chain compromise that affected numerous federal systems. The attackers exploited SolarWinds' Orion enterprise network management software, compromising the development and build environment to add malware to software updates received by approximately 18,000 customer organizations (Securities and Exchange Commission v. SolarWinds Corp., 741 F.Supp.3d 37 (2024)). The attack specifically impacted federal systems, including a June 2020 cyberattack on the U.S. Trustee Program (Id.).

The most direct catalyst for federal court security measures came on Jan. 6, 2021, when the Judicial Conference of the United States notified federal courts of widespread cybersecurity breaches of government computer systems (In re Administrative Orders of Chief Judge, Not Reported in Fed. Supp. (2021)). This notification prompted immediate action across the federal judiciary, with courts implementing emergency procedures to protect sensitive information.

Federal court response and special filing procedures

In response to the January 2021 cybersecurity notification, federal courts implemented immediate protective measures. The U.S. District Court for the Middle District of Florida established procedures requiring the filing of highly sensitive documents (HSDs) in paper format outside the electronic CM/ECF system (In re Administrative Orders of Chief Judge, Not Reported in Fed. Supp. (2021)). The court determined that good cause existed to require separate filing of a "highly sensitive document" based on the cybersecurity threats, defining HSDs as documents containing sensitive or confidential information that may be of interest to the intelligence service of a hostile foreign government and whose use or disclosure by a hostile foreign government would likely cause significant harm (Id.). These emergency measures specifically addressed law enforcement procedures, with courts declining to permit transmission of search warrants and other law enforcement investigative requests by email due to ongoing cybersecurity threats (Id.). The court noted that while a blanket prohibition was unwarranted, the best practice was to avoid email transmission of sensitive law enforcement documents (Id.).

Implementation of multi-factor authentication requirements

The Administrative Office of the U.S. Courts responded to these cybersecurity challenges by implementing systematic long-term security enhancements, beginning with MFA requirements for the CM/ECF system. The implementation began on May 11, 2025, with a phased approach requiring all CM/ECF users to enroll in MFA by Dec. 31, 2025. The Administrative Office may randomly select users for enrollment beginning in August 2025 if they have not voluntarily enrolled. The MFA system requires users to provide at least two authentication factors from widely accepted categories: knowledge factors (something the user knows, such as a password), possession factors (something the user has, such as a token) and inherence factors (something the user is, such as biometric characteristics) (33 C.F.R. § 101.615). This approach aligns with established cybersecurity principles recognizing that authentication methods using multiple factors provide enhanced security over single-factor methods.
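
To make the three factor categories concrete, here is a minimal Python sketch of a possession-factor check: a time-based one-time password (TOTP, RFC 6238) of the kind produced by common authenticator apps. This is an illustration only; the judiciary has not published CM/ECF's internal MFA mechanics, and the shared secret below is a hypothetical placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password per RFC 6238.

    The server and the user's authenticator app share secret_b32 (the
    possession factor) and independently derive the same short-lived
    code from the current 30-second time window.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # current time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret_b32: str, submitted_code: str) -> bool:
    """Compare the submitted code against the expected one in constant time."""
    return hmac.compare_digest(totp(secret_b32), submitted_code)

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"  # hypothetical base32-encoded shared secret
    print("Current code:", totp(demo_secret))
    print("Verified:", verify_second_factor(demo_secret, totp(demo_secret)))
```

Pairing such a code with a password satisfies the two-factor minimum: the password is something the user knows, and the authenticator secret is something the user has.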

Security goals and objectives

The Administrative Office of the U.S. Courts implemented MFA requirements to enhance CM/ECF system security by adding a layer of protection to accounts, guarding against cyberattacks and reducing the risk of unauthorized access. This implementation reflects broader federal cybersecurity initiatives recognizing that cyberattackers commonly exploit authentication weaknesses, making MFA a critical enforcement focus for protecting sensitive government systems. The CM/ECF system is a critical component of federal court operations, serving as the comprehensive case management system for all bankruptcy, district and appellate courts. As one court explained, CM/ECF allows courts to accept filings and provide access to filed documents online, gives multiple parties access to case files, and offers expanded search and reporting capabilities (Witasick v. Minnesota Mut. Life Ins. Co., 803 F.3d 184 (2015)). The system's central role in federal court operations makes it an attractive target for cybercriminals seeking to infiltrate judicial systems or access sensitive case information.

Scope of implementation across federal courts

The MFA requirements apply to all federal courts using the CM/ECF system, which includes all federal district, bankruptcy and appellate courts, with the exception of the U.S. Supreme Court. This comprehensive implementation ensures uniform security standards across the federal judiciary while addressing the interconnected nature of the court system, where vulnerabilities in one court could potentially compromise others. The phased implementation approach allows for an orderly transition while maintaining court operations. The voluntary enrollment period through August 2025, followed by potential random selection of users who have not yet enrolled, balances security needs with practical considerations for court users who must adapt their workflows to accommodate the new authentication requirements.

Practical implications

The implementation of MFA requirements for federal court electronic filing systems has significant practical implications for attorneys and court users. All CM/ECF users must now enroll in MFA by Dec. 31, 2025, fundamentally changing how legal practitioners access court systems. The phased implementation allows for voluntary enrollment initially, with potential random selection beginning in August 2025 for those who haven't voluntarily enrolled. This represents a major shift in federal court technology practices, requiring attorneys to adapt their workflows and obtain additional authentication methods. The change affects all federal district, bankruptcy and appellate courts that use the CM/ECF system, potentially impacting thousands of legal practitioners nationwide.

Recent developments

The most significant recent development is the Administrative Office of the U.S. Courts' implementation of MFA requirements beginning May 11, 2025. This represents the federal judiciary's response to years of cybersecurity threats, including the 2020 SolarWinds attack and the January 2021 notification of widespread government system breaches. The timeline shows a progression from immediate emergency measures (special filing procedures for highly sensitive documents in 2021) to systematic long-term security enhancements (mandatory MFA by 2025). This aligns with broader federal cybersecurity initiatives and reflects the increasing sophistication of cyber threats targeting government systems.

Related issues

  • Privacy Act compliance and data protection obligations for government agencies handling personal information in federal court systems.

  • Federal Information Security Management Act (FISMA) and Federal Information Security Modernization Act (FISMA 2014) requirements for federal agency cybersecurity programs.

  • E-Government Act provisions governing electronic public access fees and system operation costs for PACER and CM/ECF systems.

  • Administrative Procedure Act compliance in implementing new technology requirements for court access procedures.

Commentary on this question

The history of cyberattacks reveals persistent vulnerabilities in information systems, highlighted by significant breaches such as the 2014-2015 incident involving a major health insurer that compromised the sensitive personal data of approximately 80 million individuals. This breach exposed systemic inadequacies, including the absence of multi-factor authentication (MFA), poor password management and excessive access permissions. Such failures allowed unauthorized remote access and extraction of data, emphasizing the critical need for enhanced cybersecurity measures like MFA to prevent unauthorized system intrusions and protect highly sensitive data from cybercriminals (157 Am. Jur. Proof of Facts 3d 367 (originally published in 2016)). Further illustrating the enduring risk, the 2017 Equifax data breach affected over 148 million Americans, exposing personally identifiable information due to inadequate cybersecurity protocols despite public representations of secure systems. Courts in federal securities litigation have noted failures in basic safeguards such as encryption, patching and authentication controls, underscoring that MFA is a necessary component of a robust cybersecurity framework. These events triggered calls within federal entities and industries for heightened security standards, aiming to mitigate data breaches, safeguard users and maintain trust in electronic systems. The adoption of MFA requirements for federal court electronic filing systems consequently seeks to prevent unauthorized access, enhance data confidentiality and protect the integrity of court records against growing cyber threats (6 Bromberg & Lowenfels on Securities Fraud § 26:6 (2d ed.); eDiscovery for Corporate Counsel § 17:2).

Next steps for practitioners

For attorneys, these developments underscore the importance of safeguarding client information and maintaining secure communication channels. The duty of competence now extends to understanding and mitigating cybersecurity risks, as outlined in Rule 1.1 of the California Rules of Professional Conduct.

Recommendations for attorneys practicing in the ever-evolving digital world

To navigate these challenges, attorneys should consider the following recommendations:

  1. Attorneys should stay informed: Regularly update knowledge on AI tools and cybersecurity threats. Participate in continuing legal education programs focused on technology and ethics. 

  2. Attorneys should implement best practices: Adopt robust verification processes for AI-generated content. Ensure all citations are accurate and supported by verified sources (a minimal automated pre-check appears after this list).

  3. Attorneys should enhance cybersecurity measures: Utilize secure communication platforms and implement strong data protection protocols. Stay informed about the latest cybersecurity threats and defenses.

  4. Attorneys should collaborate with experts: When necessary, consult with technology and cybersecurity experts to ensure compliance with professional standards and protect client interests.

  5. Attorneys should advocate for clear guidelines: Engage with professional organizations and their bench-bar liaisons to develop clear guidelines on the ethical use of AI and cybersecurity practices in legal settings.
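
For the verification step in recommendation 2, some of the drudgery can be automated before the mandatory human read-through. The Python sketch below submits a draft's text to CourtListener's public citation-lookup service and flags citations the service cannot match to a known case. The endpoint path and response fields reflect my understanding of the documented API and should be confirmed against current documentation; and as Noland makes plain, a clean lookup is never a substitute for personally reading every cited case.

```python
import requests

# CourtListener's citation-lookup endpoint as I understand its documented v3
# API; confirm the path and response fields against current documentation.
LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def flag_unmatched_citations(brief_text: str) -> list[str]:
    """Return citations in brief_text the lookup service cannot match.

    An unmatched citation is not proof of fabrication, and a matched one
    is not proof of accuracy: counsel must still read every cited case.
    """
    resp = requests.post(LOOKUP_URL, data={"text": brief_text}, timeout=30)
    resp.raise_for_status()
    # Each returned item is assumed to carry the parsed citation and an
    # HTTP-style status code (200 = matched to a known case).
    return [item["citation"] for item in resp.json() if item.get("status") != 200]

if __name__ == "__main__":
    draft = ("Sanctions were affirmed in Grant v. City of Long Beach, "
             "96 F.4th 1255 (9th Cir. 2024).")
    for cite in flag_unmatched_citations(draft):
        print("READ AND VERIFY BY HAND:", cite)
```

Even a perfect pre-check only tells you that a citation exists somewhere; whether the case stands for the proposition asserted is a question only a human reader can answer.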

Conclusion

The duty of technological competence is evolving in response to advancements in AI and increasing cybersecurity threats as the practice of law comes to depend more and more on digital tools and access systems. These ever-changing conditions impose ongoing obligations that require attorneys, judicial officers and court staff to adapt to new tools and threats. By staying informed, implementing best practices and exercising diligence in our use of AI tools to avoid ethical pitfalls, we can preserve the integrity of the legal profession and effectively integrate generative AI into our practices and judges' chambers while safeguarding against cybersecurity risks. As the legal profession continues to evolve, maintaining competence in these areas will be essential to upholding the integrity and effectiveness of legal services and our courts. The cases discussed highlight the judiciary's and Congress's growing intolerance for AI-related citation misconduct and the importance of maintaining high standards of professional conduct.
