Technology, Ethics/Professional Responsibility
Dec. 3, 2025
Seeing is believing, unless you're looking at a deepfake
The meteoric rise of AI-generated deepfake content creates risks for attorneys.
Anita Taff-Rice
Founder
iCommLaw
Technology and telecommunications
1547 Palos Verdes Mall # 298
Walnut Creek, CA 94597-2228
Phone: (415) 699-7885
Email: anita@icommlaw.com
iCommLaw® is a Bay Area firm specializing in technology, telecommunications and cybersecurity matters.
The old expression "I can't believe my eyes" should be a warning to attorneys facing the maddening pace of artificial intelligence development, which is fueling an explosion of deepfake videos, images and audio.
Deepfakes are images, videos or audio recordings manipulated or generated with AI to create the appearance of individuals doing or saying something they never did. The results look remarkably real, but they are completely fake. Deepfakes often use "face swaps," in which the head of one person is grafted onto another real or AI-generated body, or clone the real voice of another person.
Computer manipulation of video and audio is nothing new, but the ability of AI to create realistic fakes is. Earlier fakes betrayed themselves through alignment problems in images or a jittery quality in audio, but today's AI has become so sophisticated that humans mistake deepfake videos for genuine ones more than half of the time, according to a study published in the Proceedings of the National Academy of Sciences.
Even when warned, humans have difficulty identifying deepfakes. In a 2023 experiment published in Royal Society Open Science, participants were told that at least one of the five videos they would watch was a deepfake. They correctly identified the deepfake only 21.6% of the time.
The number of deepfakes detected worldwide increased tenfold from 2022 to 2023, with regional increases of 1,740% in North America, 1,530% in the Asia-Pacific region, 780% in Europe, 450% in Africa and the Middle East, and 410% in Latin America, according to Sum and Substance (Sumsub), a UK-based identity verification company.
The implications for attorneys are enormous, whether the threat is misappropriation of client data, impersonation of clients in remote communications, financial fraud in business transactions or the introduction of deepfakes in court. Law firms are particularly attractive targets for deepfake scams because attorneys rely heavily on electronic communications to accommodate clients in different states, cities or countries, for whom in-person meetings are often impractical or expensive. Remote communications, including video conferencing, increase the risk of fraud from deepfakes.
Law firms must worry not only about the financial exposure that comes from falling for deepfakes, but also about their ethical duty to protect confidential client information. And state bars are increasingly requiring attorneys to be technologically savvy.
There have been reported instances of deepfake videos or audio being used to impersonate high-profile clients and law firm partners as a way to gain access to sensitive information, or to authorize wire transfers and other financial transactions. Cybercriminals might also pose as third-party vendors, such as document handling services or financial institutions, asking for login credentials, financial details or case-related information.
The brazenness of these schemes is astonishing. Last year, CNN reported on an elaborate scam in which a Hong Kong-based finance employee of a multinational firm was duped into paying $25 million to cybercriminals after a video call in which all the other participants, who appeared to be his colleagues, were actually deepfake creations.
Other examples have been reported in which spoofed emails and fake links to outside vendors were used to gain access to account login information. In 2023, multinational law firm Orrick, Herrington & Sutcliffe was the target of a cyberattack in which highly sensitive data of 637,620 clients and their customers was stolen from files stored by the firm. The exact method used by the cybercriminals was not disclosed, but access was reportedly gained through a file-sharing service, allowing the attackers to steal data over a two-week period.

The stolen data covered a wide range of personal information, including names, addresses, dates of birth, Social Security numbers, financial account details, driver's license and passport numbers, credit card numbers, and healthcare information such as diagnoses, treatments and insurance claims. A class action lawsuit was filed against Orrick in the U.S. District Court for the Northern District of California, and the firm paid $8 million to settle. (In re: Orrick, Herrington & Sutcliffe Data Breach Litigation, No. 23-cv-4089 (N.D. Cal.).)
Similarly, cybercriminals targeted Gunster, a Florida-based law firm, in 2022 and stole a variety of sensitive personal information from the firm's clients, including names, addresses, dates of birth, Social Security numbers, and medical and health insurance information. Gunster eventually paid $8.5 million to settle a class action lawsuit. (Mary Jane Whalen and Christine Rona v. Gunster, Yoakley & Stewart, P.A., No. 24-cv-80612-AMC (S.D. Fla.).)
The American Bar Association has reported that up to 42% of law firms with 100 or more employees have experienced a data breach. Despite the increasing use of AI techniques such as deepfakes, spoofed emails and phony links to carry out cyberattacks against law firms, many firms report that their management is unaware of what deepfakes are and has no formal plan in place to prevent cybercrime or to respond if it happens.
