This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission. Please click "Reprint" to order presentation-ready copies to distribute to clients or use in commercial marketing materials, or for permission to post on a website.
News

Dec. 29, 2025

How legal professionals are using AI with Danny Abir

Danny Abir is the managing partner of ACTS Law, one of the largest plaintiffs-only litigation practices in California. Abir serves on the boards of multiple professional organizations, including the board of governors of the Consumer Attorneys of California, and is the 2024 treasurer and 2028 president of the Consumer Attorneys Association of Los Angeles.

Danny Abir, managing partner, ACTS Law

What specific task or workflow in your practice has AI changed most dramatically, and what does that look like day-to-day?

Three areas have seen the most dramatic change: demand letters, motion drafting, and finding information across our cases.

For demand letters, we used to have a single demand writer handling work from five or six case managers. Documents would pile up (medical records, billing, police reports) and the backlog was constant. Now our case managers draft their own demands using AI tools like ChatGPT, Claude, and EvenUp. They know their cases best, so they catch inaccuracies immediately instead of waiting for someone else to get up to speed. Complex demands still go to our experienced demand writer, but routine ones move much faster.

Motion drafting used to mean hunting through old files for something similar we could adapt, then spending hours or days on legal research and writing. Now we upload the defense's motion along with key facts and our draft, and AI helps identify every issue we need to address. It surfaces counterarguments we might not have considered. We also feed it past rulings from the specific judge so we can tailor our arguments to what that judge cares about.

The biggest transformation has been in mass tort cases. When you have hundreds of plaintiffs and need to complete plaintiff fact sheets, you're answering the same questions case by case, digging through intake notes and PDFs for each one. That used to take months. We moved our case management system to Azure and connected AI models to our database and documents. Now we give AI a set of questions, and it searches across all relevant cases and returns a spreadsheet with the answers. What took months now takes a fraction of that time.

When you're using AI tools for legal research or drafting, how do you verify the output? What's your process for catching hallucinations or errors?

Our internal policy requires a human in the loop. Nothing leaves the office without thorough review. Who reviews depends on what it is. Motions, demand letters and anything involving legal analysis or client advice always goes through an attorney. But routine correspondence, like an email explaining to a client what a letter from the insurance company means, can be reviewed and sent by the paralegal who drafted it.

We know AI can fabricate case citations, so those get checked extensively. Beyond that, we verify that it followed instructions. If you asked for first person, did it actually write in first person? We also compare facts against source documents. AI can sometimes pull facts from a different conversation or mix up plaintiffs in related cases. We watch for that: making sure Plaintiff A's facts stay with Plaintiff A.

Some of our more advanced users also cross-check between tools. If they drafted something in ChatGPT, they'll run it through Claude or Lexis+ AI to validate it, and vice versa. But regardless of which tools are used, the person submitting the work is responsible for verifying accuracy before anything goes out.

Have you encountered a situation where AI led you astray or gave you problematic advice? What happened and what did you learn?

All the time. AI makes assumptions and inferences, and once it starts down a path based on those assumptions, it keeps going. You have to watch it and stop it when it veers off course.

A good example: we had AI writing victim impact statements with instructions that each one should be three pages. But some plaintiffs simply didn't have enough information to fill three pages. Instead of flagging the problem, the AI started making things up or pulling details from other victims to hit the page count.

We backtracked and changed the approach. First, we had it sort the cases by how much information was available: which ones could support three pages, which were better suited for two, which only had enough for one. Then we reviewed that sorting, ran test examples of each length, and iterated until it got it right.

The lesson is that when AI goes astray, it's usually because you led it there. Conflicting instructions, unclear context, or vague asks will send it down rabbit holes. The clearer you are about the role, the guardrails, and what output you actually need, the better the results. You're laying the foundation. If the foundation is off, everything built on top will be off too.

How are you thinking about confidentiality and data security when using AI tools? What guardrails have you put in place?

This is still a gray area. There aren't enough laws or rules governing confidentiality and data security with AI tools yet. So we've established some policies internally.

First, we remind our teams that AI chats can be subpoenaed and may become discoverable in litigation. If you're working with information you don't want disclosed, you have three options: redact it, create a hypothetical using the information without uploading the actual data, or do the task manually without AI. Sensitive information like Social Security numbers, birth dates, and medical record numbers never gets uploaded. When we need to use portions of medical records or evidence, we redact everything except what's necessary for the specific task.

Second, we don't allow free AI tools. You must use company-approved tools where we have corporate accounts: Claude, ChatGPT, Perplexity, Lexis+ AI, and Westlaw. If someone wants to use a different tool, they bring it to us for review before using it.

Third, we always opt out of allowing our data to be used for AI model training. That setting is turned off across every tool we use. With free subscriptions, you typically can't opt out, and your data can be used to train the model. That's why we require paid business subscriptions where training is disabled by default for all users, not just some.

What kind of legal work do you think AI will never be able to do well, and why?

The reality is this is a moving target. Every week AI advances and can handle more. So I'll answer for today.

Today, AI cannot be compassionate. It doesn't understand humanity or empathy. It can sound like it has those characteristics, but it doesn't. It doesn't know how to address someone who's crying or angry. And because AI is designed to please you, it can fan the flames. If you express strong feelings, it may reinforce them rather than provide balanced perspective.

On the technical side, AI still struggles with document formatting. It can't properly structure pleading paper or consistently format documents and emails the way we need them. It's also not fully autonomous. Hallucinations are still common enough that you need a human in the loop for everything.

Legal research is another limitation. AI can help, but it's not as reliable as a human researcher because of constraints on context and memory. That's actually our biggest challenge right now. Until context and memory are solved, AI can handle one or two discrete tasks well, but long tasks or multi-step workflows aren't there yet.

How has AI affected your professional relationships with clients, colleagues and litigants in terms of services, communication, or expectations?

AI has helped us communicate more clearly with clients, but also when presenting our clients' cases to others. When explaining complex legal issues to a client whose understanding doesn't match reality, AI helps us break it down in a way that's easier to digest. And when we need to explain a client's issues to defense counsel, an insurance adjuster or a jury, AI helps us present that more effectively. Clients now expect faster responses and clearer communication, and AI has helped us meet those expectations.

That said, we've learned clients don't like talking to AI directly. AI receptionists can get stuck in loops and become annoying. Where AI helps is behind the scenes: explaining things better, answering questions in different ways, improving the quality of our human communication.

Among colleagues and across the industry, AI has leveled the playing field. A big powerhouse firm isn't the only threat in the arena anymore. A small five-person firm with a deep understanding of how to leverage AI can produce nearly the same results as a major defense firm with endless insurance money to spend on litigation. That changes the competitive dynamics significantly.

If someone just entering the legal profession asked you how to think about AI in their career development, what would you tell them?

The most frequent question I've been asked is whether AI will replace lawyers. My answer is simple: AI will not replace lawyers, but lawyers who know how to use AI properly will replace lawyers who don't.

Use AI every day. Learn what it does well and what it doesn't. Learn to leverage its strengths. This gives you an advantage as you move forward, not just because you're more productive, but because you're keeping up, learning how to interact with these tools as they evolve.

Everyone is adopting AI. Your case management system, the way you do discovery, the court systems: all these tools and businesses are integrating AI into their applications and workflows. AI is your competitive advantage. If you don't adopt it, you're going to fall behind and become obsolete.

AI is here to augment your abilities. It's your superpower. Leverage it, adopt it, make it part of every workflow you have.

#389183

For reprint rights or to order a copy of your photo:

Email Jeremy_Ellis@dailyjournal.com for prices.
Direct dial: 213-229-5424

Send a letter to the editor:

Email: letters@dailyjournal.com