This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

Dec. 23, 2025

How legal professionals are using AI with Professor Robin Feldman

Professor Robin Feldman is the Arthur J. Goldberg Distinguished Professor of Law at UC Law and the director of the AI Law & Innovation Institute. For nearly a decade, she has provided technical advice on AI policy to the U.S. government, including various congressional committees, the Army Cyber Institute, the Government Accountability Office (GAO) and other federal and state agencies.


As someone who has been tracking AI for 20 years, have you been surprised at the recent speed with which it has affected the legal profession? Has anything turned out differently than you expected?

Frankly, I expected AI policy and regulation issues to take off a decade ago. Around that time, I was asked to help the GAO draft a report to Congress on the future of AI and its impact on society. I assumed that was the kickoff of government interest, but then there was nothing...until ChatGPT burst onto the scene.

From a scientific perspective, however, everything about AI and LLMs (large language models) is astounding. As a genius friend commented, "We always thought that when we got to this point with AI, we would understand more about how the mind works." And yet here we are, trying to understand both the mind and AI, and figuring out how they are similar and how they differ.

I also would not have predicted that the kickoff for AI would be in the consumer sector, where the profits are questionable, rather than the business sector.

If someone just entering the legal profession asked you how to think about AI in their career development, what would you tell them?

AI is an extraordinarily powerful legal tool. But only if used properly.

For example, AI is a spectacular tool for sorting documents, organizing materials and analyzing data. And it can serve as a good starting place for research, but only a starting place. If you stop there, you will inevitably fail.

Above all else, AI provides an enormous advantage to the young. Those who grew up with phones as extensions of their fingers are likely to embrace this revolution far more effectively than those of us who are BBT (born before technology).

What kind of legal work do you think AI will never be able to do well, and why?

AI is not designed to do high-level legal analysis. The law evolves when legal minds seek out the interstices in the doctrine, the spaces where something hasn't been decided, and there is room to make an argument for the client. AI, in its current form, can never do that.

Why? Because AI is designed to give the statistically most likely answer. It is not designed to creatively push the boundaries of legal thought and find innovative pathways. For that, clients will always need a good lawyer.

How has AI affected your work with students? Is it something they are concerned about?

My students began using AI within weeks of ChatGPT's release. I worry about student use in two ways. First, if students rely on AI to do the work for them, they may never learn to do good legal analysis. Learning to think like a lawyer requires grappling with the cases. A sanitized summary won't do it.

Second, many students continue to assume that AI gives the right answer. Much of the time, it doesn't. Cases can be fictional, quotes are out of context, or logic is partly right and partly wrong. AI can be a good place to start, but not to end.

Imagine a partner taking a brief written by a first-year associate and submitting it to the court without anyone reading the brief or checking sources. At any decent law firm, the partner should be fired. The same is true for relying on AI.

How would you advise attorneys to avoid problems when using AI, such as hallucinations and errors?

Treat any AI response with deep skepticism. Consider an AI answer as you would consider a response from a first-year associate on day one of the job. As with anything in law, it must be checked and double-checked.

How extensively do you use AI in your own work? What guardrails have you put in place?

To write my recent book on AI, I spent a month using AI programs to learn the math and science underlying large language models. AI is an excellent teacher, particularly for subjects that are clear-cut and noncontroversial.

I have also used AI to help create charts and images, or to search for academics and commentators writing on a topic. The best guardrail is simply to say, "turn your brain on." AI is a great timesaver in many ways, but you cannot rely on AI in law. We are not there yet.

What is an issue in AI regulation that most people are not thinking about enough?

We are in the midst of a Cold War regarding the race for AI technology, and the winner will dominate the next generation, both economically and militarily. We cannot afford to lose. With any proposed legislation, we should ask whether it benefits our adversaries. If it does, we should consider rethinking the regulation.

