
Technology,
Ethics/Professional Responsibility

Jan. 31, 2023

The chatbot lawyer

While the A.I. response is interesting, even thought-provoking, it is not on the mark. It mixed general ethics issues with those that are specific to lawyers, such as rules of professional conduct.

Teresa J. Schmid

Director, American Bar Association Center for Professional Responsibility

Email: teresa.schmid@schmidwatsonlaw.com

Teresa is a professional responsibility attorney and consultant in management and public policy. A former executive director for the State Bar of Arizona and the Oregon State Bar, and former assistant chief trial counsel for the State Bar of California, she is the secretary of the Professional Responsibility and Ethics Committee of the Los Angeles County Bar Association. The views expressed are her own.

In its most basic form, a chatbot is a language model that enables a user to post a query and receive aggregated information on the subject from a range of sources. For example, the Los Angeles County traffic court has a Traffic Chatbot to help self-represented defendants resolve their cases.
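For readers who want to see the mechanics, that exchange reduces to a short request-and-response program. The Python sketch below assumes a hypothetical chatbot service reachable over HTTP; the endpoint URL, request schema, and response field are illustrative placeholders, not any real provider's API.

import requests

def ask_chatbot(question: str) -> str:
    """Post a query to a chatbot service and return its text reply."""
    response = requests.post(
        "https://example.com/api/chat",  # hypothetical endpoint, for illustration only
        json={"query": question},        # assumed request schema
        timeout=30,
    )
    response.raise_for_status()          # surface HTTP errors rather than bad data
    return response.json()["answer"]     # assumed response field

if __name__ == "__main__":
    print(ask_chatbot("How could a lawyer use a chatbot?"))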

Query: Can a robot really chat with a human being? Recent enhancements to chatbots are in the news. In its Jan. 22 edition, the New York Times published an article that tracked the development of artificial intelligence from Alan Turing's 1950 "imitation game" to an enhanced form of open-source A.I. To test the new chatbot's capabilities, the Times asked it to write a sonnet, which it did. The newspaper printed the A.I. sonnet, which is technically correct but arguably too prosaic to qualify as art. The exercise is sufficient to raise questions: Can A.I. think, create, or, by extension, advise a course of action?

Chatbots are well grounded in federal law. Congress officially encouraged the use of interactive media in the Communications Decency Act of 1996, codified at 47 U.S.C. section 230 and entitled "Protection for private blocking and screening of offensive material." Paragraph 230(b)(1) recites: "It is the policy of the United States ... to promote the continued development of the Internet and other interactive computer services and other interactive media." Paragraph 230(f)(2) includes "access software provider" in its definition of "interactive computer service." Paragraph 230(f)(4)(C) then defines "access software provider" to include a provider of "software or enabling tools" that "transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content." Section 230 not only encourages providers and users of access software; it also protects them from civil liability in paragraph 230(c)(2). Caselaw has honored that protection as well. In G.G. (a minor) v. Salesforce.com, Inc., 2022 U.S. Dist. LEXIS 87616, 2022 WL 1541408 (N.D. Ill. May 16, 2022), the court cited section 230 in dismissing a complaint against the defendant, which the court identified as a protected interactive computer service. With such encouragement and protection in place, can a lawyer ethically use information collected from queries to a chatbot?

A chatbot user recently posed a query: how could a lawyer use a chatbot? The response listed a number of tasks that could be performed by a language model: legal research; document generation; summarization; predictive analytics; due diligence; and virtual assistance, including scheduling appointments and providing general information to clients. Any lawyer could immediately flag the ethical hazards of incorporating A.I.-generated information into legal work product without further proof of its accuracy. Would the chatbot recognize ethical issues as well? A second query asked what ethical issues lawyers would face in using A.I. The response identified several, which are summarized and partially quoted below, with capitals as in the original: Bias, since "AI systems can perpetuate and even amplify existing biases"; Transparency, since AI can be "difficult to understand and explain"; Privacy, "particularly in the context of sensitive information like healthcare data or financial records"; Security, since "A.I. systems can be vulnerable to cyber-attacks"; Job displacement, since A.I. can "automate many tasks previously performed by humans"; Autonomy, since A.I. "can make decisions without human intervention"; Human Supervision, since A.I. systems require "human supervision and intervention to ensure safety and ethical outcomes"; and Fairness, since "A.I. systems are used in decision making processes that have significant impact on people's lives ... Ensuring fairness in these decisions is a major ethical challenge." While the A.I. response is interesting, even thought-provoking, it is not on the mark. It mixed general ethics issues with those that are specific to lawyers, such as rules of professional conduct.

Like any other research tool, A.I. yields better results as the user poses better questions. The chatbot disclosed its core weakness when asked the ultimate question: what personal moral values does it apply to its responses? Its answer, quoted in pertinent part, was telling: "As a language model, I do not have personal moral values or emotions. I am trained on a large dataset of text, and my responses are based on patterns in that data. However, the training data that I was trained on is generated by human (sic), so it may contain moral values and biases of the society where the data is generated."

Providers of interactive computer services are generally candid about their services' limitations, posting disclosures and warnings on their websites. Proficient users of the Internet are circumspect about the quality of information it provides; users of the new generation of chatbots may prove to be equally cautious. After all, they are chatting with a robot.

This column contains purely informational, educational, or technical material. The views expressed herein represent the opinions of the author and have not been approved by the ABA House of Delegates or the Board of Governors and, accordingly, should not be construed as representing the position of the association or any of its entities.
