This is the property of the Daily Journal Corporation and fully protected by copyright.


Apr. 11, 2023

Responsible AI as part of a company's ESG framework

How "Responsible AI" practices in combination with ESG frameworks can be applied both to mitigate risk and support a corporation's ESG goals.

Susan H. Mac Cormac

Partner, Morrison & Foerster LLP


Suz is a corporate partner at Morrison & Foerster who has been focused on integrating impact (with particular emphasis on mitigation of climate change) into mainstream companies and capital markets since 2000. She is a drafter of the California Social Purpose Corporation, founded MoFo's impact practice over 10 years ago, teaches a course on Social Enterprise at Berkeley Law School, and represents a range of clients from growth equity and sovereign funds to dedicated impact funds to family offices, together with social enterprises and companies focused on sustainability and ESG.

Stephanie Sharron

Partner, Morrison & Foerster LLP

755 Page Mill Rd
Palo Alto, CA 94304

Phone: (650) 813-4018

Fax: (650) 251-3799

Cornell University Law School

Stephanie is a member of the firm's Technology Transactions Group and counsels companies in connection with technology and intellectual property transactions and related privacy, data security, and internet safety issues.

Oluwabamise A. Onabanjo

ESG Analyst, Morrison & Foerster LLP

Artificial intelligence (AI) has attracted significant attention across all sectors of society by making predictions, recommendations, or decisions based on human-defined objectives. Most recently, generative AI technologies that produce content in response to user queries have quickly gained vast media attention due to their ease of use and accessibility. AI-based solutions are already widely deployed to support a broad range of industries and business operations. From autonomous vehicle technology, candidate selection in recruiting applications, bail-setting tools for defendants in our judiciary, and facial recognition (e.g., in retail, by TSA, and by law enforcement), to applications to diagnose disease, detect fraud in financial transactions, and run help desks, AI plays a key role in our lives. That role is certain to grow as these technologies mature. Someday, artificial general intelligence may emerge that enables something much closer to human sentience.

While AI solutions have the potential to contribute significant value, AI also can be used to create and spread misinformation or disinformation, perpetrate scams and other crimes, perpetuate unfair biases, and discriminate against protected classes of individuals. Additionally, this technology can interrupt business operations, harm business reputation, drain corporate coffers in the event of data breaches, violate privacy, threaten data security and physical safety, and even start or accelerate conflicts between nation states. AI also often has significant power consumption requirements that can cause harm to the environment. Accordingly, if not managed responsibly, AI can result in significant risks. This article explains how Responsible AI practices in combination with Environmental, Social, and Governance (ESG) frameworks can be applied both to mitigate risk and support a corporation's ESG goals. By doing so, organizations will be best positioned to maximize the benefits associated with their AI-related activities.

Defining ESG and Responsible AI

Corporations implement ESG frameworks to help guide both the corporation and its stakeholders about the impact of certain risks on corporate performance (sometimes referred to as "outside in" effects) and the ability of the corporation's governance mechanisms to manage such risks. These risks extend not only to environmental and sustainability concerns but also to issues affecting society more generally, such as privacy, security, fairness (including DEI), and human rights, not to mention risks of and to financial systems (e.g., anti-money laundering). ESG can also address the corporation's own impact on the environment and sustainability, and on individuals and society more generally, as is the case in the EU and UK. In the United States, while some propose taking this "double materiality" approach and certain corporations have elected to do so voluntarily, thus far the SEC has not extended its mandatory reporting requirements to cover it.

"Responsible AI" also addresses risk management and performance, but rather than guiding investors and other stakeholders on corporate performance, the focus is on helping the organization identify, assess, and mitigate the particular risks associated with use of AI. Thus, Responsible AI programs complement ESG frameworks. The National Institute of Standards and Technology (NIST) recently published a risk management framework that aims to help organizations apply Responsible AI principles. The framework focuses on the importance of the following critical criteria for AI solutions:

Designing these features into AI solutions mitigates unintended consequences as well as the accompanying regulatory and reputational risks.

Relationship between ESG and Responsible AI

AI directly impacts the risks addressed by ESG. AI may materially impact the environment due to its potentially significant energy consumption requirements, while also having beneficial environmental effects through use in environmental research or as a tool to assess environmental impact and reduce "greenwashing" practices. As with ESG frameworks, Responsible AI frameworks aim to mitigate adverse sustainability and environmental impact through, for example, energy-conscious design. By addressing AI's sustainability and environmental issues in a cohesive and consistent manner under both ESG and Responsible AI rubrics, organizations can align AI use with ESG goals.

In addition to climate risks, AI implicates other risks to individuals and society that ESG also aims to address. Responsible AI programs seek to identify all such potential risks, including fairness, bias, security, safety, and privacy concerns, and address them proactively. Furthermore, governance is at the core of both ESG and Responsible AI programs. ESG frameworks provide a structure for reporting on whether corporations have implemented strong governance practices to manage use of AI responsibly. Responsible AI programs in turn address governance by building processes that seek to hold appropriate individuals throughout organizations accountable for risk management. By aligning ESG and Responsible AI governance, each can be strengthened. Accordingly, a Responsible AI program is a crucial companion to a company's ESG framework.

Responsible AI programs also complement ESG materiality assessments. Materiality assessments within ESG programs typically survey a company's operations, stakeholders, and ecosystem to better understand how to prioritize ESG efforts based on company needs and stakeholder priorities. When measured responsibly, ESG performance can be used to supplement financial performance factors, providing investors and other stakeholders with a more comprehensive view of the corporation's performance. ESG programs in combination with Responsible AI programs can help organizations understand their AI use cases, how external factors might impact the use case (as well as the impact of the AI use case on external factors if double materiality is part of the program), risk hotspots and how well the corporation is managing these risks.


AI holds tremendous promise for corporate growth and opportunity. However, use of AI can also adversely impact trust, reputation, and overall corporate operations and performance. By implementing Responsible AI practices, corporations can bolster their performance as measured against ESG criteria. However, if corporations design, develop, and deploy AI solutions without adequate governance and risk mitigation through Responsible AI practices, ESG efforts may very well be undermined. Those corporations that establish corporate policies and a holistic approach to integrating Responsible AI and ESG frameworks will be best positioned to take advantage of the exciting potential of AI.

