This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

Technology

Jul. 24, 2024

Managing Gen AI risk: A brief primer


Organizations can streamline Generative AI (Gen AI) risk management by identifying intended use cases, mapping and prioritizing risks, implementing mitigations, monitoring for changes, and measuring and adjusting mitigation efficacy over time. By Stephanie Sharron

Stephanie Sharron

Partner, Morrison & Foerster

755 Page Mill Rd
Palo Alto, CA 94304

Phone: (650) 813-4018

Fax: (650) 251-3799

Cornell University Law School

Stephanie, a member of the firm's Technology Transactions Group, counsels companies on technology and intellectual property transactions and related privacy, data security, and internet safety issues.

This article provides a brief primer on how organizations can streamline the management of Generative AI (Gen AI) risk. This process is loosely based on NIST's AI Risk Management Framework:

• Identify the intended use cases for the proposed Gen AI solution.
• Map and prioritize the categories of Gen AI risk for use cases across the lifecycle of the Gen AI solution.
• Identify and implement risk mitigations based on the mapped Gen AI risks.
• Monitor to ensure that use cases haven't changed and risk mitigations have been implemented.
• Measure the efficacy of the risk mitigations over time and adjust as appropriate.
• Periodically reassess the impact of any changes to use cases, the Gen AI solution, and corresponding Gen AI risks.
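As a loose illustration only (all names here are hypothetical, not part of the NIST framework), the mapping, mitigation, and monitoring steps above could be captured in a simple risk register:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Gen AI risk register mirroring the steps above.
@dataclass
class GenAIRiskAssessment:
    use_case: str
    risks: dict = field(default_factory=dict)        # risk category -> priority
    mitigations: dict = field(default_factory=dict)  # risk category -> mitigation

    def map_risk(self, category: str, priority: str) -> None:
        """Map and prioritize a risk category for this use case."""
        self.risks[category] = priority

    def add_mitigation(self, category: str, mitigation: str) -> None:
        """Record a mitigation for a previously mapped risk."""
        if category not in self.risks:
            raise ValueError(f"{category!r} has not been mapped to this use case")
        self.mitigations[category] = mitigation

    def unmitigated(self) -> list:
        """Monitoring check: mapped risks with no recorded mitigation."""
        return [c for c in self.risks if c not in self.mitigations]

# Example: a customer-support chatbot use case.
assessment = GenAIRiskAssessment("customer support chatbot")
assessment.map_risk("privacy", "high")
assessment.map_risk("quality of results", "medium")
assessment.add_mitigation("privacy", "strip personal data before prompting")
print(assessment.unmitigated())  # -> ['quality of results']
```

In practice such a register would live in an organization's governance tooling; the point is only that each mapped risk should be traceable to a mitigation, and gaps should surface during monitoring.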

USE CASES

Gen AI has captured the imagination of consumers and enterprises alike because of its broad range of applications. Not all uses, however, carry the same type or degree of risk:
• Certain uses might involve the processing of personal data and therefore raise privacy concerns, while others may not.
• Use to automate currently manual tasks may have inherent safety risks but also might have safety benefits; similarly, automation might improve wages for some workers but could result in the loss of valued jobs for others.
• Use in recruiting or lending might not present a risk to physical safety but could harm individuals by generating biased results or unfair outcomes.
• And when applied to create new works, Gen AI users might care about a possible loss of copyright in some works generated by Gen AI but not others.
Once proposed uses are identified, organizations can map the various risk categories to the use, assess the risk's impact, and identify available risk mitigations. With this information in hand, organizations will be better able to make informed decisions about when to permit the use of Gen AI and appropriate guardrails to deploy.

OVERVIEW OF KEY GEN AI RISKS

• Quality of results (e.g., accuracy, reliability, and validity)
Gen AI solutions can generate inaccurate, incomplete, inconsistent, or unreliable results. Whether these risks can be mitigated through further fine-tuning or configuration requires assessment.
• Transparency and explainability
The less that is known about how a Gen AI solution was developed (including the data used to train and fine-tune it) and why it produces the results it does, the harder it is for those relying on those results to verify that the solution is valid and reliable and that its output is accurate.
• Legal and regulatory risk
If the intended use falls within a regulated area, the use of a Gen AI solution for this purpose will likely be subject to regulation as well. AI-specific regulations are emerging in the U.S. and abroad.
• Intellectual property risk
Content providers have brought numerous claims against Gen AI vendors for violation of their rights in their content. Separately, when Gen AI is used to produce content, the results may not be protectable under certain intellectual property rights regimes. To protect confidential information, Gen AI tools that access data across an organization must be configured carefully to ensure proper internal policies and third-party commitments are observed. Gen AI vendors also should be contractually required to protect the confidentiality of information submitted by users.
• Privacy risk
If personal data is involved in the use of a Gen AI solution, there could be data privacy risks. The risk's significance depends upon the sensitivity of the personal data, how the Gen AI solution will be used, where the affected individuals reside, and what notices, consents, policies, and laws apply to that personal data.
• Risk of bias and unfairness
If the data set used to train or fine-tune the Gen AI solution is not representative or reflects existing biases, the output of the Gen AI solution may reflect the bias or unfairness of the underlying training data.
• Security risk
Malicious actors could use Gen AI to perpetrate phishing attacks, impersonate others ("deep fakes"), commit fraud, or spread false and misleading information. If malicious actors succeed in introducing false or harmful data into the solution's training data, this also can "poison" the model, leading to the generation of inaccurate or otherwise harmful output.
• Safety and product liability risk
Certain intended use cases could present risks to the physical or personal safety of individuals, groups of individuals, or larger segments of society.
• Cost and sustainability risks
Implementing Gen AI solutions can be cost- and resource-intensive, not just because of the cost of procurement, but also because of the costs associated with further fine-tuning the Gen AI solution using the organization's own data and customizing it to the organization's needs. The energy requirements of Gen AI solutions also may have cost and environmental impact, especially for those organizations developing and deploying Gen AI solutions.
• Workforce risks
Gen AI may negatively impact the workforce by replacing jobs. It also may benefit the workforce through more effective or efficient performance of tasks using Gen AI, by making certain jobs safer through the use of Gen AI, or by improving wages.

GOVERNANCE: PERFORMING THE AI RISK ASSESSMENT

Teams charged with evaluating both the likelihood of a risk and the risk's level of impact should be collectively experienced in assessing the various categories of risk listed above. Once informed about the proposed use cases, evaluation teams should rate the significance of each risk category in terms of legal and regulatory risk, risk to third parties, and risk to the organization's business, technology, brand, and reputation. These teams then can streamline the approval process for lower-risk use cases and highlight key concerns to prioritize for those that are higher-risk use cases. The risk assessment process should incorporate input from individuals external to the team that developed or deployed the AI solution to avoid bias arising from the evaluation of team members' own work product.
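To make the rating step concrete, here is one hypothetical way an evaluation team might score each risk category on likelihood and impact and route a use case to a streamlined or full review track. The 1-5 scales, the likelihood-times-impact score, and the threshold are illustrative assumptions, not a prescribed methodology:

```python
# Hypothetical scoring sketch: rate each risk category 1-5 for likelihood
# and impact, then route the use case based on its worst-scoring category.
def risk_score(likelihood: int, impact: int) -> int:
    if likelihood not in range(1, 6) or impact not in range(1, 6):
        raise ValueError("ratings must be integers from 1 to 5")
    return likelihood * impact

def review_track(ratings: dict, threshold: int = 9) -> str:
    """Streamline approval only when every category scores below the threshold."""
    worst = max(risk_score(l, i) for l, i in ratings.values())
    return "streamlined review" if worst < threshold else "full review"

# Example ratings for a low-stakes internal use case:
ratings = {
    "privacy": (2, 3),               # likelihood 2, impact 3 -> score 6
    "bias and unfairness": (1, 4),   # score 4
    "legal and regulatory": (2, 2),  # score 4
}
print(review_track(ratings))  # -> streamlined review
```

A single high-impact category (for example, a safety rating of (3, 5)) would push the same use case into the full-review track, which matches the principle above that higher-risk use cases receive closer scrutiny.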
Assessments must be revisited periodically to reflect real-world experience; changes to the Gen AI solutions, use cases, and risks; and developments in the legal and regulatory landscape. Accountability at the executive level for AI risk-related decisions, and effective oversight of AI risk management both by management representatives and an organization's board, are also essential components of AI governance.

CONCLUSION

Organizations need streamlined processes for assessing Gen AI solutions. By applying a thoughtful and consistent approach to assessing and managing the use of Gen AI, organizations can accelerate adoption of lower-risk use cases that provide maximum benefits to the organization while ensuring that higher-risk use cases are more carefully vetted and adopted only when coupled with appropriate guardrails and oversight.
Stephanie Sharron is a partner at Morrison & Foerster LLP.


