
Technology, Health Care & Hospital Law

May 16, 2024

Healthcare entities must be cautious when using AI to avoid discrimination and maintain clinical oversight

Healthcare companies must ensure clinical oversight, especially where AI tools assist with functions traditionally performed by licensed practitioners. They must also be wary of biases that could be inadvertently built into machine-learning algorithms and comply with Section 1557 nondiscrimination requirements.

Alice Hall-Partyka

Counsel, Crowell & Moring LLP


Artificial intelligence (AI) can tackle many of today’s challenges, and the healthcare space is no exception. Tools leveraging machine-learning technology have the capacity to improve care and health outcomes, streamline processes, and ease administrative burdens. However, as the opportunities and capabilities of these tools expand, healthcare entities must tread carefully to ensure that the tools do not adversely impact patients and that their use complies with federal and state regulatory schemes that were not designed with AI in mind.

AI tools can be used in health care in myriad ways. Algorithms can help providers assess a patient’s risk of specific health outcomes and develop more accurate diagnoses. Providers can use AI models to automate medical records and administrative functions, and machine-learning tools can help payers with utilization management processes. While the use of algorithms in health care is not new, the complexity of the AI tools available to healthcare companies is increasing. Sophisticated AI tools increasingly rely on “black box” models, in which machine-learning algorithms are trained to make predictions from input data but the reasoning behind any given prediction is unknowable. As the technology advances in this manner, the regulatory and policy questions facing AI in health care become more complicated, including as they pertain to professional scope of practice and nondiscrimination laws.

Healthcare companies looking to use any AI tools to aid with clinical decision-making must assess the extent to which the tools can be relied upon, particularly where the tools assist with functions traditionally performed by licensed practitioners. State scope of practice laws prevent unlicensed individuals and, in many states such as California, corporations from practicing medicine, and healthcare entities need to think carefully about how to implement tools in ways that supplement, rather than replace, clinical expertise. See Cal. Bus. & Prof. Code §§ 2400, 2052. This need for adequate clinical oversight applies not just to practitioners using AI to care for patients, but also to health plans seeking to incorporate AI tools into utilization management functions, where state laws require certain medical necessity determinations to be made by licensed physicians and healthcare professionals. See Cal. Health & Safety Code § 1367.01(e). As a general matter, the simpler the tool, the easier it may be for healthcare entities to ensure clinical oversight. As more advanced tools are incorporated into clinical practice, and predictions are based on complex models that practitioners cannot verify, it will become even more important for providers and plans to have policies that provide guardrails around the use of the technology and ensure that licensed practitioners retain ultimate control over clinical decisions.

In addition, healthcare companies need to be wary of biases that could be inadvertently built into machine-learning algorithms. The Affordable Care Act, as passed in 2010, contains nondiscrimination requirements that apply to federally funded healthcare programs and activities, including many providers and health plans. 42 U.S.C. § 18116. For some time, these entities have grappled with the relationship between AI technology and these nondiscrimination requirements, known as Section 1557, particularly when AI tools are purchased from external developers or use black box models, for which it may be difficult to confirm that decisions are free of unintended bias. In April, the U.S. Department of Health and Human Services (HHS) confirmed in new regulations that covered entities must take steps to ensure that tools used to support clinical decision-making are not discriminatory. 89 Fed. Reg. 37,522 (2024). Specifically, HHS will require covered entities to make reasonable efforts to identify uses of patient care decision support tools in their health programs or activities that employ input variables or factors measuring race, color, national origin, sex, age, or disability, and to mitigate the risk of discrimination resulting from any such tool’s use. In explaining the “reasonable efforts” standard, HHS acknowledged that it may not always be possible to eliminate the risk of discriminatory bias, provided examples of what covered entities should be doing to identify discrimination, and enumerated factors that it will use to determine, on a case-by-case basis, whether a covered entity complies with these requirements. While this guidance provides some clarity about how AI tools fit within the Section 1557 framework and leaves covered entities with some flexibility, healthcare companies will likely continue to have questions about how to mitigate discrimination in compliance with these requirements, particularly as tools become more advanced.

The legal questions relating to professional licensing and nondiscrimination just touch the surface of what healthcare entities need to consider when adopting AI technology. Federal and state agencies are actively promulgating regulations and issuing guidance on the wide-ranging scope of laws that impact the use of AI in health care, while legislators continue to consider and pass laws that govern this space. In October 2023, the Biden Administration released an Executive Order focused on artificial intelligence that requires HHS to take various actions to ensure the safe and responsible deployment and use of AI in the healthcare and human services sectors. White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023). In response to that Executive Order, HHS has formed an AI Task Force that, within a year of its formation, must develop a strategic plan for the responsible deployment and use of AI and AI-enabled technologies in the health and human services sectors.

In addition to new guidance, healthcare companies should anticipate heightened enforcement and litigation in this area. The U.S. Department of Justice recently announced a record-breaking number of settlements and judgments under the False Claims Act for 2023 (with two-thirds of the settlement and judgment amounts pertaining to health care), and, as reported in Bloomberg Law in January, the agency is now focused on investigating the role generative AI may play in facilitating violations. For this reason, it is especially important that healthcare companies think through the potential regulatory implications of any new AI technology and establish necessary safeguards before implementation.
