by Elias Aidun, Associate Member, University of Cincinnati Law Review Vol. 93
I. Introduction
Healthcare systems around the world are constantly improving and evolving to achieve the ‘quadruple aim’ in healthcare: improving population health, improving the patient’s experience of care, enhancing the caregiver experience, and reducing the cost of care.[1] However, achieving the ‘quadruple aim’ is increasingly difficult given aging populations, the growing burden of chronic disease, and the rising costs of healthcare.[2] Additionally, in 2020, the COVID-19 pandemic highlighted the shortage of the primary care workforce and the disparities in available primary care.[3]
While a simple solution for healthcare systems to achieve the ‘quadruple aim’ does not exist, the application of artificial intelligence (“AI”) in healthcare could address some of these challenges. As technology companies continue to develop and refine AI programs, the integration of AI into healthcare systems becomes more of a reality. In 2019, Microsoft CEO Satya Nadella said, “AI is perhaps the most transformational technology of our time, and healthcare is perhaps AI’s most pressing application.”[4] In the same year, Google Health stated, “We think that AI is poised to transform medicine, delivering new, assistive technologies that will empower doctors to better serve their patients.”[5] Although AI could push healthcare systems closer to achieving the ‘quadruple aim,’ its introduction will inevitably raise new legal challenges and other critical issues.
This article explores the potential challenges and issues that may arise from the application of AI in healthcare systems. Part II provides background on how AI technology is generated and on the applications of AI in healthcare systems. Part III discusses issues of liability, algorithmic bias, the black box phenomenon, and cybersecurity as they relate to AI’s implementation in healthcare systems. Finally, Part IV offers a brief conclusion on how these arising issues should be addressed and proposes possible solutions.
II. Background
At a basic level, AI is the creation of intelligent machines through algorithms, or sets of rules, which allow the machine to mimic human cognitive functions such as learning and problem-solving.[6] As a result of these cognitive functions, AI systems can anticipate problems, solve issues as they arise, and respond in intentional, intelligent, and adaptive ways.[7] AI’s greatest strength, however, is its ability to learn and recognize patterns and relationships from large, multidimensional datasets.[8] For example, an AI system could potentially interpret a patient’s entire medical history and provide a likely diagnosis or treatment plan.
AI technology can be broken down into several subfields, including machine learning and deep learning.[9] Machine learning enables a machine to learn on its own by analyzing training data, improving its performance over time.[10] The majority of AI advancements seen today are powered by machine learning models, such as those used by Amazon and Netflix.[11] Amazon utilizes machine learning to personalize product recommendations, and Netflix applies machine learning to recommend TV shows and movies.[12] Deep learning models, in turn, are composed of layers of interconnected processing nodes, or neurons.[13] Each layer receives input, processes it, and passes the result on to the next layer.[14] Deep learning networks can perform complex tasks by adjusting the strength of the connections between each layer, enabling deep learning models to recognize patterns in data that are too complex for humans.[15]
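To make this layered structure concrete, below is a minimal sketch of a feed-forward network in Python using NumPy. It is purely illustrative, with toy data and no connection to any medical product: the network learns the XOR pattern by repeatedly adjusting the strengths of the connections between its layers, the same mechanism described above.

```python
import numpy as np

# Toy feed-forward network: two layers of "neurons" whose connection
# strengths (weights) are adjusted until the network reproduces XOR,
# a pattern no single linear rule can capture. Illustration only.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

W1 = rng.normal(size=(2, 8))   # input -> hidden connection strengths
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output connection strengths
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer processes its input and passes it on.
    h = sigmoid(X @ W1 + b1)    # hidden layer
    out = sigmoid(h @ W2 + b2)  # output layer

    # Backward pass: nudge each connection strength to reduce the error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))  # typically approaches [[0], [1], [1], [0]]
```

Even in this toy example, the learned behavior lives in numeric weight matrices rather than in any human-readable rule, which foreshadows the “black box” problem discussed in Part III.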
Within the healthcare field, AI can be employed in countless ways. To highlight a few examples, AI can help improve diagnostic accuracy.[16] In the cardiovascular field, AI-based EKG algorithms for rhythm identification can produce more accurate interpretations than current EKG software.[17] Additionally, in breast disease and cancer care, the use of AI is thought to lead to rapid diagnosis and more detailed evaluation.[18] AI can also assist surgeons during interventions and enable more precise, minimally invasive surgical techniques.[19] More generally, AI can reduce the amount of time that healthcare professionals spend on monotonous tasks, allowing a greater focus on more challenging cases.[20]
While these are just a few examples of AI’s current abilities, researchers have great expectations for AI’s future capabilities. As research on the application of AI in healthcare continues, potential uses include drug discovery, virtual clinical consultation, disease diagnosis, prognosis, medication management, and health monitoring.[21] Within the next five to ten years, as AI continues to develop and improve, researchers propose that AI algorithms will be able to combine imaging, electronic health, behavioral, and pharmacological data to provide robotic-assisted therapies.[22] In the long term, researchers propose that healthcare systems will shift from the traditional one-size-fits-all form of medicine to a preventative, personalized, data-driven disease management model that achieves improved patient outcomes through a more cost-effective delivery system.[23] Ultimately, the goal is to push healthcare systems closer to achieving the ‘quadruple aim.’
III. Discussion
While AI systems could potentially improve patient outcomes and the overall healthcare system, errors will inevitably arise. At some point, AI is likely to make a mistake, and the law will seek to make the injured patient whole through a lawsuit or settlement.[24] To establish a medical malpractice claim, a plaintiff must demonstrate that the physician breached their duty of care to the patient.[25] Typically, this breach of duty is determined by the physician’s failure to comply with medical custom or to act reasonably given the state of medical knowledge at the time.[26] With the introduction of AI systems, however, the liability analysis becomes increasingly complex.
A. AI’s Liability
In the event an adverse outcome involving the use of AI occurs, no definitive answer to the question of liability currently exists.[27] Because AI is a relatively new technology, the applicable tort law has yet to be developed.[28] Nonetheless, scholars and researchers have discussed multiple theories of liability for AI’s use in healthcare settings.[29]
Under general tort law principles, a significant factor is the degree of control the medical professional exercises relative to the AI system.[30] Traditionally, the physician determines a patient’s diagnosis and course of therapy and thus takes full responsibility for the case.[31] In situations where the AI system determines a patient’s diagnosis without the need for a physician’s approval, authors suggest that liability should be placed, at least partially, on the AI developer.[32] However, the same authors also state that AI developers should not be considered liable merely because their AI algorithm or system is unable to prevent harm in all instances.[33] The authors are hesitant to impose excessive liability on AI developers because courts have historically been reluctant to extend product liability to software developers, particularly in the healthcare context.[34] Additionally, the risk of liability for AI developers could delay and discourage innovation and technological development.[35]
Scholars and researchers argue that AI’s liability should be allocated based on the degree of autonomy of the AI system or algorithm.[36] When AI is used to support a decision and the medical professional makes the final determination, the professional would bear the liability risk.[37] Conversely, if an AI algorithm or system acts autonomously, such that it is analogous to an employee, the supervisor or the institution could be vicariously liable.[38] The model of vicarious liability provides that the negligence of an assistant is attributed to the supervisor.[39] This model thus highlights the importance of education and training for healthcare professionals to understand the AI system and its algorithm.[40] However, the characteristics that make AI autonomous and advantageous also make AI incredibly complex and difficult to understand.[41]
B. Complexities of AI
At the root of the legal and liability issues arising from the use of AI tools in healthcare are the characteristics of AI that prevent operators from accessing the process by which AI reaches its results.[42] As AI becomes more autonomous and processes data in completely independent ways, the results it achieves become unknowable to humans, even to the developers.[43] As mentioned above, machine learning enables AI to improve its performance over time, and deep learning AI can recognize patterns that are too complex for humans.[44] Thus, it is inherently difficult to explain why or how AI reaches its conclusions or to identify errors after the fact.[45] This inability to fully understand AI’s decision-making process is described as the “black box phenomenon.”[46] The phenomenon creates complexity, incompleteness, opacity, and unpredictability in AI’s application, and thus undermines the identification of fault for purposes of liability.[47]
In addition to the “black box phenomenon,” algorithmic bias represents another critical issue with AI’s use in healthcare settings.[48] Researchers have observed that AI’s results may be unreliable when its algorithm is trained on data that does not include out-of-distribution or outlier data.[49] Especially in low-income countries with limited healthcare services and coverage, data on certain patient groups is scarce.[50] When AI algorithms are developed with data that does not reflect the context of use, their results may reflect contextual biases.[51] As a result, the inequality of data across geographic areas and patient groups may amplify or create health disparities among marginalized groups, enlarge racial and demographic disparities, and exacerbate inequities in health outcomes.[52]
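A toy simulation can make this concrete. In the following Python sketch, every number, threshold, and group label is invented: a diagnostic cutoff fitted only to Group A’s data is applied to Group B, whose biomarker-to-disease relationship differs, and accuracy collapses for the unrepresented group.

```python
import numpy as np

# Toy simulation of contextual bias: a decision rule "trained" on one
# group's data is applied to a group it never saw. Not a real clinical
# model or dataset; all numbers are fabricated for illustration.

rng = np.random.default_rng(1)

def make_group(n, threshold):
    """Patients with one biomarker; disease present if biomarker > threshold."""
    x = rng.normal(loc=5.0, scale=2.0, size=n)
    y = (x > threshold).astype(int)
    return x, y

x_a, y_a = make_group(2000, threshold=6.0)  # well-represented group
x_b, y_b = make_group(2000, threshold=4.0)  # underrepresented group

# "Training" on Group A only: pick the cutoff that maximizes accuracy on A.
cutoffs = np.linspace(0, 10, 101)
accs = [np.mean((x_a > c).astype(int) == y_a) for c in cutoffs]
best = cutoffs[int(np.argmax(accs))]

acc_a = np.mean((x_a > best).astype(int) == y_a)
acc_b = np.mean((x_b > best).astype(int) == y_b)
print(f"cutoff={best:.1f}  accuracy on A: {acc_a:.0%}, on B: {acc_b:.0%}")
# Accuracy on A is near-perfect; on B it drops sharply, because B's
# relationship between biomarker and disease was never in the training data.
```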
C. Cybersecurity Challenges
In the United States, the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) establishes federal standards for protecting sensitive health information from disclosure without a patient’s consent.[53] Covered entities such as healthcare providers, health plans, and healthcare clearinghouses can be subject to fines and other penalties for disclosing protected health information without patient authorization.[54] However, with the increasing integration of AI in healthcare settings, one of the most pressing concerns about its application is the heightened vulnerability of sensitive health information.[55]
Healthcare data privacy is a critical concern because there is much to lose if it is not protected. Unauthorized access to or breach of sensitive health information can have serious consequences for individuals, such as identity theft, insurance fraud, and medical identity theft when perpetrators use stolen information to acquire prescription drugs or medical treatment.[56] Furthermore, healthcare providers risk losing the public’s faith in medical institutions and could face legal and financial repercussions for data breaches.[57]
As discussed previously in the Background, AI models are generated through machine learning and deep learning by analyzing vast amounts of training data.[58] Training AI algorithms for healthcare applications therefore requires access to large data repositories containing sensitive health information.[59] While this health information is typically de-identified or anonymized so that it is not linked to an individual, it still presents a significant risk.[60] Removing all potentially identifiable information from large datasets is a daunting effort, and it is now clear that even the most rigorous efforts leave a theoretical risk of re-identification.[61] For example, datasets of retinal fundus photographs have been used to predict patients’ gender, age, and cardiovascular risk factors.[62] In addition, it may be possible to identify individuals through linkage with other datasets, especially as patient information accumulates over time.[63] This risk of re-identification is a serious concern for both individuals and healthcare providers, as it could lead to identity theft, insurance fraud, and medical identity theft.
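The linkage risk can be sketched in a few lines of Python. Every record below is fabricated for illustration: a dataset stripped of names still carries quasi-identifiers (ZIP code, birth year, sex) that, matched against an outside roster such as a public voter roll, can single a person out.

```python
# Fabricated "de-identified" medical records: names removed, but
# quasi-identifiers remain.
deidentified = [
    {"zip": "45221", "birth_year": 1954, "sex": "F", "dx": "arrhythmia"},
    {"zip": "45219", "birth_year": 1987, "sex": "M", "dx": "asthma"},
]

# Fabricated outside dataset with names (e.g., a public roster).
public_roster = [
    {"name": "J. Doe", "zip": "45221", "birth_year": 1954, "sex": "F"},
    {"name": "A. Roe", "zip": "45219", "birth_year": 1987, "sex": "M"},
    {"name": "B. Poe", "zip": "45219", "birth_year": 1987, "sex": "M"},
]

QUASI_IDS = ("zip", "birth_year", "sex")

def reidentify(records, roster):
    """Yield (medical record, name) pairs where the quasi-identifiers
    match exactly one roster entry, linking the 'anonymous' record
    back to a person."""
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDS)
        matches = [p for p in roster if tuple(p[q] for q in QUASI_IDS) == key]
        if len(matches) == 1:
            yield rec, matches[0]["name"]

for rec, name in reidentify(deidentified, public_roster):
    print(f"{name} -> {rec['dx']}")
# Only the first record links uniquely; the second matches two roster
# entries and stays ambiguous. Larger, richer datasets make unique
# matches far more common.
```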
In addition to the concern of re-identification, there is also the serious threat of data breaches within AI healthcare applications. Data breaches are not a new threat to the healthcare industry; they have been a prevalent risk ever since healthcare institutions adopted technology to store electronic patient information and records.[64] For instance, in August 2013, Advocate Health Care fell victim to a series of data breaches that compromised the data of 4.03 million patients.[65] In this breach, patients’ names, addresses, credit card numbers, clinical information, and health insurance information were compromised.[66] With the implementation and adoption of more AI applications in healthcare settings, the potential for further and more unique dangers involving patient information could prove disastrous. Because many AI applications in healthcare settings must collect a variety of data on patients to provide quality care, they can leave patients’ sensitive health information vulnerable to data breaches.
The advantages of AI present a double-edged sword for healthcare providers and the sensitive health information they hold. While AI algorithms can improve diagnostic accuracy, provide rapid diagnoses, and save time for health professionals, malicious actors can use AI to craft flawless phishing emails.[67] Members of the workforce who receive security awareness training are often able to spot phishing emails through common red flags, such as grammatical mistakes or brevity.[68] With AI, however, phishing emails lack many of the red flags that workers are taught to look for and are written in perfect English, without spelling or grammatical errors.[69] Furthermore, AI algorithms can organize stolen personal information to create highly specific, targeted phishing emails.[70] With AI tools capable of writing flawless phishing emails, these messages have an increased chance of evading email security gateways and are more likely to fool employees.
The adoption of AI by cybercriminals has led to an increase in healthcare data breaches, and such attacks will most likely be conducted in greater numbers in the future. The Department of Health and Human Services (“HHS”) maintains a Data Breach Portal, which records data breaches affecting 500 or more individuals reported by healthcare providers, health plans, healthcare clearinghouses, and business associates subject to the HIPAA Breach Notification Rule.[71] In 2020, the HHS Office for Civil Rights received 609 notifications of data breaches affecting 500 or more individuals, along with 63,571 notifications of data breaches affecting fewer than 500 individuals.[72] Of the 609 larger breaches, more than 400 used phishing emails as the initial access vector.[73]
As cybercriminals continue to use AI in phishing attempts, the concern is not only the increased sophistication of the attacks, but also how AI has lowered the bar for cybercriminals to conduct attacks and gain access to sensitive healthcare data.[74] Before AI, cybercriminals had to spend time carefully researching potential targets and crafting emails with no guarantee of success. With AI and large language model tools like ChatGPT, however, cybercriminals can automate much of this work, making the process more efficient and accurate.[75] Thus, phishing attacks on healthcare providers are likely to be conducted in far greater numbers and with more sophistication.
D. Misuse of AI by Healthcare Providers
Cybercriminals are not the only ones who have abused the applications of AI in healthcare settings. For example, a health insurance company allegedly used a faulty AI algorithm to deny elderly patients coverage for extended post-acute care and treatments deemed necessary by their doctors.[76] In November 2023, a class action lawsuit was filed against UnitedHealth over its use of an AI algorithm that allegedly denied elderly patients care owed to them under Medicare Advantage plans.[77] In the lawsuit, the plaintiffs claim that the allegedly defective AI model enabled UnitedHealth to prematurely and in bad faith discontinue payments to its elderly beneficiaries, causing them medical or financial hardship.[78] The families in the class action accuse UnitedHealth of using the faulty AI algorithm to deny claims as part of a financial scheme to collect premiums without paying coverage to beneficiaries.[79] The families believe UnitedHealth does this because the vast majority of policyholders will not appeal the algorithm’s decision, leaving elderly beneficiaries to pay out-of-pocket costs or abandon the remainder of their prescribed care.[80]
In June 2023, the American Medical Association applauded the use of AI to “speed up the prior authorization process,” but also called for health insurers to ensure human review before denying beneficiaries care.[81] Thus, while healthcare providers and health insurance companies can use AI applications to improve diagnostic accuracy, provide rapid diagnoses, expedite authorization processes, and improve overall patient outcomes, there are serious concerns that AI may be applied in unethical ways.
IV. Conclusion
The application of AI has tremendous potential to transform the healthcare system, but it simultaneously opens the door to unique challenges and issues. Currently, no specific regulatory pathway for AI-based technologies exists in the United States; instead, the Food and Drug Administration (“FDA”) evaluates them under the existing regulatory framework for medical devices.[82] Furthermore, in January 2021, the FDA published the “Artificial Intelligence and Machine Learning Software as a Medical Device Action Plan,” which outlined actions to properly oversee AI medical devices, including good machine learning practices and methods for eliminating algorithmic bias.[83] However, this existing framework focuses on AI tools that support physicians’ decisions, rather than autonomous AI tools, and on holding AI developers accountable for the real-world performance of their AI systems.[84]
As AI continues to improve and rapidly expand across healthcare settings, answers regarding the allocation of liability, algorithmic bias, the “black box phenomenon,” and cybersecurity are becoming increasingly necessary.[85] To address some of these issues, AI systems will need to explain to medical professionals how they arrived at their conclusions by providing the evidence that underlies their reasoning.[86] This approach will allow medical professionals to refuse an AI’s suggestion when they believe an error has occurred, providing a basis for appropriately allocating liability.[87] Nonetheless, future legislation must outline the contours of medical professionals’ responsibility while also balancing professionals’ ability to influence and correct AI.[88]
[1] Donald Berwick et al., The triple aim: care, health, and cost, 27 Health Affairs 759 (2008), https://pubmed.ncbi.nlm.nih.gov/18474969/.
[2] Id.
[3] Matt McNeill, Extraordinary Impacts on the Healthcare Workforce: COVID-19 and Aging, Del. J. of Pub. Health (Dec. 1, 2022), https://pmc.ncbi.nlm.nih.gov/articles/PMC9894049/.
[4] Randy Bean, Can AI Be Applied to Revolutionize Healthcare and Medical Outcomes?, Forbes (May 6, 2024), https://www.forbes.com/sites/randybean/2024/05/06/can-ai-be-applied-to-revolutionize-healthcare-and-medical-outcomes/.
[5] Id.
[6] John McCarthy, What is artificial intelligence? 2 (2004), http://cse.unl.edu/~choueiry/S09-476-876/Documents/whatisai.pdf.
[7] Shukla Shubhendu & Jaiswal Vijay, Applicability of Artificial Intelligence in Different Fields of Life, 1 Int’l J. of Sci. Eng’g and Rsch. 28, 28-35 (2013).
[8] Id.
[9] Ekin Keserer, The six main subsets of AI: (Machine learning, NLP, and more), Akkio, https://www.akkio.com/post/the-five-main-subsets-of-ai-machine-learning-nlp-and-more (last visited Nov. 3, 2024).
[10] Id.
[11] Id.
[12] Id.
[13] Id.
[14] Id.
[15] Id.
[16] Clara Cestonaro et al., Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review, 10 Frontiers Med., Nov. 2023, at 1.
[17] Cheuk To Chung et al., Clinical significance, challenges and limitations in using artificial intelligence for electrocardiography-based diagnosis, 23 Int’l J. of Arrhythmia, Oct. 2022, at 24.
[18] Emiroglu Mustafa et al., National study on use of artificial intelligence in breast disease and cancer, 123 Bratislava Med. J. 191, 191-196 (2022).
[19] Mathieu Pecqueux et al., The use and future perspective of Artificial Intelligence—A survey among German surgeons, 10 Frontiers Pub. Health, Oct. 2022, at 1.
[20] Cestonaro, supra note 16.
[21] Junaid Bajwa et al., Artificial intelligence in healthcare: transforming the practice of medicine, 8 Future Healthcare J. 188 (2021).
[22] Id.
[23] Id.
[24] Jonathan Mezrich, Demystifying Medico-legal Challenges of Artificial Intelligence Applications in Molecular Imaging and Therapy, 17 PET Clinics 41 (2022).
[25] Agustina Saenz et al., Autonomous AI systems in the face of liability, regulations and costs, 6 npj Digit. Med. 185 (2024).
[26] Id.
[27] Cestonaro, supra note 16.
[28] Id.
[29] Id.
[30] Joseph Sung & Nicholas CH Poon, Artificial intelligence in gastroenterology: where are we heading?, 14 Frontiers Med. 511 (2020).
[31] Id.
[32] Cestonaro, supra note 16.
[33] Mezrich, supra note 24.
[34] Id.
[35] Id.
[36] Jonathan Mezrich, Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy, 219 Am. J. of Roentgenology 1 (2022).
[37] Id.
[38] Id.
[39] Id.
[40] Emanuele Neri et al., Artificial Intelligence: Who is responsible for the diagnosis?, 125 La Radiologia Medica 517 (2020).
[41] Cestonaro, supra note 16.
[42] Id.
[43] Id.
[44] Keserer, supra note 9.
[45] Cestonaro, supra note 16.
[46] Id.
[47] Id.
[48] Diego López et al., Challenges and solutions for transforming health ecosystems in low- and middle-income countries through artificial intelligence, 9 Frontiers Med., Dec. 2022, at 1.
[49] Id.
[50] Id.
[51] Cestonaro, supra note 16.
[52] George Bazoukis et al., The inclusion of augmented intelligence in medicine: A framework for successful implementation, 3 Cell Reports Med., Jan. 2022, at 1.
[53] Health Insurance Portability and Accountability Act of 1996 (HIPAA), Ctrs. for Disease Control & Prevention, https://www.cdc.gov/phlp/php/resources/health-insurance-portability-and-accountability-act-of-1996-hipaa.html (last visited Nov. 22, 2024).
[54] Id.
[55] Karen Cabuyao, Artificial intelligence and cybersecurity in healthcare, Int’l Hosp. Fed’n (Oct. 3, 2023), https://ihf-fih.org/news-insights/artificial-intelligence-and-cybersecurity-in-healthcare/.
[56] Soumit Roy, Privacy Prevention of Health Care Data Using AI, 37 J. of Data Acquisition and Processing 769, 769-771 (2022).
[57] Id.
[58] Keserer, supra note 9.
[59] Laura Cascella, Artificial Intelligence Risks: Data Privacy and Security, MedPro Group, https://www.medpro.com/artificial-intelligence-risks-privacysecurity (last visited Nov. 22, 2024).
[60] Id.
[61] Luc Rocher et al., Estimating the success of re-identifications in incomplete datasets using generative models, 10 Nature Commc’ns 3069 (2019).
[62] Ryan Poplin et al., Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning, 2 Nature Biomedical Eng’g 158 (2018).
[63] Id.
[64] Steve Alder, Healthcare Data Breach Statistics, The HIPAA Journal (Dec. 23, 2024), https://www.hipaajournal.com/healthcare-data-breach-statistics/.
[65] Edward Kost, 14 Biggest Healthcare Data Breaches [Updated 2024], UpGuard, https://www.upguard.com/blog/biggest-data-breaches-in-healthcare (last visited Nov. 22, 2024).
[66] Id.
[67] Steve Alder, Why AI Will Increase Healthcare Data Breaches, The HIPAA Journal (Oct. 12, 2023), https://www.hipaajournal.com/editorial-why-ai-will-increase-healthcare-data-breaches/.
[68] Id.
[69] Id.
[70] Id.
[71] Id.
[72] Id.
[73] Id.
[74] Id.
[75] Id.
[76] Complaint at 1-5, Est. of Gene B. Lokken v. UnitedHealth Grp., Inc., No. 0:23-cv-03514 (D. Minn. Nov. 14, 2023).
[77] Elizabeth Napolitano, UnitedHealth uses faulty AI to deny elderly patients medically necessary coverage, lawsuit claims, CBS News (Nov. 20, 2023), https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/.
[78] Id.
[79] Id.
[80] Id.
[81] Id.
[82] Urs Muehlematter et al., Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis, 3 Lancet Digit. Health 195 (2021).
[83] Artificial Intelligence and Machine Learning in Software as a Medical Device, Food and Drug Admin., https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device (last visited Nov 3, 2024).
[84] Kavitha Palaniappan et al., Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector, 12 Healthcare 562 (2024).
[85] Cestonaro, supra note 16.
[86] Id.
[87] Id.
[88] Neri, supra note 40.
Cover Photo by Hush Naidoo on Unsplash.
