With the rapid advancement of generative artificial intelligence (genAI), Large Language Models (LLMs) have permeated all aspects of human life—ranging from the realms of communication and academia to transportation and defense—at unprecedented speed. Among these, the medical domain stands out as one of the most significant areas into which LLMs have expanded their reach, with applications such as diagnosis.
Alongside the slew of medical advancements they bring, however, it is crucial not to overlook the ethical challenges that accompany these innovations. For the sake of our safety and well-being, these concerns must be proactively identified and addressed, whether through mitigation or remediation. In this regard, philosophical research plays an indispensable role in guiding progress along a path that upholds human values and promotes societal good.
With this sentiment at heart, on April 9, Seoul National University's Institute of Philosophy invited Professor Matthew J. Dennis (Ethics and Technology, Eindhoven University of Technology) to share his insights on the ethics of generative AI in healthcare and medicine. Professor Dennis, whose research focuses on how emerging technologies—including artificial intelligence—challenge our definitions of fairness, autonomy, well-being, and creativity, also serves as co-director of the Eindhoven Center for Philosophy of Artificial Intelligence and as a Senior Fellow at the Ethics of Socially Disruptive Technologies research consortium.
Professor Matthew J. Dennis sharing his insights on the ethics of generative AI in healthcare and medicine
To begin, Professor Dennis posed three central questions that framed the direction of his talk: How do emerging technologies disrupt existing philosophical concepts? How do they create ethical challenges? And more specifically, does the use of AI for health and medicine create new ethical challenges that did not exist before?
According to Professor Dennis, AI currently contributes to the transformation of medicine in at least two significant ways. The first involves advancing medical research, where, despite the potential for technical deskilling as tasks are outsourced to modern technologies, the ethical risks remain relatively negligible. The second contribution—much more ethically contentious—involves the creation of new tools for medical transcription, triage, and diagnosis. It is this dimension of medical innovation that Professor Dennis deems essential to examine.
Reliability, Responsibility, and Privacy
The ethical challenges posed by new AI tools can be broadly categorized into three main areas: reliability, responsibility, and privacy.
AI diagnostic services, which listen in on patient consultations, must produce highly accurate notes; otherwise, there may be severe medical repercussions. However, excessive precision or over-specification may bring about its own set of issues, such as identifying non-existent symptoms in patients. This hinders the diagnostic process for doctors, who must review the extensive notes generated, and causes unnecessary anxiety in patients awaiting their results. Without careful oversight in their development and implementation, these AI tools may prove unreliable in use.
Proceeding further, one must acknowledge that the margin of human error in patient consultations is not insignificant, with current accuracy rates averaging 70%. AI diagnostic services can be valuable in reducing misdiagnoses, increasing accuracy by a substantial margin of 20%. Yet, even with a smaller margin of error, when misdiagnoses do arise, responsibility no longer rests on the shoulders of an individual doctor but is instead distributed across multiple entities—most likely large tech companies. This diffusion makes it much more difficult to pinpoint who is accountable for a particular mistake, especially when the consequences are catastrophic and irreversible. Challenges also arise in the realm of litigation, as patients may find themselves in the daunting position of having to sue companies with extensive legal teams.
The issue of responsibility also extends to the process of triaging. The clientele of the tech companies that provide these AI tools typically consists of medical providers, who select the triage protocol they deem most fit for the nature of their services. This, however, opens the door for potential bias, as the standards of prioritization in treatment may be influenced by factors such as patients’ insurance premiums. In such cases, patients risk being denied their fundamental right to fair and equitable healthcare.
Last, and perhaps most frequently cited in discussions of AI in medicine, is the concern over the privacy of sensitive data. While these AI systems may not directly store sensitive data, the underlying models are trained on such data and may reproduce sensitive information when prompted. For instance, when asked for specific photographs, some LLMs have been known to return actual photographs from their training data instead of synthesizing new ones from the prompt. The risks of data leakage and breaches of patient confidentiality are thus a present reality.
On a more optimistic note, however, AI holds immense, ever-growing potential for assimilating vast amounts of medical data for consultations, allowing human doctors to more readily assist their patients in diagnosis and recovery. As Professor Dennis noted, “Though each person has a medical history particular to them, we share a huge amount of DNA. What goes wrong in my body may go wrong in other people’s bodies as well.” In other words, this rich repository of medical data not only enhances individual healthcare but benefits the collective human community. To fully realize this potential, we must continue to uphold ethical standards and rigorously address the risks that may compromise them. Only then can we ensure that technological advancements in medicine remain on a humanistic path, preserving our health, dignity, and autonomy.
All in all, events like this talk reflect the Institute of Philosophy’s enduring commitment to engaging with tangible, real-world issues through a philosophical lens. The regular guest lectures exemplify the Institute’s mission to “investigate the various essential aspects of human life, such as politics, economics, society, and culture, and present a framework for the proper understanding and evaluation of such phenomena.” Thus, these efforts continually affirm the fundamental relevance of philosophical study in our rapidly evolving world.
Written by Hee Seo Lee, SNU English Editor, heeseolee@snu.ac.kr