Artificial Intelligence and Reimagining Patient-Physician Relationships
Robyn Gaier, PhD
Topic: This presentation focuses on how patient-physician relationships must continue to evolve as artificial intelligence (AI) becomes more embedded in the practice of providing health care, particularly in the United States. Shifts in the dynamics of patient-physician relationships are not new. For example, in the twentieth century, the strict paternalism of ‘doctors know best’ gave way to more patient autonomy in health care decision-making. One key factor in this shift was the accessibility of information. It is now our interaction with information systems and processes that necessitates a further shift in patient-physician relationships. Conclusion: If utilizing more advanced AI in healthcare settings is to improve the quality of care, then patient-physician relationships must further evolve to safeguard both the expertise of physicians and patient autonomy. The Impact of this Presentation: Two ways in which the growing utilization of AI in healthcare decisions could have the unintended effect of undermining trust in patient-physician relationships will be examined and discussed. First, the lack of complete transparency in AI datasets and algorithms complicates the role of physicians as experts. How much trust should physicians place in AI when it deviates from their initial assessments, or when they are unable to explain how its information was derived? Second, while AI can add much-needed efficiency to providing health care, it can also unintentionally contribute to a sense of depersonalization among patients. Physicians and patients alike must be intentional about taking the time to process information and to have genuine conversations. Additionally, physicians must not neglect their obligation to listen to their patients and to understand each patient’s values and priorities, which are not simply outcomes-based.
Ironically, the increased efficiency that is possible with the incorporation of more advanced AI threatens to short-change these vital aspects of communication, which are fundamental to providing quality health care. Why This Topic is Important: AI is changing how we interact with information and, in turn, with each other. AI certainly has the potential to improve the quality of health care, but only if AI is utilized and implemented in ways that do not undermine the fundamental trust at root in patient-physician relationships.
Ethically and Practically Improving Prenatal Care for the Homeless Community
Eileen Phillips, DBE, HEC-C
Pregnancy increases comorbidities and mortality within the homeless population. This paper explores the current socioeconomic and structural barriers that prevent access to prenatal care for the homeless. The resulting recommendation is that improving prenatal health care through social support, trust-building with medical providers, and expanded access to care improves health outcomes. Research data, support models, and interviews with homeless individuals substantiate that improved resources have beneficial outcomes such as optimizing pregnancy outcomes, moving individuals out of homelessness, and reducing strain on the healthcare system. Currently, higher occurrences of poor health result from inadequate shelter, inconsistent healthcare, and the lack of a simplified method for attaining social services. These issues are overlooked by society, trapping homeless pregnant people in a cycle of violence and oppression that results in mistrust and reduced autonomy. Restoring the health and dignity of these individuals would aid in breaking the cycle of homelessness, poverty, and social injustice, thereby restoring them as participating members of society.
Reimagining the Physician-Patient Relationship for the Age of AI
Nir Ben-Moshe, PhD
Much has been written about artificial intelligence (AI) in medicine in general and about AI and the physician-patient relationship in particular. I want to highlight two approaches in much of the work on AI in medicine and resist them. First, most prior work on the particular ethical challenges pertaining to the adoption of AI in medicine has addressed them from the perspective of a more general normative theory, often in the form of guidelines promulgated by both medical and technical organizations. This approach is suited to producing admonitions intended to limit certain undesirable outcomes or actions from a broader societal perspective. Second, sometimes a dilemma is presented in the literature in the form of substitutionism versus extensionism. According to substitutionists, AI will surpass physicians in the performance of key clinical tasks, such as diagnosis, prognosis, and treatment plans, and so will eventually make physicians obsolete. According to extensionists, AI will not necessarily replace physicians but will simply extend and improve on their capabilities.
The problem with the first approach is not only that there may be disagreement about general normative theories, but also that the guideline-promulgating approach may be inefficacious and incapable of providing comprehensive responses to applied ethical concerns beyond mere admonitions. More importantly, this approach does not seriously consider the relations between AI on the one hand, and the craft of medicine and the physician-patient relationship on the other. I believe that these relations must be understood before the related ethical challenges can receive satisfactory treatment. The problem with the second approach is that it offers a false dilemma: AI will neither necessarily substitute for physicians nor merely extend them. In lieu of these options, I will argue that AI can and should transform the very nature of the craft of medicine and the physician-patient relationship. It can do so by facilitating the realization of what has been considered—arguably for many centuries, if not millennia—a normative ideal of this craft and of this relationship, albeit in a novel way. In other words, my aim is to offer a novel and comprehensive way of thinking about how AI can and should affect medical practice; I do so by offering an account of how AI can and should bring about a new version of a well-known ideal of the craft of medicine and the physician-patient relationship, thereby transforming them both.
I will argue that, in analogy to Marx’s views about the emergence of a utopia that is partly brought about by technological advancement, and in which people will be freed from labor to pursue more meaningful activities, AI can and should bring a golden age to medicine. I understand this claim in the context of various interpretative and deliberative models that have been offered of the physician-patient relationship. More specifically, the kind of utopia I have in mind is one in which physicians and patients are freed to focus, as equals, on the values within their relationship. And even more specifically, I make a case that AI can and should allow for an actualization of the ideal of the physician as friend to the patient. I then make a case that AI can and should allow for a return to the ideal of the physician as craftsman, who works in accordance with the craft’s end and values. Furthermore, and again in analogy to Marx’s views, I discuss how this new understanding of the physician-patient relationship can reduce physician alienation. Here I have in mind not merely the old kind of alienation in which society may sometimes require the physician to be a mere ‘technician’, but also newer kinds of alienation that are introduced specifically by AI: alienation from one’s own knowledge and skills and, given a responsibility gap, alienation from one’s agency. I conclude with more practical considerations and worries.
Normothermic Regional Perfusion: The Ethical Dimension
Nir Ben-Moshe, PhD
Transplant surgeons are allowing terminal patients to die—with patient consent—then restarting their hearts while clamping off blood flow to their brains. This procedure, “normothermic regional perfusion with controlled donation after circulatory death” (NRP), allows surgeons to remove organs for donation from bodies with heartbeats. My aim in this paper is two-fold. First, I show that one of the main ethical concerns with NRP, the concern that the physician is intending to kill the patient, can be neutralized. Second, I show that, nevertheless, there is something to this concern if it is understood in the context of the good of the patient as the end of medical practice. The upshot is that NRP might not be ethically acceptable, all things considered.
I first draw an analogy to Dan Brock’s discussion of the difference between intentional killing and allowing to die. Brock argues that euthanasia, for example, is an act of intentional killing, but that this is not what bears on its moral status. Rather, we should ask whether the act of killing is morally justified. Hence, I examine whether NRP is, all things considered, a morally justified form of killing. I make use of the idea that the good of the patient, as the end of medical practice, includes the patient’s medical good and the patient’s perception of the good, which concerns their values and preferences. I argue that NRP does not advance any component of the patient’s medical good, and that even if the patient consented to the procedure, and so we are respecting their values and preferences, this autonomous choice is not associated with any medical good of the patient.
I discuss several objections to my argument, including the objection that in NRP the physician is dealing with a corpse, and so my talk of the good of the patient and of the physician intentionally killing the patient is moot, since there is no patient. In order to show why, conceptually, this is not the case, I draw an analogy to the case of DNRs. I argue that in NRP, the patient’s heart is restarted as if they don’t have a DNR, but with the intention of inducing brain death, and so killing them, as if they do have a DNR. Therefore, NRP is, conceptually, akin to a form of DNR that includes restarting the heart. In both cases, there is a patient with a medical good who is being killed.
The Nurse Practitioner Ethicist & A Philosophy of Advanced Practice
Jesse Kay, MSN, MA, APRN, CPNP-AC, CCRN
Nursing began as a social practice within the homes of families [1]. It advanced into a profession and today has many advanced practice roles. One of these roles, the nurse practitioner (NP), originated in 1965 when pediatric nurse Loretta Ford and pediatrician Henry Silver sought to reach the underserved children in their community [2]. In today’s society, the NP role has expanded to include patient care in inpatient settings through evaluating, diagnosing, treating, prescribing, and promoting health.
Another, more recent, advanced nursing role, the nurse ethicist, has evolved from challenges at the bedside necessitating advanced training in ethical reasoning, beyond what is typical for nurses, to resolve ethical dilemmas and address moral distress. Some NPs have obtained education in ethical reasoning and moral decision-making to assist in ethics. Although nurse ethicists are unique and can fulfill roles such as clinical ethics consultants, the NP role has always been set apart: “NPs are and always will be nurses, but they possess unique skills and have a unique role that sets their profession apart” [3].
These unique skills and training set NPs apart and necessitate that they practice the internal morality of advanced nursing practice [4]. This reveals that the NP “practices the internal morality of nursing and honors the internal morality of medicine by practicing a healing and helping relationship stewarded by [the] telos of healthcare with a primary focus on service to the patient’s good” [4, p5]. This does not mean NPs practice medicine but rather that they uniquely practice from a philosophy of advanced practice [4]. A philosophy of nurse practitioner practice will be put forward alongside the new role of the nurse practitioner ethicist.
Remembering Human Flourishing & Health in Scientific Advancement
Jesse Kay, MSN, MA, APRN, CPNP-AC, CCRN
In discussing the ethical implications of healthcare, science, and technology, bioethics has had the purpose of protecting the vulnerable. It could be said that bioethics focuses on safeguarding the flourishing of human beings as they relate to each other in societies. But what is human flourishing, and how does it relate to advancing medical innovations?
In our pluralistic society, there are varied accounts of flourishing. According to the World Health Organization (WHO), “Health is a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity…[and] The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being…[and] The health of all peoples is fundamental to the attainment of peace and security.” Many ethicists and authors have criticized the WHO for providing too broad a definition of health, one that can be read as assuming that health is the ultimate good, or as the WHO’s implicit account of human flourishing. Levin believes that health cannot be the ultimate good; otherwise it would undermine the direction of medicine and science, which human flourishing should guide.
In positive psychology, there are two major types of flourishing. These are eudaimonic and hedonic flourishing. Hedonic flourishing focuses on promoting pleasure, avoiding pain, and pursuing life satisfaction and a positive mood. Elliot points out that unfortunately many flourishing accounts rely too heavily on the subjective well-being of hedonic flourishing without remembering eudaimonic flourishing. Ekman and Simon-Thomas caution that often relational flourishing is sacrificed to promote hedonic flourishing in positive psychology. They believe that relational well-being is crucial as “people need a sense of belonging, connection, and meaningful contribution to something beyond the self to flourish.” This is where eudaimonic flourishing comes in.
In the Nicomachean Ethics, Aristotle asserts that humanity’s ultimate end is eudaimonia, which is best glossed as human flourishing rather than happiness. Aristotle believed that “the good for man is an activity of the soul in accordance with virtue…in a complete lifetime.” He believed in a certain way of life that Elliot elucidated when he said, “we act this way to acquire, maintain, and enjoy the individual and social goods appropriate to our humanity and which make life attractive, such as health, security, work, family, knowledge, art, and friendship.” In other words, eudaimonic flourishing provides human beings with a purpose and recognizes the importance of relating to other human beings rather than simply focusing on the promotion of pleasure and avoidance of pain. But eudaimonic flourishing also recognizes that the subjective goods of well-being, including health and pleasure, are important to promoting overall human flourishing.
If scientific advancement is to be grounded in and guided by human flourishing and health, then human flourishing and health should be defined. According to Curlin and Tollefsen, “’health’ here is meant in a limited, circumscribed, and embodied sense: what Kass describes as ‘the well-working of the organism as a whole,’ realized and manifested in the characteristic activities of the living body in accordance with its species-specific life-form” [6]. In other words, health is a good that enables human beings to function well in accordance with how their bodies were made. In addition, and in agreement with eudaimonic flourishing, human flourishing is pursuing and enjoying the specific telos of “conform[ing] to what a thing characteristically is.” In our pluralistic society, there are varied philosophical and religious accounts of the purpose of humanity, but most views agree that human relationships are key to flourishing.
This account will remind us that in order to wisely steward our medical innovations and scientific advancements we must remember that health is a subjective good.
Marketing With a Conscience: Reclaiming Integrity in Healthcare
Melissa Fors Shackelford, MBA
In healthcare, trust is everything—and yet marketing practices can sometimes erode it through overpromising, jargon-heavy messaging, or campaigns that overlook equity and inclusion. This keynote takes a candid look at how healthcare marketing can drift from its ethical foundation, from misleading claims to the unintentional stigmatization of vulnerable populations. Melissa shares examples of how hospitals, health systems, and life sciences organizations have restored trust by leading with purpose, transparency, and compassion. Attendees will leave with practical strategies for designing marketing that respects patients and communities, strengthens credibility, and drives growth without compromising integrity.
The Good, The Bad, and The Ugly: How to Write an Ethics of AI Paper
Amitabha Palmer, PhD, HEC-C
Since 2019, I have been publishing and reviewing ethics of AI in medicine articles, and I’ve noticed that submissions tend to have common weaknesses. This session will help aspiring authors avoid these pitfalls and provide practical guidance on writing excellent applied ethics of AI papers. I will address four interrelated areas: scope, technological specification, ethical analysis, and recommendation formulation.
Scope: The first critical decision is whether you are developing ethical theory, analyzing a particular tool in a concrete setting, or conducting empirical research on attitudes toward AI applications. Avoid writing about “AI in medicine” broadly—the category is too heterogeneous. Instead, focus on narrow medical contexts: rather than “AI in cancer care,” consider “AI in oncological palliative care.” Also clarify whether you are examining tool development, testing, or deployment, and whether the technology substitutes for human work, supplements it, augments human capabilities, or performs entirely novel functions. These scope choices have major implications for your ethical analysis.
Technological Specification: Many papers fail to specify ethically relevant technical features—whether a system is closed or continuously learning, operates in real time or via batch processing, has explainability constraints, or relies on particular training data and deployment contexts. Grounded analysis articulates which technical characteristics matter for the specific harms or value conflicts at stake, rather than making generic claims about “autonomy with diagnostic AI.”
Ethical Analysis: Move beyond applying abstract principles like beneficence, autonomy, and justice. Instead, identify the concrete, contextual value tensions your technology creates—what kind of caring is displaced, which decisions lose autonomy, what justice concern is actually at issue. Critically, distinguish ethical problems intrinsic to the technology itself from those arising from deployment policies, human factors, or institutional affordances.
Recommendations: Translate your ethical analysis into concrete, implementable actions—whether addressing technical design, deployment policy, training, governance, or combinations thereof. Recommendations should directly solve the ethical problem you’ve identified rather than gesturing vaguely toward oversight.
Simulating Changing Preferences: On Personalized AI Decision Aids in Medicine
Clint Hurshman, PhD
Digital duplicates are AI systems fine-tuned to simulate the minds—especially the values, preferences, and beliefs—of specific individuals. They may take the form of interactive chatbots that can be queried in real time to acquire information about what a patient values most in medical decision-making. The use of digital duplicates in medical decision-making has been advocated as a way to promote care that accords with patients’ values. For example, when a patient is incapacitated, a digital duplicate could help to inform surrogate decision-makers of the patient’s likely preferences among possible treatment options, or even give consent on their behalf. Or, if a patient has decision-making capacity, a digital duplicate attuned to their values could help them to weigh the costs and benefits of treatments.
These proposals raise a number of practical and ethical concerns. This talk focuses on just one: if a digital duplicate is used to inform decision-making, how should it account for the fact that people change over time? Should it simply model the values the patient currently holds, or that she held the last time she had decision-making capacity? Should it model the values the patient is likely to hold at some future time? Or should it model the values that the patient would hold under some ideal, counterfactual conditions?
This paper aims to answer this question by drawing from the existing bioethical literature on surrogate decision-making. The substituted-judgment standard—according to which surrogate decision-making should aim to decide as the patient would have decided for herself, if she were able—has been criticized on the grounds that it is difficult to predict how a patient would choose for herself, and that not all features of patients’ actual decision-making procedures (e.g. irrational phobias) are appropriate for surrogates to consider. In response to such worries, Stout (2022) argues for a “mixed judgment standard,” according to which decision-makers should not merely aim to predict how the patient would decide for herself, but “take up the evaluative perspective of the patient… and use that perspective to supply content to her own decisional procedure” (p. 543).
I argue that a similar standard should be incorporated into the development of digital duplicates. Creators of digital duplicates are therefore justified in abstracting away preferences and other tendencies in decision-making that are irrational or that the patient does not reflectively endorse. This means that the digital duplicate may differ from the actual patient in key respects, and that surrogates engage in a form of paternalism. However, I argue that this approach nevertheless promotes patient autonomy better than the alternatives.
Beyond the Obvious: Uncharted Ethical Frontiers of AI in Medicine
David Beyda, MD
Artificial intelligence is rapidly transforming healthcare, yet most ethical conversations remain focused on familiar ground: diagnostic accuracy, bias, and privacy. These questions matter, but they overlook emerging challenges that will soon redefine the physician–patient relationship.
This presentation examines five emerging ethical frontiers in AI for medicine. The first is authority. When AI generates care plans with confidence, patients and insurers may defer to the machine over the physician, eroding clinical accountability and shaping how future doctors are trained. The second is empathy. AI systems now simulate compassion so convincingly that patients may feel more comforted by machines than by their doctors, raising questions about the authenticity and trustworthiness of these systems.
The third frontier is counterfactual medicine. AI could show patients the lives they “might have had” if they had chosen differently. Insightful, but potentially harmful if it chains patients to regret. The fourth is triage. In moments of scarcity, AI may quietly embed hidden values, favoring patients deemed more “compliant” or “efficient,” reshaping allocation ethics without public awareness. The fifth is the health biography. Long-term risk profiles can label patients as “noncompliant” or “low survival,” altering how clinicians and patients see themselves, turning predictions into prescriptions.
These scenarios are not distant speculation; they are already emerging. By examining these five frontiers (authority, empathy, counterfactuals, values, and identity), this presentation challenges healthcare leaders to look beyond whether AI works and to ask what it means for medicine, care, and human dignity.
Ethics Analysis for Innovation Involving the Maternal-Fetal Dyad
David Mann, MD, DBe, HEC-C
Physicians proposing innovative fetal interventions have specific ethical responsibilities to the maternal-fetal dyad participants in their experiments. These ethical responsibilities are related to but distinct from the physician’s fiduciary duties/responsibilities to the maternal-fetal dyad patient. This distinction is realized during the informed consent process because a patient is consenting based on the amount of burden/risk they are willing to accept to achieve a direct clinical benefit; whereas, the research participant is consenting to be subjected to minimal burdens/risks, i.e., protected, to achieve an altruistic benefit for future patients. The maternal-fetal dyad participant in an innovative intervention is consenting to be subjected to minimal burdens/risks to achieve a theoretical but likely benefit to their fetus.
Ethics in AI Image Processing – Navigating Challenges and Ensuring Ethical Innovation
Zbigniew Starosolski, PhD
This lecture will explore core principles of AI-based image processing. It covers topics such as model transparency, explainability, common errors, and biases in AI workflows. The session will also examine AI-based image manipulations and their effects on both clinical and pre-clinical decisions. It concludes with best practices for researchers using AI tools in image analysis and offers a list of publicly accessible resources to support these practices.
By the end of this lecture, participants will be able to:
- Define AI Ethics in Imaging: Clearly explain the core principles of AI ethics and how they specifically apply to both clinical and preclinical image processing, integrating established medical ethics (autonomy, non-maleficence, beneficence, and justice).
- Identify Core Ethical Challenges: Describe the three primary ethical issues in AI-driven image analysis:
◦ Data Privacy & Security: Discuss the risks linked to handling sensitive health data and the shortcomings of existing regulations.
◦ Algorithmic Bias: Recognize the sources of bias in AI models and explain their potential effects on clinical diagnoses and preclinical research outcomes.
◦ Transparency & Explainability: Clarify the “black box” problem and emphasize the importance of transparency for trust, accountability, and error validation.
- Analyze the Impact of AI on Practice: Examine the benefits and risks of integrating AI into diagnostic and research workflows, including over-reliance issues and challenges in establishing accountability for AI errors.
- Navigate the Regulatory Landscape: Summarize key principles from emerging national and international guidelines (e.g., FDA, EU, WHO) relevant to AI in healthcare and research, including animal studies.
- Evaluate Ethical Dilemmas through Case Studies: Review real-world examples involving deepfakes, biased facial recognition, and flawed healthcare algorithms to demonstrate how ethical principles can be upheld or violated.
- Apply Ethical Best Practices: Suggest technical, governance, and practical strategies to reduce ethical risks of AI, such as using representative data, Explainable AI, and maintaining a “human-in-the-loop” approach.
- Formulate an Ethical Framework: Synthesize the lecture’s ideas to guide the responsible development and application of AI in preclinical and clinical imaging.
Can Ms. Adriana Smith be Harmed?
Ryan Lemasters
This paper offers an ethical analysis of the case involving Ms. Adriana Smith, which attracted significant media attention beginning in early May of 2025 due, in part, to the controversial decisions made by Emory University Hospital (Atlanta, GA). Although media coverage has been extensive, the case has received little scholarly examination, except for Lewis, Quinn, and Mutcherson (forthcoming). This paper therefore highlights some of the key ethical issues and points of disagreement surrounding the case of Ms. Smith. Section 1 offers a brief overview of the case. Section 2 introduces the key moral principles that should be considered in evaluating it. Section 3 examines whether Ms. Smith can be harmed, which we argue is an important point that has been missing in coverage of the case. Section 4 develops new questions and outlines possible avenues for advancing the analysis of the ethical dimensions of the case. While a full ethical analysis of Ms. Smith’s case is beyond the scope of this paper, the goal is to lay the groundwork for critical reflection on some of the most significant ethical issues and points of disagreement involved.
The Ethics of Hope in Pediatric Care: Supporting Families Without False Reassurance
Natalie Peters, MAPS, BCC
Hope is a central coping mechanism for families navigating pediatric illness, yet it poses ethical challenges when prognosis is poor or uncertain. In pediatric settings, clinicians often struggle to balance honesty with compassion, fearing that frank communication may extinguish hope or damage trust. Conversely, overly optimistic framing or avoidance of difficult truths can lead to false reassurance, moral distress among clinicians, and erosion of trust when outcomes do not align with expectations.
Chaplains frequently engage families at moments when hope is explicitly named, questioned, or redefined. Their unique role allows them to identify how families understand hope, what it protects, and how it influences decision-making. This presentation examines hope not as a binary concept (present or absent), but as an ethically complex, evolving process that requires careful interdisciplinary support.
Implications of an AI Risk-Stratification Tool in Liver-Transplant Selection for Alcohol-Related Liver Disease
Bharat Rai, MSc
Background: Although alcohol-related liver disease (ALD) is a leading indication for liver transplant (LT), evaluation of post-LT alcohol relapse risk still relies on subjective assessment methods. To date, no reliable, objective clinical criteria have been adopted in widespread clinical practice. We sought to evaluate whether an objective, machine-learning (ML) relapse-risk model (“Ethos”) could optimize clinical decision-making and better inform selection criteria.
Methods: We retrospectively analyzed 1,565 adults evaluated for ALD-related LT at three major Mayo Clinic centers between 2000 and 2024. After exclusion of 1,010 patients who did not undergo transplant, the remaining 555 patients received an LT and were categorized according to LT Selection Committee decisions: Directly Approved (DA, n = 217) and Deferred-then-Approved (DtA, n = 338). Heavy relapse following transplant, defined as documented alcohol use, was identified through chart review. Ethos was trained on demographic, clinical, behavioral, and psychosocial variables available at first evaluation and internally validated (70% sensitivity, 60% specificity; area under the curve 0.65). A counterfactual analysis estimated its prospective impact on committee decisions, wait times, and costs.
Results: DtA candidates waited significantly longer for a final LT selection decision than those in the DA cohort (6.3 vs. 3.2 months, p < 0.001). The rate of heavy relapse after LT did not differ significantly between the two cohorts (DA 7.9% vs. DtA 4.3%, p = 0.08), nor did post-LT mortality (DA 6.5% vs. DtA 4.4%, p = 0.32). Inflation-adjusted (2025) monthly costs averaged $19,614 in the pre-transplant period and $17,090 in the post-transplant period. After internal validation, Ethos would have identified 45 (60%) of the patients deferred for alcohol-use concerns as low risk and flagged 12 of the 17 DA patients who later relapsed, enabling targeted intervention.
Conclusion: Liver transplant selection criteria remain dependent on subjective variables. Without stringent objective criteria, patients deferred solely because of alcohol-use concerns experienced significantly longer wait times without a significant reduction in the incidence of heavy relapse, considerably increasing associated healthcare expenditure. Using an ML risk-stratification model to aid clinical decision-making during the LT selection process may enable early and accurate identification of patients at risk for post-transplant alcohol relapse, streamlining healthcare delivery and improving healthcare resource allocation.
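For readers less familiar with the validation metrics reported in the Methods above (sensitivity, specificity, and area under the curve), the following is a minimal illustrative sketch of how such metrics are computed from binary outcome labels and model risk scores. The labels and scores here are made up for illustration; they are not the Ethos data, and the abstract does not describe the model's internals.

```python
# Illustrative computation of binary-classifier validation metrics
# (sensitivity, specificity, AUC). Labels/scores below are hypothetical.

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC as the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case
    (ties count one half) -- the Mann-Whitney formulation."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

if __name__ == "__main__":
    y = [1, 1, 1, 0, 0, 0, 0, 1]            # 1 = relapsed (hypothetical)
    scores = [0.9, 0.4, 0.7, 0.3, 0.6, 0.2, 0.1, 0.8]
    preds = [1 if s >= 0.5 else 0 for s in scores]  # 0.5 threshold
    sens, spec = sensitivity_specificity(y, preds)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} auc={auc(y, scores):.3f}")
```

Note that sensitivity and specificity depend on the chosen decision threshold, while AUC summarizes discrimination across all thresholds, which is why the abstract reports all three.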
Ethics of Polygenic Embryo Screening: Procreative Autonomy, Beneficence, and Choosing for Future Children
Sophia Lindekugel, MD
Despite limited clinical validation and concerns regarding interpretation, generalizability, and equity, polygenic embryo screening (PES) has entered the clinical marketplace. PES is a novel reproductive genetic technology available to patients undergoing in vitro fertilization (IVF). PES uses polygenic risk scores derived from whole-genome sequencing to estimate the risk that an embryo will develop certain diseases or traits later in life. Risk estimates compare an embryo’s relative risk to the absolute population risk, and companies include an embryo ranking generated by proprietary algorithms. Significant ethical and clinical questions related to the use of PES remain unexplored. This qualitative study examines clinician and patient perspectives on PES to better understand how this technology is interpreted and communicated in reproductive care. Semi-structured interviews were conducted with reproductive endocrinologists (27 interviews) and patients who underwent IVF (26 interviews). Interviews were audio-recorded, transcribed verbatim, and analyzed using team-based qualitative methods. Thematic analysis highlights issues related to procreative autonomy, procreative beneficence, the child’s open future, and openness to the unbidden. Ethical tensions emerge around perceived reproductive responsibility, decision-making under conditions of uncertainty, and the impact on a child’s future health. Patients also express conflict over risk interpretation and demonstrate misunderstanding of how PES differs from established forms of preimplantation genetic testing, conflating selection of screened embryos with improved pregnancy rates. These findings underscore the need for empirically informed ethical guidance and counseling frameworks to support clinicians and patients as PES continues to evolve.
Understanding stakeholder perspectives is essential to informing responsible clinical practice, patient counseling, and future professional recommendations regarding polygenic embryo screening.
More presentations will be added as accepted.
