Presentations

Artificial Intelligence and Reimagining Patient-Physician Relationships

Robyn Gaier, PhD

Topic: This presentation focuses on how patient-physician relationships must continue to evolve as artificial intelligence (AI) becomes more embedded in the practice of providing health care, particularly in the United States. Shifts in the dynamics of patient-physician relationships are not new. For example, in the twentieth century, the strict paternalism of ‘doctors know best’ gave way to more patient autonomy in health care decision-making. One key factor in this shift was the accessibility of information. It is now our interaction with information systems and processes that necessitates a further shift in patient-physician relationships.

Conclusion: If utilizing more advanced AI in healthcare settings is to improve the quality of care, then patient-physician relationships must further evolve to safeguard both the expertise of physicians and the autonomy of patients.

The Impact of this Presentation: Two ways in which the growing utilization of AI in healthcare decisions could have the unintended effect of undermining trust in patient-physician relationships will be examined and discussed. First, the lack of complete transparency in AI datasets and algorithms complicates the role of physicians as experts. How much trust should physicians place in AI when it deviates from their initial assessments, or when they are unable to explain how its information was derived? Second, while AI can add much-needed efficiency to health care, it can also unintentionally contribute to a sense of depersonalization among patients. Physicians and patients alike must be intentional about taking the time to process information and to have genuine conversations. Additionally, physicians must not neglect their obligation to listen to their patients and to understand each patient’s values and priorities, which are not simply outcomes-based. Ironically, the increased efficiency that is possible with the incorporation of more advanced AI threatens to short-change these vital aspects of communication, which are fundamental to providing quality health care.

Why This Topic is Important: AI is changing how we interact with information and, in turn, with each other. AI certainly has the potential to improve the quality of health care, but only if it is utilized and implemented in ways that do not undermine the fundamental trust at the root of patient-physician relationships.

Reimagining the Physician-Patient Relationship for the Age of AI

Nir Ben-Moshe, PhD

Much has been written about artificial intelligence (AI) in medicine in general and about AI and the physician-patient relationship in particular. I want to highlight two approaches in much of the work on AI in medicine and resist them. First, most prior work on the particular ethical challenges pertaining to the adoption of AI in medicine has addressed them from the perspective of a more general normative theory, often in the form of guidelines promulgated by both medical and technical organizations. This approach is suited to producing admonitions intended to limit certain undesirable outcomes or actions from a broader societal perspective. Second, sometimes a dilemma is presented in the literature in the form of substitutionism versus extensionism. According to substitutionists, AI will surpass physicians in the performance of key clinical tasks, such as diagnosis, prognosis, and treatment plans, and so will eventually make physicians obsolete. According to extensionists, AI will not necessarily replace physicians but will simply extend and improve on their capabilities.

The problem with the first approach is not only that there may be disagreement about general normative theories, but also that the guideline-promulgating approach may be inefficacious and incapable of providing comprehensive responses to applied ethical concerns beyond mere admonitions. And, more importantly, this approach does not seriously consider the relations between AI on the one hand, and the craft of medicine and the physician-patient relationship on the other hand. I believe that these relations must be understood before the related ethical challenges can receive satisfactory treatment. The problem with the second approach is that it offers a false dilemma: AI will neither necessarily substitute for nor merely extend physicians. In lieu of these options, I will argue that AI can and should transform the very nature of the craft of medicine and the physician-patient relationship. It can do so by facilitating the realization of what has been considered—arguably for many centuries, if not millennia—a normative ideal of this craft and of this relationship, albeit in a novel way. In other words, my aim is to offer a novel and comprehensive way of thinking about how AI can and should affect medical practice; I do so by offering an account of how AI can and should bring about a new version of a well-known ideal of the craft of medicine and the physician-patient relationship, thereby transforming them both.

I will argue that, in analogy to Marx’s views about the emergence of a utopia that is partly brought about by technological advancement, and in which people will be freed from labor to pursue more meaningful activities, AI can and should bring a golden age to medicine. I understand this claim in the context of various interpretative and deliberative models that have been offered of the physician-patient relationship. More specifically, the kind of utopia I have in mind is one in which physicians and patients are freed to focus, as equals, on the values within their relationship. And even more specifically, I make a case that AI can and should allow for an actualization of the ideal of the physician as friend to the patient. I then make a case that AI can and should allow for a return to the ideal of the physician as craftsman, who works in accordance with the craft’s end and values. Furthermore, and again in analogy to Marx’s views, I discuss how this new understanding of the physician-patient relationship can reduce physician alienation. Here I have in mind not merely the old kind of alienation in which society may sometimes require the physician to be a mere ‘technician’, but also newer kinds of alienation that are introduced specifically by AI: alienation from one’s own knowledge and skills and, given a responsibility gap, alienation from one’s agency. I conclude with more practical considerations and worries.

Ethically and Practically Improving Prenatal Care for the Homeless Community

Eileen Phillips, DBe, HEC-C

Pregnancy increases comorbidities and mortality within the homeless population. This paper explores the current socioeconomic and structural barriers that prevent homeless individuals from accessing prenatal care. It recommends improving prenatal health care through social support, trust-building with medical providers, and better access to care, all of which improve health outcomes. Research data, support models, and interviews with homeless individuals substantiate that improved resources have beneficial outcomes such as optimizing pregnancy outcomes, moving individuals out of homelessness, and reducing strain on the healthcare system. Currently, higher rates of poor health result from inadequate shelter, inconsistent healthcare, and the lack of a simplified path to social services. Because society overlooks these issues, homeless pregnant people are trapped in a cycle of violence and oppression that results in mistrust and reduced autonomy. Restoring the health and dignity of these individuals would help break the cycle of homelessness, poverty, and social injustice, thereby restoring them to participating members of society.

How Media Shapes the Visibility of Disabilities and Chronic Illnesses in Health Care

Eileen Phillips, DBe, HEC-C and Adrienne Novick, DBe, MS, HEC-C

Media portrayals that frame disability as burdensome, or as something to be overcome, minimize the lived reality of millions whose conditions are not outwardly apparent. When these narratives seep into medical decision-making, they perpetuate inequitable care, delay care, and undermine trust. Similarly, medical training focuses on subjective visual cues in assessing patients. This reliance on visibility has ethical consequences when extended to disability and chronic illness. Appearance underpins diagnostic reasoning; it also influences how illness is perceived culturally. Both social and medical judgments concerning chronic illness and disability ultimately affect how these individuals are treated. Accurate, diverse media representation, combined with reformed medical education and patient-centered practice, can help reduce harm, combat bias, and increase awareness of invisible disability.

Normothermic Regional Perfusion: The Ethical Dimension

Nir Ben-Moshe, PhD

Transplant surgeons are allowing terminal patients to die—with patient consent—then restarting their hearts while clamping off blood flow to their brains. This procedure, “normothermic regional perfusion with controlled donation after circulatory death” (NRP), allows surgeons to remove organs for donation from bodies with heartbeats. My aim in this paper is twofold. First, I show that one of the main ethical concerns with NRP, the concern that the physician is intending to kill the patient, can be neutralized. Second, I show that, nevertheless, there is something to this concern if it is understood in the context of the good of the patient as the end of medical practice. The upshot is that NRP might not be ethically acceptable, all-things-considered.

I first draw an analogy to Dan Brock’s discussion of the difference between intentional killing and allowing to die. Brock argues that euthanasia, for example, is an act of intentional killing, but that this is not what bears on its moral status. Rather, we should ask whether the act of killing is morally justified. Hence, I examine whether NRP is, all-things-considered, a morally justified form of killing. I make use of the idea that the good of the patient, as the end of medical practice, includes the patient’s medical good and the patient’s perception of the good, which concerns their values and preferences. I argue that NRP does not advance any component of the patient’s medical good, and that even if the patient consented to the procedure, and so we are respecting their values and preferences, this autonomous choice is not associated with any medical good of the patient.

I discuss several objections to my argument, including the objection that in NRP the physician is dealing with a corpse, and so my talk of the good of the patient and of the physician intentionally killing the patient is moot, since there is no patient. In order to show why, conceptually, this is not the case, I draw an analogy to the case of DNRs. I argue that in NRP, the patient’s heart is restarted as if they don’t have a DNR, but with the intention of inducing brain death, and so killing them, as if they do have a DNR. Therefore, NRP is, conceptually, akin to a form of DNR that includes restarting the heart. In both cases, there is a patient with a medical good who is being killed.

The Nurse Practitioner Ethicist & A Philosophy of Advanced Practice

Jesse Kay, MSN, MA, APRN, CPNP-AC, CCRN

Nursing began as a social practice within the homes of families [1]. It advanced into a profession and today has many advanced practice roles. One of these roles, the nurse practitioner (NP), began in 1965 when pediatric nurse Loretta Ford and pediatrician Henry Silver sought to reach the underserved children in their community [2]. In today’s society, the NP role has expanded to include patient care in inpatient settings through evaluating, diagnosing, treating, prescribing, and promoting health.

Another, more recent advanced nursing role, the nurse ethicist, has evolved from challenges at the bedside that necessitate training in ethical reasoning beyond what is typical for nurses in order to resolve ethical dilemmas and assist with moral distress. Some NPs have obtained education in ethical reasoning and moral decision-making to assist in ethics work. Although nurse ethicists are unique and can fulfill roles such as clinical ethics consultants, the NP role has always been set apart: “NPs are and always will be nurses, but they possess unique skills and have a unique role that sets their profession apart” [3].

These unique skills and training set NPs apart and necessitate that they practice the internal morality of advanced nursing practice [4]. This reveals that the NP “practices the internal morality of nursing and honors the internal morality of medicine by practicing a healing and helping relationship stewarded by [the] telos of healthcare with a primary focus on service to the patient’s good” [4, p. 5]. This does not mean NPs practice medicine but rather that they uniquely practice from a philosophy of advanced practice [4]. A philosophy of nurse practitioner practice will be put forward, along with the new role of the nurse practitioner ethicist.

Remembering Human Flourishing & Health in Scientific Advancement

Jesse Kay, MSN, MA, APRN, CPNP-AC, CCRN

In discussing the ethical implications of healthcare, science, and technology, bioethics has been purposed to protect the vulnerable. It could be said that bioethics focuses on safeguarding the flourishing of human beings as they relate to each other in societies. But what is human flourishing, and how does it relate to advancing medical innovations?

In our pluralistic society, there are varied accounts of flourishing. According to the World Health Organization (WHO), “Health is a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity…[and] The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being…[and] The health of all peoples is fundamental to the attainment of peace and security.” Many ethicists and authors have criticized the WHO for providing too broad a definition of health, one that can treat health as the ultimate good, that is, as the WHO’s idea of human flourishing. Levin believes that health cannot be the ultimate good; otherwise it would undermine the direction of medicine and science, which human flourishing should guide.

In positive psychology, there are two major types of flourishing: eudaimonic and hedonic. Hedonic flourishing focuses on promoting pleasure, avoiding pain, and pursuing life satisfaction and a positive mood. Elliot points out that, unfortunately, many accounts of flourishing rely too heavily on the subjective well-being of hedonic flourishing without remembering eudaimonic flourishing. Ekman and Simon-Thomas caution that relational flourishing is often sacrificed to promote hedonic flourishing in positive psychology. They believe that relational well-being is crucial because “people need a sense of belonging, connection, and meaningful contribution to something beyond the self to flourish.” This is where eudaimonic flourishing comes in.

In the Nicomachean Ethics, Aristotle asserts that humanity’s ultimate end is eudaimonia, now best glossed as human flourishing rather than happiness. Aristotle believed that “the good for man is an activity of the soul in accordance with virtue…in a complete lifetime.” He believed in a certain way of life that Elliot elucidated when he said, “we act this way to acquire, maintain, and enjoy the individual and social goods appropriate to our humanity and which make life attractive, such as health, security, work, family, knowledge, art, and friendship.” In other words, eudaimonic flourishing provides human beings with a purpose and recognizes the importance of relating to other human beings rather than simply focusing on the promotion of pleasure and the avoidance of pain. But eudaimonic flourishing also recognizes that the subjective goods of well-being, including health and pleasure, are important to promoting overall human flourishing.

If scientific advancement is to be grounded in and guided by human flourishing and health as guideposts to help humanity, then human flourishing and health should be defined. According to Curlin and Tollefsen, “’health’ here is meant in a limited, circumscribed, and embodied sense: what Kass describes as ‘the well-working of the organism as a whole,’ realized and manifested in the characteristic activities of the living body in accordance with its species-specific life-form” [6]. In other words, health is a good that enables human beings to function well in accordance with how their bodies were made. In addition, and in agreement with eudaimonic flourishing, human flourishing is pursuing and enjoying the specific telos of “conform[ing] to what a thing characteristically is.” In our pluralistic society, there are varied philosophical and religious accounts of the purpose of humanity, but most views agree that human relationships are key to flourishing.

This account will remind us that in order to wisely steward our medical innovations and scientific advancements, we must remember that health is a subjective good.

Marketing With a Conscience: Reclaiming Integrity in Healthcare

Melissa Fors Shackelford, MBA

In healthcare, trust is everything—and yet marketing practices can sometimes erode it through overpromising, jargon-heavy messaging, or campaigns that overlook equity and inclusion. This keynote takes a candid look at how healthcare marketing can drift from its ethical foundation, from misleading claims to the unintentional stigmatization of vulnerable populations. Melissa shares examples of how hospitals, health systems, and life sciences organizations have restored trust by leading with purpose, transparency, and compassion. Attendees will leave with practical strategies for designing marketing that respects patients and communities, strengthens credibility, and drives growth without compromising integrity.

The Good, The Bad, and The Ugly: How to Write an Ethics of AI Paper

Amitabha Palmer, PhD, HEC-C

Since 2019, I have been publishing and reviewing ethics of AI in medicine articles, and I’ve noticed that submissions tend to have common weaknesses. This session will help aspiring authors avoid these pitfalls and provide practical guidance on writing excellent applied ethics of AI papers. I will address four interrelated areas: scope, technological specification, ethical analysis, and recommendation formulation.
Scope: The first critical decision is whether you are developing ethical theory, analyzing a particular tool in a concrete setting, or conducting empirical research on attitudes toward AI applications. Avoid writing about “AI in medicine” broadly—the category is too heterogeneous. Instead, focus on narrow medical contexts: rather than “AI in cancer care,” consider “AI in oncological palliative care.” Also clarify whether you are examining tool development, testing, or deployment, and whether the technology substitutes for human work, supplements it, augments human capabilities, or performs entirely novel functions. These scope choices have major implications for your ethical analysis.
Technological Specification: Many papers fail to specify ethically-relevant technical features—whether a system is closed or continuously learning, operates in real-time or batch processing, has explainability constraints, or relies on particular training data and deployment contexts. Grounded analysis articulates which technical characteristics matter for the specific harms or value conflicts at stake, rather than making generic claims about “autonomy with diagnostic AI.”
Ethical Analysis: Move beyond applying abstract principles like beneficence, autonomy, and justice. Instead, identify the concrete, contextual value tensions your technology creates—what kind of caring is displaced, which decisions lose autonomy, what justice concern is actually at issue. Critically, distinguish ethical problems intrinsic to the technology itself from those arising from deployment policies, human factors, or institutional affordances.
Recommendations: Translate your ethical analysis into concrete, implementable actions—whether addressing technical design, deployment policy, training, governance, or combinations thereof. Recommendations should directly solve the ethical problem you’ve identified rather than gesturing vaguely toward oversight.

Simulating Changing Preferences: On Personalized AI Decision Aids in Medicine

Clint Hurshman, PhD

Digital duplicates are AI systems, fine-tuned to simulate the minds—especially the values, preferences, and beliefs—of specific individuals. They may take the form of interactive chatbots that can be queried in real time to acquire information about what a patient values most in medical decision-making. The use of digital duplicates in medical decision-making has been advocated as a way to promote care that accords with patients’ values. For example, when a patient is incapacitated, a digital duplicate could help to inform surrogate decision-makers of patients’ likely preferences among possible treatment options, or even give consent on their behalf. Or, if a patient has decision-making capacity, a digital duplicate attuned to their values could help them to weigh the costs and benefits of treatments.

These proposals raise a number of practical and ethical concerns. This talk focuses on just one: if a digital duplicate is used to inform decision-making, how should it account for the fact that people change over time? Should it simply model the values the patient currently holds, or that she held the last time she had decision-making capacity? Should it model the values the patient is likely to hold at some future time? Or should it model the values that the patient would hold under some ideal, counterfactual conditions?

This paper aims to answer this question by drawing from the existing bioethical literature on surrogate decision-making. The substituted-judgment standard—according to which surrogate decision-making should aim to decide as the patient would have decided for herself, if she were able—has been criticized on the grounds that it is difficult to predict how a patient would choose for herself, and that not all features of patients’ actual decision-making procedures (e.g. irrational phobias) are appropriate for surrogates to consider. In response to such worries, Stout (2022) argues for a “mixed judgment standard,” according to which decision-makers should not merely aim to predict how the patient would decide for herself, but “take up the evaluative perspective of the patient… and use that perspective to supply content to her own decisional procedure” (p. 543).

I argue that a similar standard should be incorporated into the development of digital duplicates. Creators of digital duplicates are therefore justified in abstracting away preferences and other tendencies in decision-making that are irrational or that the patient does not reflectively endorse. This means that the digital duplicate may differ from the actual patient in key respects, and that surrogates engage in a form of paternalism. However, I argue that this approach nevertheless promotes patient autonomy better than the alternatives.

Beyond the Obvious: Uncharted Ethical Frontiers of AI in Medicine

David Beyda, MD

Artificial intelligence is rapidly transforming healthcare, yet most ethical conversations remain focused on familiar ground: diagnostic accuracy, bias, and privacy. These questions matter, but they overlook emerging challenges that will soon redefine the physician–patient relationship.

This presentation examines five emerging ethical frontiers in AI for medicine. The first is authority. When AI generates care plans with confidence, patients and insurers may defer to the machine over the physician, eroding clinical accountability and shaping how future doctors are trained. The second is empathy. AI systems now simulate compassion so convincingly that patients may feel more comforted by machines than by their doctors, raising questions about the authenticity and trustworthiness of these systems.

The third frontier is counterfactual medicine. AI could show patients the lives they “might have had” if they had chosen differently. Insightful, but potentially harmful if it chains patients to regret. The fourth is triage. In moments of scarcity, AI may quietly embed hidden values, favoring patients deemed more “compliant” or “efficient,” reshaping allocation ethics without public awareness. The fifth is the health biography. Long-term risk profiles can label patients as “noncompliant” or “low survival,” altering how clinicians and patients see themselves, turning predictions into prescriptions.

These scenarios are not distant speculation; they are already emerging. By examining these five frontiers (authority, empathy, counterfactuals, values, and identity), this presentation challenges healthcare leaders to look beyond whether AI works and to ask what it means for medicine, care, and human dignity.

Ethics Analysis for Innovation Involving the Maternal-Fetal Dyad

David Mann, MD, DBe, HEC-C

Physicians proposing innovative fetal interventions have specific ethical responsibilities to the maternal-fetal dyad participants in their experiments. These ethical responsibilities are related to but distinct from the physician’s fiduciary duties/responsibilities to the maternal-fetal dyad patient. This distinction is realized during the informed consent process because a patient is consenting based on the amount of burden/risk they are willing to accept to achieve a direct clinical benefit; whereas, the research participant is consenting to be subjected to minimal burdens/risks, i.e., protected, to achieve an altruistic benefit for future patients. The maternal-fetal dyad participant in an innovative intervention is consenting to be subjected to minimal burdens/risks to achieve a theoretical but likely benefit to their fetus.

Ethics in AI Image Processing – Navigating Challenges and Ensuring Ethical Innovation

Zbigniew Starosolski, PhD

This lecture will explore core principles of AI-based image processing. It covers topics such as model transparency, explainability, common errors, and biases in AI workflows. The session will also examine AI-based image manipulations and their effects on both clinical and pre-clinical decisions. It concludes with best practices for researchers using AI tools in image analysis and offers a list of publicly accessible resources to support these practices.
By the end of this lecture, participants will be able to:

  • Define AI Ethics in Imaging: Clearly explain the core principles of AI ethics and how they specifically apply to both clinical and preclinical image processing, integrating established medical ethics (autonomy, non-maleficence, beneficence, and justice).
  • Identify Core Ethical Challenges: Describe the three primary ethical issues in AI-driven image analysis:
    ◦ Data Privacy & Security: Discuss the risks linked to handling sensitive health data and the shortcomings of existing regulations.
    ◦ Algorithmic Bias: Recognize the sources of bias in AI models and explain their potential effects on clinical diagnoses and preclinical research outcomes.
    ◦ Transparency & Explainability: Clarify the “black box” problem and emphasize the importance of transparency for trust, accountability, and error validation.
  • Analyze the Impact of AI on Practice: Examine the benefits and risks of integrating AI into diagnostic and research workflows, including over-reliance issues and challenges in establishing accountability for AI errors.
  • Navigate the Regulatory Landscape: Summarize key principles from emerging national and international guidelines (e.g., FDA, EU, WHO) relevant to AI in healthcare and research, including animal studies.
  • Evaluate Ethical Dilemmas through Case Studies: Review real-world examples involving deepfakes, biased facial recognition, and flawed healthcare algorithms to demonstrate how ethical principles can be upheld or violated.
  • Apply Ethical Best Practices: Suggest technical, governance, and practical strategies to reduce ethical risks of AI, such as using representative data, Explainable AI, and maintaining a “human-in-the-loop” approach.
  • Formulate an Ethical Framework: Synthesize the lecture’s ideas to guide the responsible development and application of AI in preclinical and clinical imaging.

Can Ms. Adriana Smith Be Harmed?

Ryan Lemasters

This paper offers an ethical analysis of the case involving Ms. Adriana Smith, which attracted significant media attention beginning in early May of 2025 due, in part, to the controversial decisions made by Emory University Hospital (Atlanta, GA). Although media coverage has been extensive, the case has received little scholarly examination, except for Lewis, Quinn, and Mutcherson (forthcoming). This paper therefore highlights some of the key ethical issues and points of disagreement surrounding the case of Ms. Smith. Section 1 offers a brief overview of the case. Section 2 introduces the key moral principles that should be considered in evaluating it. Section 3 examines whether Ms. Smith can be harmed, which we argue is an important point that has been missing in coverage of the case. Section 4 develops new questions and outlines possible avenues for advancing the analysis of the ethical dimensions of the case. While a full ethical analysis of Ms. Smith’s case is beyond the scope of this paper, the goal is to lay the groundwork for critical reflection on some of the most significant ethical issues and points of disagreement involved.

The Ethics of Hope in Pediatric Care: Supporting Families Without False Reassurance

Natalie Peters, MAPS, BCC

Hope is a central coping mechanism for families navigating pediatric illness, yet it poses ethical challenges when prognosis is poor or uncertain. In pediatric settings, clinicians often struggle to balance honesty with compassion, fearing that frank communication may extinguish hope or damage trust. Conversely, overly optimistic framing or avoidance of difficult truths can lead to false reassurance, moral distress among clinicians, and erosion of trust when outcomes do not align with expectations.

Chaplains frequently engage families at moments when hope is explicitly named, questioned, or redefined. Their unique role allows them to identify how families understand hope, what it protects, and how it influences decision-making. This presentation examines hope not as a binary concept (present or absent), but as an ethically complex, evolving process that requires careful interdisciplinary support.

Implications of an AI Risk-Stratification Tool in Liver-Transplant Selection for Alcohol-Related Liver Disease

Bharat Rai, MSc

Background: Although alcohol-related liver disease (ALD) is a leading indication for liver transplant (LT), evaluation of post-LT alcohol relapse risk still relies on subjective assessment methods. To date, no reliable, objective clinical criteria have been adopted in widespread clinical practice. We sought to evaluate whether an objective machine-learning (ML) relapse-risk model (“Ethos”) could optimize clinical decision-making and better inform selection criteria.
Methods: We retrospectively analyzed 1,565 adults evaluated for ALD-related LT at three major Mayo Clinic centers between 2000 and 2024. After exclusion of 1,010 patients who did not undergo transplant, the remaining 555 patients who received a liver transplant were categorized according to the LT Selection Committee decision (Directly Approved [DA, n = 217] or Deferred-then-Approved [DtA, n = 338]). Heavy relapse following transplant, defined as documented alcohol use, was identified through chart review. Ethos was trained on demographic, clinical, behavioral, and psychosocial variables available at first evaluation and internally validated (70% sensitivity, 60% specificity; area under the curve 0.65). A counterfactual analysis estimated its prospective impact on committee decisions, wait times, and costs.
Results: DtA candidates waited significantly longer for a final LT selection decision than those in the DA cohort (6.3 vs. 3.2 months, p < 0.001). The rate of heavy relapse after LT did not differ significantly between the two cohorts (DA 7.9% vs. DtA 4.3%, p = 0.08), nor did post-LT mortality (DA 6.5% vs. DtA 4.4%, p = 0.32). Inflation-adjusted (2025) monthly costs averaged $19,614 in the pre-transplant period and $17,090 post-transplant. After internal validation, Ethos would have identified 45 (60%) of the AUD-deferred patients as low risk and flagged 12 of the 17 DA patients who later relapsed, enabling targeted intervention.
Conclusion: Liver transplant selection criteria remain dependent on subjective variables. Without stringent objective criteria, patients deferred based solely on alcohol-use concerns experienced significantly longer wait times without a significant decrease in the incidence of heavy relapse, thereby considerably increasing associated healthcare expenditure. Using an ML risk-stratification model to aid clinical decision-making during LT selection may enable early and accurate identification of patients at risk for post-transplant alcohol relapse, streamlining healthcare delivery and improving healthcare resource allocation.
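The cost implication of deferral can be illustrated with a back-of-the-envelope calculation from the summary figures reported in this abstract. This is a rough sketch only: the study's counterfactual analysis used patient-level data, and the variable names below are illustrative.

```python
# Illustrative estimate of excess pre-transplant expenditure attributable
# to deferral, using only the cohort-level summary figures above.

MONTHLY_PRE_LT_COST = 19_614   # 2025 inflation-adjusted pre-transplant cost ($/month)
DA_WAIT_MONTHS = 3.2           # Directly Approved: mean wait to final decision
DTA_WAIT_MONTHS = 6.3          # Deferred-then-Approved: mean wait to final decision
DTA_N = 338                    # Deferred-then-Approved cohort size

# Extra pre-transplant months attributable to deferral, per patient
extra_months = DTA_WAIT_MONTHS - DA_WAIT_MONTHS

# Approximate excess pre-transplant expenditure, per patient and cohort-wide
excess_per_patient = extra_months * MONTHLY_PRE_LT_COST
excess_cohort = excess_per_patient * DTA_N

print(f"Extra wait per DtA patient: {extra_months:.1f} months")
print(f"Excess pre-LT cost per patient: ${excess_per_patient:,.0f}")
print(f"Approximate cohort-level excess: ${excess_cohort:,.0f}")
```

At the reported averages, each deferred patient accrues roughly three additional months of pre-transplant costs, which compounds across the 338-patient DtA cohort.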

Ethics of Polygenic Embryo Screening: Procreative Autonomy, Beneficence, and Choosing for Future Children

Sophia Lindekugel, MD

Despite limited clinical validation and concerns regarding interpretation, generalizability, and equity, polygenic embryo screening (PES) has entered the clinical marketplace. PES is a novel reproductive genetic technology available to patients undergoing in vitro fertilization (IVF). PES uses polygenic risk scores derived from whole-genome sequencing to estimate the risk that an embryo will develop certain diseases or traits later in life. Risk estimates compare an embryo’s relative risk to the absolute population risk, and companies provide an embryo ranking generated by proprietary algorithms. Significant ethical and clinical questions related to the use of PES remain unexplored. This qualitative study examines clinician and patient perspectives on PES to better understand how this technology is interpreted and communicated in reproductive care. Semi-structured interviews were conducted with reproductive endocrinologists (27 interviews) and patients (26 interviews) who underwent IVF. Interviews were audio-recorded, transcribed verbatim, and analyzed using team-based qualitative methods. Thematic analysis highlights issues related to procreative autonomy, procreative beneficence, the child’s open future, and openness to the unbidden. Ethical tensions emerge related to perceived reproductive responsibility, decision-making under conditions of uncertainty, and impact on the child’s future health. Patients also express conflict over risk interpretation and demonstrate misunderstanding of how PES differs from established forms of preimplantation genetic testing, conflating selection of screened embryos with improved pregnancy rates. These findings underscore the need for empirically informed ethical guidance and counseling frameworks to support clinicians and patients as PES continues to evolve.
Understanding stakeholder perspectives is essential to informing responsible clinical practice, patient counseling, and future professional recommendations regarding polygenic embryo screening.

Consent Fatigue in Modern Medicine: Rethinking Voluntariness in the Context of Information Overload

Mnotho Ngcobo, PhD, LLM, LLM

Mnotho Thamsanqa Ngcobo is an Assistant Professor of Law at the University of Louisville’s Louis D. Brandeis School of Law, where he teaches Administrative Law and Public Health Law. His research focuses on health law, bioethics, and the legal and ethical dimensions of informed consent within complex clinical environments. He examines how modern medical practices and institutional structures shape patient autonomy, vulnerability, and the quality of decision-making in clinical care. His work draws on legal doctrine, ethical theory, and practical challenges in contemporary medicine to explore how patients and clinicians navigate increasingly demanding forms of information disclosure.

Professor Ngcobo has presented nationally and internationally on topics related to medical ethics, patient rights, artificial intelligence in healthcare, and regulatory design. He is completing his Doctor of Laws at the University of South Africa, where his dissertation analyzes the evolving role of consent in settings marked by uncertainty, complexity, and rapid innovation. His broader research agenda engages questions of justice, dignity, and responsibility in healthcare systems, and seeks to identify practical reforms that protect patients while supporting ethical clinical practice. In both his teaching and scholarship, he aims to bridge theoretical insight with the realities faced by clinicians and patients in everyday care.

PEMPal Demo: EHR-Embedded Support Tool Can Boost Pediatric Readiness in Our Community Hospitals

Alana Arnold, MD, MBA


In this live demo, we will walk through two common, high-stakes pediatric presentations (e.g., respiratory distress/asthma and fever/sepsis risk) to show how PEMPal supports: (1) evidence-based pathway selection and dosing guidance, (2) consistent documentation and escalation triggers, and (3) operationally pragmatic next-step recommendations aligned to local capabilities (staffing, equipment, transfer thresholds). Because EMI emphasizes ethics in innovation, we will also highlight PEMPal’s approach to transparency and accountability—what the tool recommends, why it recommends it, what data elements are used, and how human oversight is preserved—along with equity considerations to reduce avoidable variation that can disproportionately impact underserved children. The session will conclude with an implementation and evaluation blueprint for administrators and innovators: how to deploy with minimal workflow disruption; which outcomes to track (transfer appropriateness, return visits, time-to-treatment, variation reduction, family experience); and how to align governance, auditing, and clinician training so decision support remains trustworthy, ethically sound, and adoption-ready. The intended impact is to equip clinicians, health system leaders, and ethicists with a practical model for deploying pediatric decision support that improves safety and throughput while strengthening ethical guardrails for real-world clinical AI and digital tools.

Demystifying the Publication Process

Alexander Hutchison, PhD

Have you ever experienced the frustration of having your paper desk-rejected a few hours after submitting it to a journal? Have you ever wondered what editors look for when deciding whether a submitted manuscript will make it to review? Here is your chance to learn from an Editor-in-Chief how to avoid the key pitfalls when preparing your manuscript for publication. Dr. Alexander Hutchison, the EiC of Current Protocols, will be giving a short talk, Demystifying the Publication Process. Drawing on his extensive experience as an editor and reviewer for several journals over a 20-year career in academic publishing, Dr. Hutchison will explain what to do (and not do) to give your research the best chance of being published in the highest-quality journals possible. This will be a brief 15-minute presentation followed by a Q&A session during which you can ask him your questions. This presentation is beneficial for all research scientists, particularly those relatively new to the industry, e.g., graduate students, post-docs, and junior faculty.

The Limits of Biomedical Approaches to Healthy Ageing

Xiang Yu, PhD, HEC-C

In recent years, biomedical interventions have been explored to promote healthy ageing. These interventions are motivated by the idea that ageing is undesirable and therefore ought to be minimized. However, the reason why ageing is considered undesirable is not obvious. In this presentation, we challenge the biological vs. chronological framework that has been used to explain why ageing is undesirable. We also argue that biomedical approaches to healthy ageing are limited because they fail to recognize aspects of ageing that are not rooted in a person’s biology.

It has been suggested that the explanation for why ageing is undesirable lies in the biological, rather than the chronological, dimension of ageing (Garcia-Barranquero et al. 2024). Biological ageing refers to the molecular and cellular damage that accumulates in the body over time. Chronological ageing refers to the mere passage of time. The idea is that biological ageing is what makes ageing undesirable due to the negative effects of physical and cognitive deterioration, while chronological ageing brings valuable goods such as experience, knowledge, and wisdom.

We think that this dichotomy does not help explain the undesirability of ageing, because biological ageing can be desirable if it happens at a developmental stage and chronological ageing can be undesirable if it brings psychological bads such as regrets, loneliness, and fear of death, and social bads such as ageism. This mistake may result from a failure to recognize that biological ageing starts at the time of birth and that chronological ageing ends at the time of death.

The problems that the biological vs. chronological framework faces point to a limitation of biomedical approaches to healthy ageing: they fail to recognize aspects of ageing that are not rooted in a person’s biology. Specifically, they reduce the human journey to a technical problem and fail to address the psychological and social issues that a person faces in old age.

Innovative Communication Solutions For Deaf And Hard Of Hearing

In the United States, there are over 52 million D(d)eaf or Hard of Hearing individuals who do the same things that hearing individuals do, including seeing their doctor. A lack of accessible communication solutions potentially impacts the quality of patient care, resulting in frustration, miscommunication, and poor patient experiences. At Hi There Solutions, we have developed award-winning communication accessibility mobile apps for the D(d)eaf or Hard of Hearing, so they never miss a word.

Our solutions were designed by the Deaf community, for the Deaf community.

After following the patient journey through a typical healthcare system, from the registration desk through discharge, we designed our Just Talk! mobile app specifically for face-to-face, live, two-way interactions between a D(d)eaf or Hard of Hearing individual and a hearing individual, fulfilling the need for a real-time communication solution with a high degree of accuracy.

Encrypted end-to-end, Just Talk! is a two-way speech-to-text, text-to-speech chat board with instant messaging and select animated American Sign Language emojis; and it is available on both smartphones and smart tablets in ten languages.

Using Wi-Fi geofencing, Hi There Solutions implements Just Talk! within a healthcare facility in two ways. The first is direct integration into the healthcare system’s website. The second is integration into the healthcare system’s mobile app. Users of Just Talk! are Wi-Fi geofenced within the healthcare system’s physical location. Wi-Fi geofencing allows the healthcare team and patient to access Just Talk! only when their device is connected to the healthcare system’s Wi-Fi.

A second mobile app solution available from Hi There Solutions is Hi There!!!, a smartphone video chat with real-time captioning that distinguishes between speakers and includes speech-to-text, instant messaging, and select animated American Sign Language emojis. Our captions are transcribed in ten languages. This solution is ideal for when a D(d)eaf or Hard of Hearing individual needs to virtually include a family member or friend in the conversation with the healthcare provider, much as a hearing person might do under similar circumstances.

Hi There Solutions provides onsite installation and 24x7x365 onsite support with a 4-hour service level agreement in the U.S.

In summary, Just Talk! and Hi There!!! bridge the communication gap between the provider and the D(d)eaf or Hard of Hearing patient, helping to ensure excellence in healthcare.

A Proposed Population-Based Justice Model for Ethical Gene Transfer Expanded Access Policies

Ben Slabaugh, MA

Investigative clinical programs using gene-altering therapies to treat diseases with unmet need are essential to the establishment of new treatment pathways and potential cures. The gene-therapy modality is uniquely risky, and the window of opportunity for those affected by rare and ultra-rare diseases is narrow. Individuals with life threatening or terminal genetic conditions may seek to use the FDA expanded access pathway, also known as “compassionate use”, to gain access to unapproved experimental products and processes outside of a clinical trial.

The Belmont principle of justice can be expanded beyond equitable enrollment and representation to also incorporate distribution and access at the population level, correcting for genetic disadvantages across patient populations. Arguments for expanded access from autonomy offer strong but insufficient grounds to introduce additional risk to a promising clinical development pathway that could result in treatment ultimately being closed to others. Genetic disability is a species of inequality and thus offers justification for the use of resources to pursue gene therapies at the population level. A population-based interpretation of distributive justice is an optimal vehicle to inform the development of ethically sound expanded access policies for investigational gene therapies and offers sufficient output for meaningful decision-support.

Any individual grant of expanded access, while it may benefit that patient in dire medical circumstances, is ultimately a roll of the dice. Early, transparent, and good-faith engagement with patient advocacy organizations from a procedural justice standpoint can prevent risks to the development pathway that would undermine the goals of those stakeholders to advance the standard of care. The broadest interpretation of equality of opportunity under distributive justice requires that manufacturer policies governing expanded access should increase the range of opportunities for future populations to receive a treatment they otherwise would not.

How do we responsibly engage with AI Agents in Medicine?

Kristin Kostick-Quenet, PhD, MA, MFA

As artificial intelligence (AI) becomes increasingly embedded in clinical research and workflows, ethicists caution against full automation and highlight the need to keep clinicians – and humans in general – “in the loop”. However, major shifts in AI development (outside of healthcare) towards “agentic” AI are raising urgent questions about how to reconcile the promises of closed-loop AI systems with consensus views about the importance of human oversight. Consistent with definitions of AI that emphasize a system’s capacity to perceive and engage with its environment, closed-loop systems in healthcare are already becoming more agentic by using computer perception (e.g., computer vision, “ambient” intelligence and other “ethological” approaches) to inform AI inferences. For example, new approaches in deep brain stimulation integrate environmental and behavioral data into AI inferences that guide automatic stimulation. Mobile health applications are likewise collecting environmental and behavioral data to inform AI-based recommender systems that offer personalized health advice. Automated drug delivery systems (e.g., for insulin; anesthesia) are similarly poised to integrate biobehavioral and environmental data without direct human oversight. The rapid advancement and utility of these agentic tools may soon cause AI ethicists to reconsider the widespread consensus around “human in the loop” approaches. Drawing insights from other high-stakes scenarios characterized by human dependence on technologies under conditions of extreme uncertainty (e.g., maritime navigation; spaceflight; nuclear energy management), this presentation explores ethical rationales that may help to reconcile the competing needs for human control and technological utility in the coming era of AI agency.

The Lived Experience of Pediatric Pain Management: Insights from Families and Providers

Annesha Dey

Pain can be defined as a subjective experience that encompasses the emotional and psychological implications of physical discomfort. The subjective nature of pain perception gives rise to ambiguity in our understanding of pain. Given the variability across individual patients’ experiences, personalized pain management is imperative. Pediatric pain management presents unique challenges in assessing pain, due in part to developmental barriers; young children are limited in their communicative ability to use vocabulary and in their cognitive ability to articulate their pain accurately. Thus, the distinctive challenges in pediatric pain management create a vital role for parents and caregivers in recognizing and advocating for their child’s pain management. Given the challenges of assessment and the limitations of a singular approach, healthcare providers aim to understand a child’s pain through multiple modalities; yet the efficacy of parental pain management collaborations ultimately depends on the strength of the provider-family relationship. In this study, we sought to explore whether families’ perceptions of their child’s pain and preferences for treatment were aligned with healthcare providers’ pain management decisions at Texas Children’s Hospital. Through semi-structured narrative interviews with families and healthcare providers in pediatric care settings, we analyzed areas of concordance and discordance in pain assessment and treatment priorities. More specifically, we conducted 62 interviews with patient families and healthcare teams: 31 with parents (including 6 non-English-speaking families) and 31 with providers (2 physicians, 29 nurses). From these interviews, we sought to identify whether parents viewed their involvement as an additional burden in the already distressing hospital setting.
Preliminary findings identified trust as a crucial component of familial satisfaction with medical decisions, built over time through (1) consistent communication and (2) individualized care for their child’s unique needs (e.g., consolation preferences). For instance, even though some parents expressed that they did not agree with the amount of medication their child received, those who had developed strong trust were nonetheless able to understand the necessity and risks of untreated pain, specifically expressing that they “tried to put trust” in the medical team’s decisions. This task was often easier after witnessing consistent communication and appreciation for parental input and concerns. By contrast, it was more stressful after trust had previously been broken (e.g., medication administration without notifying parents, adverse responses to overmedication); these families reported added stress surrounding their child’s care. These findings reflect the delicate interplay of competing demands in the pain management process, where efforts to balance the needs of all parties may still leave some dissatisfaction. The presentation will share strategies for balancing these demands that both parents and caregivers described as successful in the interviews. Future research should intentionally examine how to foster trust not only among families but also among providers, potentially by incorporating questions about trust into provider interviews. Additionally, this study could be expanded to other institutions and might make use of audio recordings for a more objective collection of data. Ultimately, these efforts aim to equalize power dynamics in the family-caregiver relationship by emphasizing the value of both parental understanding and medical knowledge in pediatric pain management.

Ethical Implications of Using AI-generated Images for Social Advocacy of Visible Diseases – An Ethical Case Analysis

Alejandro Vera, MSc

AI programs have become increasingly popular for generating images, highlighting the importance of evaluating photo-based methods as a form of community engagement. Ethical tension may begin to arise when AI-generated images are used for various purposes, such as medical, commercial, and social. This ethical case analysis will focus on how AI has come to be employed for social advocacy purposes. Organizations such as Vitiligo Voices Canada have integrated AI-generated images into their advocacy work on social media (images provided below in Figure 1). Vitiligo is an autoimmune condition in which the skin loses pigmentation due to melanocyte destruction. In addition to progressive depigmentation, the condition has a variety of physiological, mental and emotional, and social impacts on an individual and their well-being. The use of AI-generated images can have both positive and negative impacts, simultaneously highlighting and hiding the lived experiences of many with visible conditions like Vitiligo.

There is a tension in using AI to generate images of visible conditions: it takes something that is uncontrollable for the individual and makes it calculated. AI use may produce biased images without clear and intentional coding. For example, the photos in Figure 1 fail to represent a range of skin tones. Potential harm may also be introduced when false representations of lived experiences are produced and disseminated. On the other hand, using AI to generate representations of conditions may give individuals the opportunity to participate in advocacy when they are not comfortable using or displaying their own bodies. We argue that the use of AI-generated images related to visible diseases, such as Vitiligo, should be further evaluated. Additionally, coding criteria for the inevitable use of AI-generated images need to be developed to ensure unbiased and appropriate representations of visible skin conditions and the impacts they have on individuals.

The Ethics of Intelligibility in Innovation-Mediated Care

Hannah Lee, MSc

As innovation-mediated care becomes a primary interface between patients and health systems, ethical commitments are increasingly enacted through design choices rather than clinician discretion. Innovation-mediated care is understood here as healthcare delivery in which digital platforms, algorithmic triage, AI-supported care coordination, or standardized innovation-driven pathways substantially shape how patients engage with healthcare systems. Because these systems rely fundamentally on standardized modes of communication and interpretation, cultural and linguistic heterogeneity becomes a particularly revealing lens for examining how ethical values are embedded in system design.
This investigation asks what ethical values and design assumptions shape contemporary medical innovation, and how these interact with culturally and linguistically heterogeneous patient populations in pluralist healthcare systems. It draws on qualitative analysis of medical innovation strategies, policy documents, and ethical guidance governing digital health, algorithmic decision-support, and care coordination in the United Kingdom and the United States. Read through medical anthropology and clinical ethics, these materials are examined for how assumptions about communication, understanding, and patient agency are built into systems that increasingly mediate clinical encounters.
The UK–U.S. comparison shows how these assumptions are enacted through different institutional arrangements. In the United Kingdom, national digital infrastructures such as the NHS App function as primary access points, while cultural and linguistic diversity is addressed through translation policies that operate outside core system design. In the United States, innovation is routed through market-based digital tools and regulatory frameworks that assume high levels of patient navigation, language proficiency, and individual responsibility. Across both settings, these approaches preserve formal access while shifting the work of interpretation and system navigation onto patients themselves.
This investigation distinguishes transparency, or making systems visible, from intelligibility, understood as a system’s ability to recognize and respond to patients as socially situated persons. Drawing on Judith Butler’s account of intelligibility, the analysis shows how transparency-centered approaches can leave patients legible to systems without being meaningfully understood by them, particularly when visibility is tied to data extraction rather than responsive care. Cultural and linguistic heterogeneity reveals the limits of inclusion strategies that fit diverse patients into preexisting pathways. The investigation ultimately argues for a pluralist orientation to medical innovation, one that treats patient understanding, contextual responsiveness, and shared authority in shaping care pathways as core design considerations rather than downstream accommodations.

Poster Presentations

Navigating Family Life in the Hospital With Heart Failure on an Implantable Ventricular Assist Device

Annesha Dey

Ventricular assist devices (VADs) are used to support cardiac function and circulation in children with severe heart failure who are awaiting heart transplantation. These devices consist of surgically implanted cannulas connected to an external pump and a mobile controller with a short battery life that requires frequent connection to wall power (paracorporeal VAD). While VADs provide essential clinical stability and improve survival rates for these patients, they also necessitate prolonged hospitalization and introduce significant physical and emotional challenges for families. This project explores the lived experiences of families of pediatric patients with advanced heart failure requiring VAD support, documenting their narratives of adaptation to the physical, social, and emotional aspects of their child’s life with a paracorporeal VAD. Using a qualitative phenomenological design, this study aims to capture the subjective experiences of family members following their child’s VAD implantation.

In-depth, semi-structured interviews will be conducted with family members of VAD patients to better understand both patient and family experiences related to life with a VAD. Participants will be recruited from the Cardiac Intensive Care Unit (CICU) and Cardiac Progressive Care Unit (CPCU) at Texas Children’s Hospital through direct referral by medical staff. They will be approached by the researcher or a trained research assistant fluent in the participant’s language, or through an interpreter when necessary. Each participant will complete one in-depth interview lasting approximately 30–45 minutes.

Data will include demographic and contextual variables such as the participant’s relationship to the patient, as well as reflections on their experience of their child’s hospitalization with a VAD. Interview questions will focus on key themes including fear of complications, impacts on relationships, social and financial challenges of living in the hospital while awaiting transplant, and family emotional well-being. Participants will be asked open-ended questions such as: “Tell me about the challenges you and your family face while in the hospital with your child after VAD implantation,” “What positive aspects have you experienced since your child’s VAD implantation?” and “What sources of support do you and your family rely on while in the hospital?” Interview questions may be refined throughout the study based on group discussion and thematic analysis. Given the unpredictability of the hospital environment, particularly within the intensive care unit, multiple interview sessions may occasionally be required to complete data collection. All interviews will be audio-recorded with participants’ consent and transcribed with identifying information removed. Coding and thematic analysis will occur concurrently with data collection, conducted collaboratively by the project team, which includes pediatric ICU physicians and the student researcher.

Data collection is currently ongoing, and analysis is expected to be completed by January 2026. Findings will be presented at the conference upon completion of analysis.

Geographic Disparities in Access to Genome Sequencing for Children

Chad Moretz, ScD

The American Academy of Pediatrics recently recommended genome sequencing (GS) as a first-tier test for children with unexplained developmental delay or intellectual disability. While this guidance allows for faster diagnoses and improved care, access remains uneven.

To explore how geography and other factors might impact access to GS, we conducted a multi-state, retrospective cohort analysis of patients who recently underwent GS at our clinical diagnostic laboratory. Pediatric patients from two large states with moderate population densities, Texas (TX) and Michigan (MI), were compared with patients from two smaller states with high population densities, New Jersey (NJ) and Massachusetts (MA). Patient demographic and institutional information were reviewed. The distance between each patient’s home zip code and their ordering institution’s zip code was calculated. Rural-Urban Commuting Area codes were used to establish the geographic area of each zip code. State rural population information was obtained from 2023 public census data.
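The patient-to-institution distance described above can be approximated with a great-circle (haversine) computation, under the assumption that each zip code is reduced to a centroid coordinate; the coordinates below are illustrative examples, not the study’s data:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * asin(sqrt(a))  # Earth's mean radius ~3958.8 mi

# Hypothetical centroids: a West Texas patient zip vs. a Houston institution zip
patient = (31.97, -102.08)      # assumed centroid, Midland area
institution = (29.71, -95.40)   # assumed centroid, Texas Medical Center area
dist = haversine_miles(*patient, *institution)  # roughly 425 straight-line miles
```

Straight-line centroid distance understates true travel burden, which is one reason the rural-area comparisons matter.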

This analysis included 499 patients with a home address in MI, 473 patients in TX, and 96 patients in NJ/MA. The mean distance to the institution ordering testing was 103.6 miles (median = 32.4, max = 682.3) for patients in TX, 45.2 miles (median = 29.0, max = 413.0) for patients in MI, and 26.3 miles (median = 23.5, max = 79.1) for patients in NJ/MA. All ordering institutions among the four states reviewed were located in urban areas. The mean age at testing for patients in MI, TX, and NJ/MA was similar (6.5 years, 6.4 years, and 6.9 years, respectively). Ninety-six (19.2%) patients in MI, 59 (12.5%) patients in TX, and four (4.2%) patients in NJ/MA lived in rural areas. These proportions were less than each respective state’s rural population, with 27.1% of MI, 17.0% of TX, 8.9% of MA, and 6.3% of NJ populations living in rural areas. The mean and median distances between rural patients and their institutions were similar for those in TX (mean = 118.6; median = 86.6) and MI (mean = 107.2; median = 78.9), and were three-fold and two-fold greater, respectively, compared to NJ/MA (mean = 38.0; median = 37.9).

Geographic location influences access to GS and amplifies health and financial inequities. Patients in TX reside farther from the institution ordering testing than patients in MI, as well as those in more densely populated states such as NJ and MA. Patients living in rural areas may experience greater logistical barriers to care, especially in large states such as TX where public transit is limited outside of major cities. Additionally, the proportion of patients residing in rural areas is lower than the proportion of each state’s population that is rural, suggesting that rural patients who receive GS may be underrepresented. Additional studies of genetic testing access would help to better characterize these needs.

A Deontological Appraisal of Medical Student Involvement in the Care of Incarcerated Patients

Vikas Burugu and Mariam Mansuri

Correctional systems are increasingly partnering with academic medical centers to provide healthcare to individuals who are incarcerated. While these patients retain the right to informed consent, there are clear limitations posed by structural barriers such as delayed access to diagnosis and treatment, absence of family for care advocacy, low literacy rates, fragmented care, and limited confidentiality. Care logistics are codified in advance, limiting patient involvement. During a clinical experience activity, we witnessed a patient restrained to his hospital bed by handcuffs who tearfully pleaded for an adjustment to his insulin dose. His worsening hyperglycemia revealed the vulnerability that arises when a patient’s needs exceed the limits of one’s role as a learner, exposing the fragile balance between the pursuit of learning and the preservation of a vulnerable patient’s dignity.

Geospatial and Ethical Dimensions of Dermatologic Specialty Care Across the Hawaiian Islands

Rachael Johnson

Background / Purpose: Equitable access to specialty care is an ethical imperative in medicine. In Hawaiʻi, dermatologic services are concentrated on Oʻahu, with limited availability among rural and neighbor island communities. Geographic isolation is compounded by socioeconomic barriers – such as lower income, limited insurance coverage, and fewer specialist clinics – that disproportionately affect marginalized groups, particularly Native Hawaiians and Pacific Islanders (NHPIs). These limitations can delay diagnosis and treatment, further exacerbating existing health inequities. Yet, no recent, systematic data exist to quantify disparities in access to dermatology services across the Hawaiian Islands. This study addresses that gap, providing an empirical foundation for ethically informed policy and workforce interventions to improve skin health in Hawaiʻi, with implications for rural and Indigenous health equity nationwide.

Methods: Providers were identified using the American Academy of Dermatology’s “Find a Dermatologist” tool, and active practice was confirmed through the Hawaiʻi state medical licensure database. Primary practice locations were verified using the National Provider Identifier (NPI) registry, with discrepancies resolved by reviewing training and employment history to flag outdated records. Dermatologist density was calculated for each island using 2020 United States census estimates. This multi-step process reflects best practices for workforce identification and geospatial studies, minimizing misclassification and ensuring that only actively practicing, board-certified dermatologists with current Hawaiʻi licensure and accurate practice location data were included in the final analysis.
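The per-island density figure reduces to a count over census population, scaled to 100,000. As a sketch, the island populations below are assumed approximations of 2020 census values, and the dermatologist counts follow the distribution reported in the results:

```python
# Sketch of the density-per-100,000 calculation. Island populations are
# assumed approximations of 2020 census figures; dermatologist counts
# follow the island distribution reported in the results.
census_2020 = {
    "Oahu": 1_016_508,
    "Big Island": 200_629,
    "Maui": 164_221,
    "Kauai": 73_298,
}
dermatologists = {"Oahu": 43, "Big Island": 5, "Maui": 7, "Kauai": 1}

density = {
    island: round(count / census_2020[island] * 100_000, 2)
    for island, count in dermatologists.items()
}
# density, e.g., maps "Kauai" to 1.36 under these assumed populations
```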

Results: A total of fifty-six dermatologists were identified as actively practicing in Hawaiʻi, with the majority located on Oʻahu (76.8%), followed by Maui (12.5%), the Big Island (8.9%), and Kauaʻi (1.8%). No dermatologists had primary practice locations on Molokaʻi or Lānaʻi. Most dermatologists (75%) had not completed fellowship training; 17.9% were fellowship-trained in micrographic dermatologic surgery, 5.4% in dermatopathology, and 1.8% in pediatric dermatology (Figure 2B). Of the ten Mohs surgeons, nine were based on Oʻahu and one on the Big Island. The sole fellowship-trained pediatric dermatologist practiced on Oʻahu. Two dermatology-trained dermatopathologists were located on Oʻahu and one on Maui. Dermatologist density per 100,000 individuals was approximately 2.49 on the Big Island, 4.26 on Maui, 4.23 on Oʻahu, and 1.36 on Kauaʻi.

Conclusion / Implications: Analysis of dermatologist density in Hawaiʻi reveals significant inter-island disparities, with dermatologist-to-population ratios for the Big Island and Kauaʻi falling below the national average of 3.65. The absence of dermatologists on Molokaʻi and Lānaʻi further highlights critical gaps in care. These findings reflect longstanding inequities in healthcare access for NHPIs, whose health outcomes are shaped by histories of land dispossession, cultural marginalization, and underinvestment in rural infrastructure. Addressing these systemic issues requires more than a redistribution of providers – it demands ethically grounded innovation in care delivery, including culturally responsive teledermatology, community-based workforce development, and policy frameworks that prioritize Indigenous and rural health equity. By integrating geospatial mapping with ethical reflection, this study advocates for a justice-oriented approach to healthcare planning – one that recognizes access to specialty care not as a privilege, but as a moral obligation within an equitable healthcare system.

Compassion And Its Role In Patient Care, Physician Burnout, And Physician Well-Being: A Qualitative Analysis

Emily Zoorob

Background:
Dr. Stewart Gabel was among the first physicians to posit a precursor to physician burnout: demoralization. Demoralization is a “feeling state of dejection, hopelessness, and a sense of personal ‘incompetence’ that may be tied to a loss of or threat to one’s own goals or values.” Gabel argues that systems that allow physicians to act within their own value systems are the key to mitigating demoralization. Another key dimension for mitigating demoralization, and ultimately burnout, is resilience, defined as the “ability to thrive in the face of adversity.” Higher resilience scores have been significantly associated with lower levels of burnout and higher levels of empathy among physicians, and mindfulness practices have been shown to improve healthcare worker resilience and compassion satisfaction. Finally, in addition to resilience and mindfulness, compassion itself is an indispensable tool for preventing physician burnout, with continuing education in self-compassion linked to higher compassion satisfaction, job satisfaction, and lower levels of burnout.
Methods:
Data came from two distinct streams; both participants were physicians (MDs) selected via convenience sampling. One dataset consisted of a physician’s journal entries through the first year of residency, while the second was derived from a brief, semi-structured interview conducted via Zoom. This exploratory study used a grounded-theory approach to qualitative analysis, as themes and the resulting framework emerged directly from the data. Thematic analysis was performed in accordance with Naem et al., 2023. Salient quotations were selected and assigned keywords, which were then consolidated into codes. Themes and final concepts were developed by the research team in response to frequently appearing keywords and codes. This study was conducted in accordance with Institutional Review Board approval and the ethical principles outlined in the Belmont Report, the Declaration of Helsinki, and Good Clinical Practice guidelines.
Results:
Frequently appearing keywords included holistic care, advocacy, compassion, renewal, coping, humanity, purpose, gratitude, and reciprocal healing. Negative keywords included demoralization, sacrifice, grief, diminished agency, and abandonment. The most frequent codes reflected physicians’ perceived power to heal, power to harm, reflections on practice, mindful culture and compassion, and the healthcare system itself.
Conclusion:
Drawing from Timothy Gallwey’s performance equation, Performance = Potential – Interference, compassion emerged as the central linking factor influencing physician well-being and patient care. Among the most frequently appearing keywords, holistic (n = 15), advocacy (n = 9), and compassion (n = 8) were prominent, alongside renewal (n = 7), coping (n = 6), humanity (n = 6), and purpose (n = 6), reflecting a strong orientation toward engaged, humanistic practice. In contrast, negative keywords such as demoralization, sacrifice, and grief appeared less frequently (n = 2 each) but represented meaningful sources of interference. The most frequent codes were power to harm (30.00%) and reflections (30.00%), followed by power to heal (21.05%), mindful culture and compassion (16.67%), and the healthcare system (15.31%). Holistic care, mindfulness, advocacy, renewal, purpose, and teamwork supported reciprocal healing of both patient and physician, while diminished agency, ego-driven practice, and systems that must be played undermined physicians’ ability to sustain compassionate care.

Algorithmic Diagnostics And Clinical Harm: A Neuro-Bioethical Analysis Of AI-Induced Iatrogenesis In Pediatric And Adolescent Care

Bita Makarachi

AI-based diagnostic systems are being rapidly integrated into pediatric and adolescent healthcare. However, current ethical and safety frameworks assess harm using outcome-based, largely adult-oriented models, which do not sufficiently examine the developmental vulnerability of children undergoing predictive diagnostics. This work addresses a critical gap by analyzing how algorithmic diagnosis can cause distinct kinds of clinical harm during periods of neurodevelopmental plasticity. We performed a neuro-bioethical conceptual review that incorporated literature from pediatric ethics, developmental neuroscience, clinical decision-making, and AI governance. Using pediatric diagnostic AI as the area of analysis, we examined the role of predictive systems as epistemic authorities in clinician-parent-child relationships. We identify AI-induced developmental iatrogenesis: harm in the form of anticipatory diagnostic labeling, epistemic displacement of clinical judgment, altered caregiving behavior, and constrained developmental trajectories. These harms are cumulative, relational, and, most importantly, invisible to current regulatory and patient safety frameworks. We propose the Neuro-Algorithmic Developmental Nexus (NADN) to explain how diagnostic algorithms simultaneously shape clinical authority, behavioral feedback, and children’s self-understanding. Children lack the cognitive and social capacity to challenge and redefine algorithmic prognoses, which exacerbates both ethical risk and irreversibility. We propose a developmentally rooted paradigm of AI ethics that includes irreversibility thresholds, algorithmic humility, and time-based harm assessment to protect children in AI-mediated healthcare.

Assessment Of Community Acceptance Of AI And Other Technology In Healthcare

Aliza Raza


OBJECTIVE:
This study aims to fill the gap in knowledge about community member acceptability of AI and other technology by assessing attitudes, concerns, and acceptance levels toward AI, wearable devices, and health-tracking applications among adult patients with and without chronic conditions.

BACKGROUND:
Advances in artificial intelligence (AI), wearable technology, and health-tracking applications are rapidly transforming healthcare delivery and patient engagement. These tools can provide continuous monitoring, early detection of health issues, and personalized care recommendations; however, the success of these innovative technologies depends heavily on public acceptance and trust among patients. While numerous studies have explored the clinical performance of AI algorithms and wearable devices, there is limited research examining how individuals living in communities, especially those living with chronic conditions, perceive these technologies and the extent to which they accept existing and emerging applications.

METHODS:
This observational study employed an anonymous questionnaire administered to adult community members aged 18 years and older in approved public and community resource settings. Participants were recruited in person without randomization, placebo assignment, or control groups. Eligible individuals were English- or Spanish-speaking adults who had not previously participated. After providing verbal consent, participants completed a 10-minute survey assessing attitudes, concerns, and use of artificial intelligence, wearable devices, and health-tracking applications in healthcare. Surveys were completed using paper forms or electronically through REDCap via QR code access. Small, non-monetary incentives valued under $5 were provided upon survey completion.

RESULTS:
Of 75 respondents who completed surveys, 62% were female. The mean respondent age was 41.4 years (range 18-81). Respondents were diverse, with 19% reporting Hispanic ethnicity, 21% Asian, 11% Black, and 47% White. Overall, 61% agreed that AI will improve treatment of medical problems over the next 10 years, while only 11% agreed that AI should help decide which patients are treated first, and 13% agreed that AI should help decide treatment priority in an emergency room or when a patient is ready to leave the hospital. However, 49% disagreed that AI is more likely than human doctors to make biased decisions. While 67% of respondents were comfortable with doctors reading their medical records for research purposes now, only 51% were comfortable with AI doing the same.

CONCLUSION:
Although these responses suggest that community members are hopeful about the future of AI in medicine, there is currently little willingness to trust the technology in healthcare. However, data collection is ongoing, and the complete sample may strengthen or weaken this conclusion.

Recommendations for the Use of Therapeutic Conversational Agents for Treating Depression in Cancer Patients

Abigail Blair

Depression represents a significant comorbidity in cancer patients, correlating with increased mortality rates and reduced treatment adherence. Digital mental health tools (DMHTs), particularly therapeutic conversational agents (TCAs), offer promise in addressing financial, geographic, and cultural barriers to mental health care. However, their deployment in oncology settings requires careful consideration of multiple risk dimensions.

Cancer patients exhibit substantial heterogeneity in depression vulnerability, arising from biological factors (tumor type, disease stage, comorbidities), psychological factors (mental health history, personality traits, prognostic uncertainty), and social factors (support networks, financial burden, survivorship phase). Current TCAs demonstrate efficacy for mild-to-moderate depression in subclinical populations, with users reporting therapeutic alliance scores comparable to human therapists. Yet significant limitations persist: inadequate crisis response capabilities, insufficient validation in populations with severe depression or complex comorbidities, cultural incompetence in diverse populations, and concerning patterns of user engagement decline over time.

This presentation offers clinical guidelines for TCA deployment in cancer populations, integrating risk stratification across biological, psychological, and social dimensions with technology-specific vulnerabilities including emotional dependence, inadequate crisis response, and cultural misalignment. We propose nine evidence-based recommendations addressing crisis response limitations, severity-appropriate use as adjuncts versus standalone interventions, assessment of technology use patterns, regulatory compliance, and patient-centered decision-making. Our recommendations emphasize comprehensive clinical assessment over algorithmic criteria and advocate for open clinical conversations about unprescribed DMHT use.

Beyond implementation guidance, we address fundamental questions about the role of intelligent technologies in therapeutic care. While TCAs may enhance accessibility and efficiency, their deployment risks displacing essential clinical activities: extended conversations about patient values and meaning-making, the development of therapeutic relationships, and collaborative clinical deliberation. Careful deployment must balance technological benefits against potential erosion of the human elements central to healing, particularly for patients confronting life-threatening illness.

These guidelines aim to support clinicians in making nuanced, case-by-case decisions that honor both the promise of digital mental health innovation and the irreplaceable value of human connection in cancer care.

Patterns in Patient Consent to Exam Under Anesthesia by Medical Students

Riya Kumar and Claire Hoppenot, MD

Background:
Pelvic exams under anesthesia (EUA) by medical students provide a unique teaching experience, allowing students to evaluate normal and abnormal pelvic anatomy. These sensitive exams offer no direct clinical benefit to the patient, and many states require specific patient consent, separate from general procedural consent. However, there is no standardized procedure for obtaining such consent, with considerable variation between teaching facilities. In 2024, our hospital system adopted new language asking patients for permission for EUA by medical students within the surgical consent, which included trainee-performed breast, pelvic, prostate, and rectal examinations in a single question. The objective of this study is to retrospectively evaluate patients’ responses to this question when undergoing gynecologic surgery.

Methods:
We conducted an exploratory retrospective chart review in a single academic medical center using multivariable logistic regression to examine associations between patient factors and consent for pelvic EUA by medical students under the inclusive language of the consent.
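For a single binary predictor, the unadjusted odds ratio that such a model estimates can be sketched directly from a 2×2 table, with a Woolf confidence interval; the counts below are hypothetical illustrations, not the study’s data:

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Woolf 95% CI for a 2x2 table:
    a/b = consented/declined among exposed, c/d among unexposed."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not the study's data): patients with prior
# gynecologic surgery consenting vs. declining, against patients without
or_, lo, hi = odds_ratio_ci(45, 25, 56, 62)
```

A multivariable logistic regression, as used in the study, additionally adjusts this estimate for the other patient factors.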

Results:
We identified and reviewed 200 surgical cases completed by the gynecologic or gynecologic oncology services. Twelve cases were excluded: five for patients appearing in multiple surgical cases, three for surrogate decision-maker involvement, and four for absent consent documentation. Of 188 patients, 101 (53.7%) consented to pelvic EUA. There was no significant association between consent and patients’ age, language, race, ethnicity, residential location, insurance, occupation, or surgeon. Likewise, no difference was noted based on several surgical factors including surgical approach, procedure complexity, or diagnosis type. Prior gynecologic surgery was associated with higher consent rate (OR 2.07, 95% CI: 1.04, 4.24, p<.05), with greater effect among patients with a cancer diagnosis (OR 9.7, CI: 1.85, 60.0, p<.05).

Conclusion:
We find that patients who underwent a prior gynecologic procedure are more likely than others to consent to pelvic EUA, possibly because of greater knowledge of and comfort with the planned procedure. No other demographic factor affected rates of consent to EUA by medical students. Additionally, comprehensive language in the consent form does not appear to broadly deter patients from consenting to pelvic EUA in gynecologic procedures. More research is needed to determine how patients understand the consent and why they may accept or decline pelvic EUA by medical students.

Ethical Considerations of Gene Therapy in Huntington’s Disease: A Review of AMT-130

Jennifer Escobedo, Esther Nwana, Faiza Sachwani, and Kimberly Hoang

Huntington’s disease (HD) is a progressive neurodegenerative disorder characterized by motor, cognitive, and psychiatric symptoms, with current treatments focused primarily on symptom management. AMT-130, an experimental gene therapy, targets the underlying genetic cause of HD through adeno-associated virus (AAV)-mediated reduction of Huntingtin protein, offering the potential to slow disease progression. However, new discoveries should be regarded with cautious optimism, as many ethical questions arise with the foreseeable FDA approval of AMT-130, some of which will be expanded upon in this review. Clinical considerations include the safety of AAV vectors, risks of high-dose administration, and the challenges of prolonged neurosurgery. Other factors are the irreversibility of therapy, uncertainty in long-term outcomes, patient selection complexities, and the need for long-term neurological and immunological follow-up, despite expectations of a one-time treatment. These considerations emphasize the importance of thorough patient counseling, shared decision-making, and informed consent. Nobel Prize–recognized advances in immune regulation highlight mechanisms of peripheral immune tolerance relevant to AAV-based therapies like AMT-130, which may help reduce immune-related risks. The novelty of AMT-130 produces a wide, dynamic price range; estimates reach up to millions of dollars. Only a handful of HD centers are equipped to handle AAV gene therapy administration; even fewer have adequate resources to deliver AMT-130 to all their patients. While systemic measures have attempted to increase equitable access to emerging treatments, gaps remain – especially given the underrepresentation of diverse populations in clinical research. Delayed diagnosis in racial and ethnic minorities, due to racial biases and/or socioeconomic determinants of health, may lead to many patients missing their window of opportunity to benefit from AMT-130.
Black and Latino HD patients are often underdiagnosed or misdiagnosed with other neurological conditions, so they may not be considered early-manifest and may no longer qualify for AMT-130 at the time of diagnosis. Another concern, with both clinical and research implications, is that fewer than 20% of individuals at high risk of HD undergo predictive genetic testing. The most common factors limiting the rate of presymptomatic testing (PST) among those at risk for HD are the anticipation of psychological harm, concern about genetic discrimination, and the lack of viable treatments. Studies of other late-onset genetic diseases show that increased awareness of available disease-modifying treatments, like AMT-130, raises the rate of PST among at-risk individuals. Nonetheless, the accessibility of this novel therapeutic will ultimately determine the extent to which the treatment motivates genetic testing. These considerations underscore the need to balance innovation, risk, and equity in the responsible development and application of gene therapies for HD.

Parents’ Perspectives on Interpreter Quality and Accessibility in Pediatric Care: A Multidepartment Qualitative Study

Camila Mallo

Effective communication between medical professionals and patients is essential to delivering safe, high-quality healthcare. Language barriers are among the most common and consequential obstacles to communication, posing serious risks to patient safety and quality of care. This issue is particularly pressing in the United States, where approximately 25 million people report speaking English “less than very well” (Juckett, 2014). Patients with limited English proficiency (LEP) continue to face disproportionate barriers and worse outcomes in healthcare settings. National survey data show that 25% of adults with LEP report being mistreated or disrespected in a health care setting, and 22% report being unable to access care they needed because of language barriers (KFF, 2023). Clinical research has demonstrated that LEP patients are significantly more likely to experience adverse medical events (Divi et al., 2007), have higher 30-day readmission rates, are more likely to misunderstand discharge or medication instructions (Flores, 2005), and experience lower satisfaction with care compared to English-proficient patients (Karliner et al., 2007). Language barriers are especially consequential in pediatrics, where parents must act as surrogate decision-makers for their children. A key strategy to facilitate effective communication between LEP families and healthcare providers is the use of professional interpreters. The use of professional interpreters is widely endorsed: it has been shown to reduce clinically significant errors to 2% of encounters, compared with 22% when no interpreter is used (Flores, 2003). Additionally, parents who receive professional interpretation are twice as likely to report satisfaction with communication compared to those without (Karliner et al., 2007). Despite these benefits, the use of interpreters remains inconsistent. Even when available, interpreter services may not be timely, accessible, or perceived as high quality.
Prior studies have primarily focused on modality comparisons (in-person, telephone, video) or single settings, often emergency departments, leaving significant gaps in understanding how interpreter quality and accessibility are perceived across different pediatric hospital departments, where clinical demands and communication styles may vary. This qualitative study addresses persistent empirical and ethical gaps in the literature by examining how parents of limited-English-proficient (LEP) pediatric patients experience interpreter quality and accessibility across emergency, surgical, inpatient, and outpatient departments. Semi-structured interviews were conducted to elicit parents’ perceptions of interpreter accuracy, empathy, timeliness, and availability, and to understand how these communicative factors shape trust, comprehension, and perceived inclusion in their child’s care. This study identifies both shared and department-specific patterns that illuminate how the quality of interpretation mediates parents’ sense of partnership within the healthcare encounter. Findings show that parents value interpreters who not only translate accurately but also convey empathy and understanding. By comparing parental experiences across diverse clinical contexts, this study illuminates opportunities for healthcare institutions to strengthen interpreter integration and establish more consistent standards of quality oversight. The findings underscore that linguistic access is not merely a logistical concern but a matter of ethical responsibility in pediatric care. They emphasize that equitable communication is essential to respecting families’ dignity, fostering trust, and ensuring informed participation in their child’s treatment. 
These insights call for systemic reforms that embed interpreter equity within hospital governance, clinical training, and professional ethics education, reaffirming linguistic access as a moral obligation intrinsic to family-centered and equitable care.

Parents’ Emotional Response in Robotic Assisted Surgery: A Qualitative Study

Rachel Bobb

Emotional distress among parents is well-recognized in pediatric clinics (Bazus et al., 2024; Malpert et al., 2015; Patton et al., 2020). Such distress may arise due to limitations in the decision-making process, particularly when complex medical technologies are involved. Robotic-assisted surgery (RAS) has become an increasingly common practice in cancer treatment, offering clinical advantages such as enhanced surgical precision and reduced recovery time. However, its use also introduces new sources of uncertainty and anxiety for parents when making treatment decisions for their children (Boscarelli et al., 2023; Mei and Tang, 2023). Despite the growing adoption of RAS in pediatric care, little is known about how parents emotionally experience and interpret this technology. This study addresses this gap by identifying factors that contribute to parents’ distress with RAS operations in pediatric oncology.


Perceptions of RAS are often shaped by misconceptions and safety concerns, which can contribute to parental hesitancy before surgery and their ongoing distress even after decisions are made. As with many newly introduced medical technologies, RAS has faced natural skepticism from both patients and families (Brar et al., 2024; Fairag et al., 2024). A common misunderstanding is that RAS procedures are performed autonomously by machines rather than being controlled by surgeons, leading some parents to question the system’s adaptability during surgery (Rivero-Moreno et al., 2023; Brar et al., 2024). Furthermore, reports of technical malfunction incidents have amplified public doubts about the safety and reliability of robotic interventions (Alemzadeh et al., 2016).


This research is a qualitative study on parents’ emotional responses toward RAS in pediatric oncology. We recruit parents of patients aged 0-18 who are undergoing RAS for cancer treatment. Each participant completes a standardized assessment, including the Decisional Regret Scale (DRS), along with a semi-structured interview to capture the nuances of caregiver experiences. Data are analyzed using a mixed approach, including both a quantitative assessment of emotional distress level and a qualitative thematic analysis to identify factors contributing to parents’ emotional responses following their child’s surgery.


This study helps to develop a better understanding of parents’ emotional responses toward RAS in pediatric oncology and to identify opportunities for targeted intervention and prevention. To date, no study has systematically examined how parents experience and interpret RAS in this context, making this research an important first step toward addressing an overlooked dimension of technologically mediated care. The findings will raise awareness of the emotional burden parents face and inform strategies to improve preoperative counseling, strengthen trust in medical teams, and promote family-centered communication in complex surgical settings.