Skepticism Surrounds AI in End-of-Life Care Decisions

Wed 28th May, 2025

Recent research indicates that public trust in artificial intelligence (AI) for making end-of-life care decisions remains low, with many individuals preferring human judgment over machine-based assessments. The study, led by the University of Turku in Finland, examined how people morally evaluate decisions made by AI and robots compared with those made by human doctors.

The international research involved participants from Finland, Czechia, and Great Britain, who were presented with various medical scenarios concerning end-of-life care for patients in comas. The findings, published in the journal Cognition, reveal a significant preference for decisions made by human doctors, particularly concerning euthanasia.

Michael Laakasuo, the principal investigator of the study, notes a phenomenon termed the Human-Robot Moral Judgment Asymmetry Effect. This effect suggests that people hold AI systems to a higher standard than human decision-makers. Despite AI's increasing presence in healthcare, the research highlights that individuals often view human doctors as more competent in making critical moral decisions.

In scenarios where life support was to be discontinued, acceptance of decisions made by AI was markedly lower than acceptance of the same decisions made by humans. Conversely, when the decision was to maintain life support, acceptance levels were comparable for AI and human decision-makers. Notably, this disparity diminished when patients were conscious and had expressed their own wish for euthanasia.

The research also suggests that many individuals perceive AI's capacity to justify and explain its decisions as limited, contributing to the reluctance to accept AI's involvement in clinical roles. Laakasuo emphasizes the importance of patient autonomy in the context of AI applications in healthcare.

As AI's role in various sectors continues to expand, understanding public perceptions and reactions is critical for integrating these technologies in an ethically acceptable manner. The research underscores the complexity of moral judgments surrounding AI in healthcare: the same decision is perceived differently depending on whether a human or a machine makes it.

In conclusion, the study highlights the need for ongoing dialogue about the ethical implications of AI in medical decision-making, ensuring that future AI systems align with societal values and expectations.