Today’s Solutions: December 18, 2025

At the Optimist Daily, we’re always on the lookout for solutions, whether they’re for lighter issues like gardening or for much heavier topics, like how best to prepare for death. Today, we’ll be touching upon the latter.

The question of how and when to prepare for death is among the most difficult and human of conversations, one that centers on our (perhaps unique) ability to grasp, turn, and examine each facet of our mortality, like a diamond under a loupe. Yet, surprisingly, these important conversations are increasingly being guided by very non-human advice: artificial intelligence. For doctors and patients, crucial but difficult decisions about end-of-life care cannot be made until a conversation about dying begins. But the taboo around death and the fear of discouraging patients often delay such conversations until it is too late.

Writing in STAT, Rebecca Robbins interviewed over a dozen clinicians, researchers, AI developers, and other experts on the role of machine learning in addressing patients’ mortality. “A lot of times, we think about it too late — and we think about it when the patient is decompensating, or they’re really, really struggling, or they need some kind of urgent intervention to turn them around,” said Stanford inpatient medicine physician Samantha Wang.

The nudge provided by AI may help doctors and patients have this difficult talk before it’s too late.

Multiple artificial intelligence models are being applied to palliative care. The models use various machine learning techniques to analyze patients’ medical records, availing themselves of vast troves of data to generate mortality probabilities. These AI actuaries are trained on, and then tested against, data from patients who have already been treated, including diagnoses, treatments, and outcomes (discharge or death); some also include socioeconomic data and insurance information, Robbins writes.
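To make the general idea concrete, here is a minimal, purely illustrative sketch of how such a model might be trained on historical records and then used to score current patients. It is not any hospital’s actual system; the file names, column names, outcome definition, and choice of model are assumptions made for the example.

```python
# Purely illustrative sketch: train a simple mortality-risk model on
# historical patient records, then score current patients.
# File names, column names, and the model choice are all hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Historical records with numerically encoded features (diagnoses, treatments,
# lab values, etc.) and the eventual outcome: 1 = died within 12 months, 0 = did not.
records = pd.read_csv("historical_patients.csv")
X = records.drop(columns=["died_within_12_months"])
y = records["died_within_12_months"]

# Train on past patients; hold some out to check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score current patients: each gets a probability between 0 and 1,
# intended as a prompt for a conversation, not a verdict.
current = pd.read_csv("current_patients.csv")
mortality_risk = model.predict_proba(current)[:, 1]
```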

From there, clinicians receive notifications about the patients the algorithm deems at highest risk of death, prompting that difficult discussion. Those messages have to be considered and curated carefully; at UPenn, clinicians never receive more than six at a time, to avoid overwhelming doctors and generating alarm fatigue. At Stanford, the notifications do not include the patient’s mortality probability.
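The same caveat applies to the sketch below: it simply illustrates the kind of throttling Robbins describes, with made-up identifiers and a hard cap of six alerts per clinician, and is not the actual UPenn or Stanford code.

```python
# Illustrative sketch of alert throttling: rank flagged patients by predicted
# risk and send each clinician at most six notifications, leaving the raw
# probability out of the message itself.
MAX_ALERTS_PER_CLINICIAN = 6

def select_alerts(flagged_patients):
    """flagged_patients: list of dicts like
    {"patient_id": "A123", "clinician": "Dr. Lee", "risk": 0.83}."""
    alerts = {}
    for p in sorted(flagged_patients, key=lambda p: p["risk"], reverse=True):
        queue = alerts.setdefault(p["clinician"], [])
        if len(queue) < MAX_ALERTS_PER_CLINICIAN:
            # The notification identifies the patient but omits the score.
            queue.append({"patient_id": p["patient_id"]})
    return alerts
```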

“We don’t think the probability is accurate enough, nor do we think human beings — clinicians — are able to really appropriately interpret the meaning of that number,” said Stanford physician Ron Li, per STAT.

It’s odd to think about relinquishing such a heavy human burden to artificial intelligence, but perhaps that is the appeal of it too. Even for highly experienced doctors, deciding to raise the subject of death with a patient is incredibly difficult. With accurate, highly selective AI, doctors can feel more certain they are making the right choice.
