Is #PatientsUseAI safe and okay? Compare it to what we HAVE. Part 2
Patients around the world - even in the richest countries - need care.
Yesterday this blog started an inquiry into whether it’s safe and reasonable for patients to use AI on their own. The first step was to establish whether AI knows what it’s talking about on medical questions, and the answer was a resounding yes, if you use it well:
The short answer is yes: When it’s properly instructed, it’s better than most doctors, as measured by medical licensing exams.
A passing grade (to become a doctor) is around 60%, most students average 75%, and ChatGPT scored 98% on a sample of questions from USMLE step 3.
And four lesser products scored 84+. (See the post for details.)
So, yes. Its answers are at least as trustworthy as the average doctor's.
But beyond that, what if you can’t get to see a doctor?
What if (like Hugo’s story) the first appointment is months away?
What if there are no experts in your rare condition?
What if (like Sue Sheridan) your E.R. gave you a really questionable diagnosis that worries you?
What if you live in a country where there’s no well-developed health system?
What if you live in America and you can’t afford to access the care that others can get - because universal healthcare doesn’t exist here?
If your child is sick, if you’re sick, if your elder is sick, and there is no care that you can access, what do you do?
That is the question addressed last month by Isaac “Zak” Kohane, MD PhD, in a viewpoint column in the prestigious NEJM:
DO NOT DOWNPLAY THE IMPORTANCE OF THIS ISSUE. Zak’s essay starts with his own inability to help a new-hire AI expert find a primary care doc - in Boston! Much has been written about the physician shortage, in developed nations as well as the third world. So to the list above we could add:
What if you live in America and there are no doctors available … even if you work at a top hospital and have great benefits and great connections?
This is not a new issue for Zak. In June 2023, when generative AI was just months old, Zak spoke at a Harvard conference organized by Beth Israel’s Division of Clinical Informatics. Many other speakers had raised the question of whether it’s ethical to use generative AI, because it hallucinates. He started his talk by saying (as best I recall):
“I’ll tell you what’s the real ethical scandal in healthcare.
It’s that half the people on the planet can’t get healthcare.”
AI can help people who are forced to wait.
This isn’t rocket science: just imagine yourself (or someone you really care about) with a problem and no experts available. You find yourself wondering:
Is this symptom something I should worry about?
What might this be?
What can I do while waiting for a doctor?
Two more points:
1: #PatientsUseAI is not a rejection of doctors. We want doctors.
I keep hearing cautious thinkers wonder why people feel the need to go elsewhere instead of seeing doctors. I hope the thoughts above make clear how out of touch that is with reality - and I hope a voice as authoritative as Zak’s will help them see it.
2: This isn’t just about anxiety. Delays can harm.
The worst part of the physician shortage (or other system clogs) is that while you’re blocked from getting care, your child’s condition may be getting worse.
Zak’s essay points out what should be obvious:
Whether out of desperation, frustration, or curiosity, large numbers of patients are already using AI to obtain medical advice, including second opinions — sometimes with dramatic therapeutic consequences.
and
[T]he public will not wait for rigorous evaluations, and even imperfect tools will be — and are already being — used because of the information void created by growing gaps in access to primary care.
So yes: #PatientsUseAI - and it’s reasonable to do so.
Correction: an earlier edition said Zak’s essay was in NEJM AI. It’s in NEJM itself, not the AI sub-journal.