Three years, 17 doctors, suffering kid, no diagnosis. Then Mom tried ChatGPT.
Doctors usually nail the right diagnosis. But when none emerges, sometimes AI in patient hands can change a life.
As more people have noticed this year that #PatientsUseAI, I want to add coverage of cases that got publicity earlier but were not widely seen.
Here’s one that appeared a year ago on the website of the Today Show. It’s a well-told medical mystery written by Meghan Holohan:
As the article details, mom Courtney’s son Alex developed problems at age 3. Countless doctor visits were fruitless. When ChatGPT came out (in late 2022), she said, “I went line by line of everything that was in his (MRI notes) and plugged it into ChatGPT.” “I put the note in there about ... how he wouldn’t sit crisscross applesauce. To me, that was a huge trigger (that) a structural thing could be wrong.”
Eventually ChatGPT suggested a diagnosis of tethered cord syndrome, which turned out to be correct … though none of the 17 doctors had thought of it. She’d worked a miracle:
He was formally diagnosed 2 weeks later and had surgery 5 weeks after I had found the ‘problem,’ as we called it. My son is doing wonderfully today.
She has since gone public with her full name, Courtney Morales Hoffman, and has deservedly become a celebrity at medical AI conferences. She keynoted the 2024 SAIL conference in May (along with son Alex!), and will do so again on October 1 at a precision medicine event in Boston. She co-presents with Dr. Holly Gilmer, the specialist who confirmed the diagnosis.
Let patients help medicine achieve its potential
The beautiful thing about all these stories is that patients without medical education are using these new tools to help medicine achieve its potential - without adding burden to the healthcare system. It’s truly empowering.
Some doctors think that patients who use AI (or patients who google) are rejecting medical advice. But you can’t say that in a case where the family’s been to seventeen doctors!
(You also can’t say that in a case like that of Hugo Campos and his father - they got an appointment for Dad’s horrible rash, but it was three months away. While waiting, Hugo used his AI skills and solved the problem before the appointment even arrived … but it wasn’t that he rejected doctors. They just couldn’t get in to see one!)
A doctor visit has time limits. A patient’s life doesn’t.
There’s a recurring theme in many patient stories: doctors are often pressured to limit how much time they spend on you, which is bad news for a complex case. As Courtney wrote on her new company’s website,
I put hours into assembling a cohesive medical picture upon meeting doctor after doctor, downloading data from multiple portals and creating comparison charts. I created a 34-month symptom journey with all the doctors we had seen. … As so many doctors told me, they didn’t have time to read his binder.
Even if the doctor’s time is up, when the patient and family leave the office their need goes on. That’s why Courtney had time to work extensively with the AI, honing her skills as she went.
“Good” according to whom??
Courtney’s story appeared on the Today site on 9/11/23. On that very day I was at a conference of health data developers. During the opening keynote speech a famous health IT authority, now in charge of a big AI industry group, said:
“It is just too early for us
to give generative AI to patients
and expect that the results will be good.”
In the patient community this doctor is known for his paternalistic thinking: “Patients don’t really know what they’re doing,” “If patients look at their medical records they’ll be confused,” etc. So when he said this, my fellow advocates asked, “What is he talking about, giving us AI?? We already have it!” But that’s paternalism for you - thinking only about his side of the relationship and viewing patients as helpless, unable to act.
So it was supremely ironic when, right after that speech, word spread about Courtney’s story. Because from the patient perspective, what she achieved is certainly “good.”
Darn good, as her much-happier kid can attest.
One note: Courtney says, “While Chat GPT is credited with solving my son’s ‘problem’ … I truly credit a comprehensive medical file as the reason we found his diagnosis. I hope someday to include artificial intelligence in PC Medfolio to allow others to utilize AI in a similar way that I did, and I am closely watching legislation to implement it at that time.”
Another note: There is no surprise in hearing a healthcare-related professional “authority” say “It is just too early for us to give generative AI to patients and expect that the results will be good.” Nothing is more dangerous to these people than autonomous patients. He confirms it with his ridiculous statement, which would have you believe he has the authority to tell me what I can and cannot do with LLMs. I’m sure he makes a lot of money from his wide expertise, but optimizing life expectancy, or even patient self-assessment, is definitely NOT part of what he does.
A great illustration of the potential power of ChatGPT with the right support: 1) careful curation of extensive data by the patient and caregiver (the mother), 2) using the tool to generate hypotheses about potential causes (a differential diagnosis list), and 3) a human expert in the loop with the patient to confirm the data findings and likely diagnosis, run any further confirmatory tests, and weigh treatment options in light of the patient’s situation and preferences (shared decision making). The patient knows more about their symptoms and their impact over time, and has more incentive to devote whatever time is necessary to solving the puzzle. Doctors have limited time and knowledge, and often don’t pay enough attention to what the patient is saying. The patient is in charge of their healthcare; they should be prepared to manage their physicians and insist on what is important.