Forced to wait while Dad suffers, Hugo Campos takes action: #PatientsUseAI
An empowered patient explores, learns, and finds an answer.
Patient uses of AI are accelerating more rapidly than I anticipated, and I’m discovering there are methods most people don’t know about that will help us even more. Starting with this super-inspiring story on #PatientsUseAI, I’ll try to spotlight examples.
What do you do when Dad’s suffering and his appointment is months away?
Hugo Campos lives in Oakland. He’s caregiver for his elderly father, who moved here from Brazil (as did Hugo, many years ago) and speaks only Portuguese.
This March his father came down with an intensely pruritic (itchy) rash - the kind nobody can resist scratching - on his arms, legs, and back. It was torment.
They went to the doctor, who said it was some sort of dermatitis and referred them to dermatology.
The first available opening was in August - months away.
What would you do? Your elderly father is miserable every day: “It was horrendous,” Hugo says. “He couldn’t sleep.” Can this kind, elderly man tolerate this suffering for months??
What can you do? Are you powerless? Or empowered?
As a patient with a severe heart condition, Hugo has been highly engaged in his own care for years, so when AI came along he quickly learned to “ride that new horse,” and was astounded at what he could learn to do, with practice. As he blogged six weeks ago,
I have used generative artificial intelligence (AI) extensively over the past year. I have relied on AI for various tasks, including preparing for appointments by organizing my health information, weighing the pros and cons of different medical interventions, and summarizing medical notes for a family member not fluent in English.
He learned that you’re not limited to asking LLMs questions about the world, or having them write poetry etc - you can feed them gobs of data and tell them to analyze it. (At one point he downloaded 112MB of PDFs from his various hospitals, uploaded them to ChatGPT, and asked it questions.) He also learned that different LLMs are better at different tasks: GPT’s no good at finding data in PDFs from Epic; Perplexity lets you search academic papers…
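For readers who want to try this, one practical hurdle is that a big record export won’t fit in a single prompt. Here’s a minimal Python sketch of one common workaround - splitting the text into overlapping chunks to feed in pieces. The sizes and function name are illustrative; this is not Hugo’s actual workflow.

```python
# Illustrative sketch: split a large exported record into overlapping
# chunks that fit an LLM's context window. Sizes are placeholders.

def chunk_text(text: str, chunk_chars: int = 8000, overlap: int = 500) -> list[str]:
    """Split text into chunks of at most chunk_chars, overlapping by
    overlap characters so content cut at a boundary appears in both."""
    if chunk_chars <= overlap:
        raise ValueError("chunk_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_chars])
        start += chunk_chars - overlap
    return chunks

record = "Progress note: ... " * 2000   # stand-in for a long record export
chunks = chunk_text(record)
print(len(chunks), "chunks of up to 8000 characters each")
```

Each chunk can then be pasted (or uploaded) into a chat one at a time, with a final prompt asking the model to synthesize across them.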
So when he decided to help his dad, he was “literate” in an AI sense: he knew what he wanted to pursue and he knew how to “speak” to the systems - how to prompt them to evoke the desired response.
What followed was a lengthy process, not a “one and done” answer. But watch how he used the tools and what eventually emerged.
1: Gather the data
So now for his father’s case, he gathered and organized all kinds of data:
Photos of the rash
History notes
Lab data, including tests ordered in the past by his cardiologist and data on his CKD (chronic kidney disease). The PCP hadn’t cited these.
etc.
This is a lot of manual work, and he looks forward to the day when AI tools will do it for us ... because, like a doctor, a medical AI will be limited in what it can do without a substantial, well-organized record. (This hints at the coming need for an LHR (longitudinal health record), as described last week by patient-father James Cummings.)
2: Tell it what role to play.
Here Hugo used skills he’s developed in the past year, artfully instructing different LLMs to produce the kind of result he wanted. He learned a lot by browsing examples in the library of prompts offered by Anthropic, maker of the Claude 3 Opus AI.
Knowing that the major LLMs have passed the medical licensing exam (GPT in 2023, and all five passed a 50-question sample this July), Hugo wanted to “talk to them” as if they were doctors. (Makes sense, if they can answer medical questions accurately!)
But he also got more specific, as needed, and also experimented to see which results he liked best. Here are some examples.
First, since an LLM can “speak” at any level and with any level of detail, give it a voice:
<your_role>
You are a dermatologist fellow presenting a case of a 94-year old male patient with 6 weeks of pruritic rash on arms, thighs and lower legs.
</your_role>
To ChatGPT he said:
You are a dermatologist assessing a 94 year old male patient who is experiencing a pruritic rash on his arms, thighs and lower legs. …
(Why did he use multiple LLMs? “I don’t trust any one of them to know the truth,” he says. “I don’t ask them for answers - I use them to help me think. I compare what the different models tell me.”)
3: Describe the case.
Example, to Claude:
<context>
The patient is a 94-year old man with a 6-week history of a pruritic rash on the arms, thighs, back and lower legs
The patient has chronic venous insufficiency for over 10 years
There are no signs of infection like fever, pain, significant edema
…
</context>
To GPT-4o, he uploaded some documents (as attachments to the prompt) and said:
First, read through the clinical notes uploaded to your knowledge base. Then look at the photos of the patient that show the skin rash. …
4: Guide its thought process.
That style of prompt (“first … then …”) is an example of chain-of-thought prompting. Hugo uses it, along with “zero-shot prompting,” as described at this ChatGPT page.
Why is this necessary? Well, the first 18 months of LLMs taught us that while they’re great at language and facts, they’re not so great at organizing their thought process - but you can get around that by telling them what to do first.
What he fed Claude:
<differential_diagnosis>
There are several possible diagnoses, including 1) contact dermatitis from a new topical irritant or allergen, 2) Scabies, given the severe pruritis, 3) Xerotic eczema, since dry skin is common in the elderly and can be intensely pruritic. There may be other potential diagnoses. Keep an open mind.
</differential_diagnosis>

<instructions>
1. Examine the provided photos uploaded to the knowledge base
2. Consider the differential_diagnosis provided, but keep an open mind to other possibilities not included here.
3. Then create a 4-column table containing (a) ranking with the most likely diagnosis on top, (b) the diagnosis, (c) the reasons and considerations why you gave it this ranking, (d) recommended course of action the patient can take to mitigate the problem.
4. Finally, walk me through your decision-making process and tests like skin scraping for microscopic exam, a skin biopsy, etc, that can help narrow down the diagnosis.
</instructions>
Let’s pause for a minute.
LOOK! This is not how most people think about “talking” to an AI:
He tells Claude.AI “keep an open mind” to other possibilities he hasn’t mentioned. (Imagine saying that in an ordinary Google search.)
He asks for the results of the thought process to be presented in a 4-column table - because Hugo’s already thought out what information he’s seeking, to inform his own decisions: “Rank the most likely causes, and why you think so, and what to do about each one.” (If you’ve ever done any programming, you’ll recognize this as an essential step in asking a computer to generate information for you.)
And he tells it to “walk me through your decision-making process”! (In other words, “show your work.” This is now recognized as an essential step in preventing AIs from hallucinating, and it’s essential for Hugo’s “help me think” objective.)
I won’t take time to also paste in his GPT-4o prompts, or the rest of the lengthy process … by now it should be clear that a patient with no medical education can learn to “ride that pony” quite well. The link to all of it is at bottom of this post.
Along the way he sometimes had the AI translate various outputs into Portuguese for his dad to read, to help him be engaged in his care. “I relied on his experience with the rash to help guide me and the AI. Teamwork.” Patient empowerment folks have long said that you can’t complain about health literacy if you don’t first consider the clarity of the material. AI can help.
What the AIs produced for him
This process went back and forth, as he tried various things and then followed up in different ways over a period of weeks. It was very much like having a series of meetings with a skilled partner or consultant.
As one example of an interim output, here’s a partial screen capture of the 4-column table he requested above, as produced by Claude 3 Opus:
Did you know an LLM can do that?? People who only read about AI don’t. That’s why you gotta learn to “ride that horse” by doing it yourself - or you’re already becoming outdated.
Remember, this table was just one “opinion.” He got another differential from GPT and thought about them both. He could have explored with others, just like asking several expert friends - but when patterns emerged, he thought about trying something.
(He said I should also mention that he also used plain old Googling, and uploaded his dad’s photos to Google Image Search, etc.: “I’ll use anything I can get my hands on.”)
Deciding when to try things
At this point he realized there were actions worth trying, because there were enough commonalities in all the output that some patterns had emerged, including things that required no prescription: reduce hot showers; tweak the diet for his kidney condition; increase the use of topical creams.
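That “look for commonalities” step can even be made mechanical. A toy Python sketch - the model names and diagnosis lists below are invented placeholders, not Hugo’s actual outputs - that keeps only the diagnoses every model mentioned:

```python
# Toy sketch: find diagnoses that every model's differential agrees on.
# The model names and diagnosis lists are invented placeholders.

differentials = {
    "model_a": ["xerotic eczema", "contact dermatitis", "scabies"],
    "model_b": ["xerotic eczema", "scabies", "drug eruption"],
    "model_c": ["contact dermatitis", "xerotic eczema"],
}

def consensus(diffs: dict[str, list[str]]) -> set[str]:
    """Return the diagnoses present in every model's differential."""
    if not diffs:
        return set()
    its = iter(diffs.values())
    common = set(next(its))
    for dx_list in its:
        common &= set(dx_list)
    return common

print(consensus(differentials))  # the diagnoses all three models mention
```

Of course, agreement among models isn’t proof of correctness - in Hugo’s spirit, it’s just a signal about where to focus your thinking (and your questions for the doctor).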
In ten days the rash started improving, and it’s now essentially gone.
Fleshing out the vision: patient autonomy
Note that at no time in this process did anyone have to say, “Sorry, our 20 minutes is up,” nor did Hugo ever have to suppress questions due to time running out.
Autonomy is the ability to pursue whatever you want, however you want, for as long as you’re willing to keep working at it. Hugo has long had a clear vision of having as much autonomy as possible over his own health, and he shares that vision. He’s worth following.
Six years ago, long before generative AI, he blogged a fantasy of being able to cardiovert himself when he went into atrial fibrillation, as detected by his $99 Kardia AliveCor iPhone EKG.
This May, in similar style, he blogged an AI fantasy: Howard, my healthcare agent, in which a future AI tool specific to him - his records, his LHR, his needs - responds to questions by analyzing all available information about him, not about average patients.
He wrote that post while working on his father’s rash, so he knew exactly what he was talking about. He knew how much work went into gathering the data (especially extracting it from the sludge that EHRs export), and he knew how much work goes into composing the prompts that produced this result … then he realized he could use AI to reduce that drudgery too! (Yes, he had Claude write prompts that he fed to GPT.)
And he had GPT-4o act as a writing coach to improve his draft of the “Howard” essay.
A new era is truly dawning
What Hugo did would be pretty much impossible until LLMs - but not unimaginable, as people like Hugo reach deep to sense what they really want. When you do that, you start to see “To do that, I’d need this, and this…” And when those things start to materialize, it’s exciting.
But to have these possibilities bear fruit requires culture change, and nothing helps that as much as when the establishment recognizes what you’re doing. That’s why it was big news when the top-tier journal NEJM AI published Carey Goldberg’s column When Patients Take AI into Their Own Hands, as I wrote here in April.
And in case you didn’t notice, the column asked:
Is anyone even studying this phenomenon? The editors of this journal have thus far received no submissions of studies aiming to assess what happens when patients take generative AI into their own hands…
This is exactly why I started this Substack, prompted by the great patient advocate Grace Cordovano. If nobody’s writing about what patients are doing, nobody will know it’s happening - and they’ll get the wrong idea about what patients can do.
Takeaways
You have just read a compelling true story of severe suffering that was relieved in a situation where the overloaded health system could not help. Do not look away.
Takeaways:
#PatientsUseAI. Empower and enable us. This is most important because a lot of people think all the public needs is some carefully vetted tools so that we don’t hurt ourselves. Ask Hugo what he thinks about that! :-)
Remember what I’ll call “Hugo’s Law”: “I don’t ask LLMs for answers. I use them to help me think.”
We need tools (apps, APIs, AI methods etc) to pull together our data and organize it. (So do clinicians, by the way!)
Difficult cases require a full LHR. Someday we’ll all have one but right now it’s ultra important for difficult cases like James Cummings’ kids and Hugo’s dad. AIs can’t provide insights based on data they don’t have.
We need clear, simple, “reel”-style video clips and short tutorials (prompting tips) to boost the public’s AI skills. Why short? So the public will consume them.
Today most people (including most doctors!) are only casual users of AI, but that was also true for years about web searching in the 1990s. Enable and encourage patient uses of AI - and study them, as NEJM AI says should be done.
The happy ending
Hugo and his dad, out for a traditional Brazilian breakfast.
Hugo has published an extensive list of some of the prompts he used in this project, including screen captures of uploads and results, in this Google Doc.