
February 25, 2026 35 mins

How often do you use ChatGPT to evaluate your ailments? Did it work? More and more people are turning to chatbots to diagnose their illnesses — with varied success. But when it does work, it can be life-changing. Dr. Dhruv Khullar heard of a case where ChatGPT identified the cause of one man’s years-long gastrointestinal struggles, in seconds. Given a medical system that can fail so many, Dr. Khullar started to wonder, “If A.I. Can Diagnose Patients, What Are Doctors For?” That’s the title of a recent piece he wrote for The New Yorker. Oz sits down with Dr. Khullar to see if there is an answer to this question. 

Additional Reading: 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Welcome to Tech Stuff. I'm Oz Woloshyn, and today I
want to start with a story. It's about a man,
Matthew Williams, who experienced such intense stomach pain that he
felt he had to go to the ER. He was
given laxatives and told that he was just constipated, but
when his symptoms worsened, he went to a new ER

(00:35):
for a second opinion, and it's there they realize that
his intestine is twisting on itself. This is serious, and
Matthew immediately goes in for life saving surgery. But unfortunately
this isn't where his problems end. Every time he eats,
he has significant diarrhea and trouble keeping on weight. It's

(00:58):
completely life altering. Matthew spends years talking to gastroenterologists and nutritionists,
but no one is able to help relieve his symptoms
until ChatGPT. Here's Dr. Dhruv Khullar.

Speaker 2 (01:12):
He puts his symptoms in and he tells the bot,
you know, these are the things that are bothering me most.
You know, when I eat certain foods, that's when it
really starts to trigger my abdominal pain, my diarrhea, et cetera.
And within seconds, ChatGPT comes up with this diagnosis that, hey,
the foods that you're describing are high in this compound
the foods that you're describing are high in this compound

(01:34):
called oxalate, which he had never heard of, that's found
in leafy greens and other types of foods. And he
takes that information to a nutritionist and they design a
diet, and basically he has his life back.

Speaker 1 (01:44):
On the one hand, this is absolutely incredible: ChatGPT
gave Matthew his life back. On the other hand, a
story like this makes you sort of question, what are
doctors actually for? This is a question that Dhruv, a doctor
and a journalist, attempted to answer in a recent article
for The New Yorker. Let's get right into it. It feels

(02:09):
like a seismic moment, right? Because, you know, my stepmother's a doctor,
I've got some friends who are doctors. I remember the
constant eye roll about Doctor Google, like patients coming in
thinking that they'd been able to, like, diagnose themselves with WebMD.
But now here's a story about a guy who was
totally failed by the medical profession, frankly right, and who
was able to do something which many people in the

(02:31):
medical profession still advise against, which was to use ChatGPT to
diagnose himself.

Speaker 2 (02:36):
Right right. I mean it is a seismic shift. And
you know, we can talk about how much the technology
has even advanced since he, you know, put in symptoms
a couple of years ago, and how there are now
specific models that are designed on healthcare information to give
even more effective diagnostic support. But I want to raise
just one other point here. I mean, we talked about

(02:56):
Matthew Williams, we should also talk about this other patient
that was in the article that I highlighted, and this
was a sixty year old guy who was just concerned
about how much salt was in his diet. A concern
that a lot of people, particularly with high blood pressure,
around the world probably have. He turns to ChatGPT,
asks for alternatives, and the chatbot gives him a suggestion

(03:16):
of something called sodium bromide, which is a kind of
an anti seizure medicine that was used in the past
but had had significant toxicity and so isn't really used
much anymore. He orders this compound online based on the
advice he got from ChatGPT. He starts taking it,
it accumulates in his body, he starts hallucinating. He's kind
of out of his mind. He goes to the emergency room,

(03:38):
they find out that his bromide levels are you know,
hundreds of times above normal, and it takes him weeks
to recover, to get it out of his system
and gets his life back. And so I just want
to put those two stories, you know, in the same narrative,
because it shows the tremendous power but also the tremendous
risk of turning to chatbots for health advice.

Speaker 1 (03:58):
Now, you can't possibly know the answer to this question, but
I have to ask you anyway: which of those two stories
do you think is more representative of where we are in
terms of using AI for self-directed healthcare?

Speaker 2 (04:09):
Well, I think they both have some truth. I think
that as people get more familiar with how to use
these things and what the best approaches are to get
the information that you need, and as they get more
integrated into the healthcare system, as both patients but also
doctors get more comfortable with them being along for the
ride in clinical encounters, I think we're going to see
a lot more of that first story, a lot more

(04:31):
of the Matthew Williams type story, and not only that,
but a lot more of being able to be a
kind of co-navigator of the healthcare system, which, as we know,
is incredibly complex. It's hard to access; you're not
always able to get the answers that you want or
need in a timely manner. And this is something that

(04:51):
I think could really support medical care if it's used
in the right way.

Speaker 1 (04:55):
I saw the announcement about ChatGPT Health. I haven't
trialed the product myself or even heard a huge amount
beyond the headlines. Have you played around with
it or heard any patient accounts of using it?
How does it compare to the regular ChatGPT?

Speaker 2 (05:09):
Yeah, so you know, it was released a few
weeks ago, and I think, you know, most
people are still on waitlists to kind of use it.

Speaker 1 (05:16):
You know.

Speaker 2 (05:16):
The idea here is that you can upload your medical
records safely. You're able to put in you know, if
you have an Apple Health app or other device, you
can put some of your metrics in there. The way
that they have presented what it's doing is it's again
supposed to be more of a guide than a diagnostician.
I think they've been explicit about the idea that it
shouldn't replace a medical professional. But if you want additional

(05:38):
insights about your health, it may be useful in that way.
You know, as a general matter, I think, you know,
Anthropic has released something similar with Claude, and I think we're
going to see a lot more of this. I think
these things are helpful, but in a way limited. A
lot of people in the current kind of discussion think
that getting better health is about having more information, having

(05:59):
more personalized information, and that's part of the story.
You want to know. You know what your specific risk
factors are, how your sleep patterns maybe are aiding or
diminishing the quality of your life. You know what you
should be eating, et cetera. But a lot of health
is actually in behavior change. It's not just information. You know,
if you think about someone who's smoking cigarettes, that person

(06:21):
probably knows that it elevates the risk of lung cancer.
That's not a secret. And even if you told them
your risk of lung cancer if you keep smoking is
twenty seven point two percent or whatever it might be,
that's not necessarily going to inspire behavior change. And so
I think part of the conversation is getting the right information,
getting the right diagnosis. But there's this whole other part
of medicine that I think the chatbots and AI are

(06:43):
still not as helpful with, which is how do we
get people to change their behavior, how do we manage
conditions over the long term, how do we deal with
the uncertainty that comes with a lot of diagnoses, and
how do we ultimately get better treatments.

Speaker 1 (06:54):
Yeah, I want to come back to that because I
think you had this great quote in the piece, which
was that, you know, being a doctor feels less like
being Sherlock Holmes and more like Sisyphus, in terms of
constantly pushing a boulder up a hill. But before we
leave Matthew behind, I mean, he said to you, I
trust AI more than doctors. I don't think I'm the only one.

Speaker 2 (07:15):
That was striking. And I think part of it
is that we are in a moment where there's broad
distrust of institutions, and medicine is certainly not immune to that.
Part of that is earned mistrust. Part of that is
the behavior and the rhetoric of political actors. But you know,
we're in this general moment where people don't trust a

(07:35):
lot of expertise and institutional advice. The other part of
it is that, you know, as we talked about his
story is one of kind of being failed by the
medical system in some way. That first time he went
to the emergency room all those years ago, he got
a misdiagnosis. He got laxatives for what they thought was constipation,
and that could even have made his condition worse, because
he had this twisting of the bowel. And then after

(07:58):
he had the surgery and got the correct diagnosis, no one
was able to pinpoint what was really bothering him.
And so I think, as a doctor, as part of
the medical profession, if we set up an antagonistic relationship
with artificial intelligence or companies that are trying to
make health care better and more efficient in some ways,
and we feel like we want to be the gatekeepers

(08:18):
of medical care, I think we are going to force
people into a situation where a lot of folks are
trying to decide, do I trust doctors more, do I
trust AI more? What I'm hoping to push towards as
a medical profession is that we should be leaders in
trying to figure out how best to integrate AI into
medicine and healthcare. How do we make the most of
these technologies to do the things that we can't do? I'm

(08:40):
not able to, in fifteen or twenty minutes, go through
twenty years of medical records for the patient in front
of me and come up with a treatment plan. But
AI could synthesize that information and provide it. We have
to make sure that the AI is doing that in
a valid way, that it's not importing mistakes, and so
there needs to be a lot of validation of that
type of synthesis. In summary, we really need to start thinking

(09:00):
about AI as a partner and how we integrate it
into the care that we're delivering to people.

Speaker 1 (09:06):
I want to talk about diagnosis. You referenced
Kasparov versus Deep Blue, the famous nineties chess face
off between man and chess supercomputer, which kind of took
the world by storm. There was recently a diagnosis face
off between an expert diagnostician who I think you went

(09:28):
to medical school and residency with, and a specialized
medical AI diagnostic system, not just regular ChatGPT but
a system called CaBot. And there was a dramatic
scene, so describe it.

Speaker 2 (09:42):
That's right. So I went to Harvard last year to
witness this kind of showdown with this bot called CaBot.
It is named after this famous physician named Richard Cabot.
That's a great, great play on words. But Richard Cabot
came up with this kind of way of teaching trainees
how to think through complex diagnostic cases. So in the
early nineteen hundreds he's a physician at Massachusetts General Hospital

(10:04):
and he starts a seminar series where an expert physician
gets up in front of the room and is presented
a very complex case and the details come out kind
of drip, drip. You talk about the symptoms and the
labs and he's talking through you know, how he's thinking
about getting to the right diagnosis. And this was kind
of the basis for what CaBot was trained on.

(10:24):
So all the CPCs, these clinicopathological
case conferences. It's trained on this literature, and now there are
hundreds and hundreds of these things.

Speaker 1 (10:33):
Just to clarify for the audience, a CPC, as I understand it,
is basically a kind of form, or a way of
recording the key details of a medical case so that
it can basically be turned into a textbook example, so
that future doctors can learn how a diagnosis was arrived
at. Is that kind of fair?

Speaker 2 (10:51):
Mostly fair. So they're called clinicopathological conferences,
and the pathology part is that at the end of
the case, a pathologist usually has looked at what the
correct answer was. So maybe you did a biopsy, maybe
you got a blood test, maybe you had some stain,
and so you know the right answer. And these are
the most complex cases that came into the Massachusetts General Hospital,
and they were selected because they were educationally instructive. And

(11:15):
then these things were written up and they were published
in the New England Journal of Medicine. And they've been doing
this for more than one hundred years now, and so
they're kind of thought of as the gold standard of clinical
reasoning and diagnostics. If you can solve a CPC, you
could solve pretty much anything. And most doctors, you know,
if you just gave them the case, probably couldn't solve
a lot of these. And these are challenging. And so
I go to Harvard and they're having this kind of

(11:36):
showdown between one of my residency classmates, who you know,
I was always envious of because he was, you know,
he was kind of a god walking amongst the rest
of us residents in terms of his diagnostic acumen. So
he gets up there and he's given the case and
he walks through it. He creates these kind of four big
Venn diagrams of the things that he's thinking
about in terms of labs, and he's thinking about imaging,

(11:56):
and he's thinking about symptoms. And in the middle he
pinpoints this diagnosis called Löfgren syndrome, which is
a kind of an autoimmune condition, and everyone clapped.
It's an impressive display of diagnostic acumen.

Speaker 1 (12:10):
Revealed to be the correct diagnosis. The wizard has
pulled the rabbit out of the hat.

Speaker 2 (12:15):
Exactly, exactly. And it's something that, again, most
people probably wouldn't get. And then CaBot was given the
same prompt and within five minutes it comes up with
a presentation. Within five minutes. Again, you know, most doctors
would get six weeks to do this and present it.
And it's funny, it's kind of professional-sounding
in terms of the voice, casual enough. It talks about

(12:39):
the salient features of the case, and ultimately it arrives
at the correct diagnosis, Löfgren syndrome. And there's kind
of this chill in the room when it comes to
that realization that this AI has basically solved something that
even might be difficult for expert diagnosticians, and it made
me think, you know, in the past, I'd been kind

(12:59):
of skeptical: can AI do the kind of very complex,
difficult cognitive work, the reasoning, that's required to come to
a diagnosis of this level? And now I basically had
to reassess that whole thought process, where I was thinking,
you know, can it do it? And now I say,
how could we not be using this thing? You know,
how can we not be using this as a diagnostic

(13:19):
aid if in fact it is as good as it
seems to be.

Speaker 1 (13:24):
I mean, you mentioned the sodium bromide as a replacement
for sodium chloride, and the gentleman who poisoned himself. Is
that an ongoing risk, or was that a story
kind of characteristic of the earliest stage of generative AI,
which had more hallucinations? I mean, how much
of a hallucination risk is there in CaBot?

Speaker 2 (13:43):
I think that it is definitely still an ongoing risk,
and so we should not be turning over our medical
reasoning to these bots just yet. One of the key
insights of the piece that I wrote, and I continue
to feel this way, is that the effectiveness of a
lot of these models depends on the curation of the
information that is presented to them. So if you give

(14:04):
information to these bots that's organized in the right way,
that has the right salient features, that's talking about things
in a way that, you know, is legible, it
is very, very good at coming to the right diagnosis.
But a lot of medicine is actually about gathering those
clues figuring out how to curate the case in your
own mind. So if you're talking to these models in
broad strokes or don't emphasize the right details, you can

(14:27):
very easily get a different and possibly incorrect diagnosis. And
in fact, I played around with CaBot and I gave
it Matthew Williams's case. Yes, I did. And when I
gave it kind of broad strokes, you know, without a
sufficient level of detail, didn't emphasize things, first of
all, it just made up some stuff, made up his
vitals, and it came to the wrong answer.

(14:47):
It did not deliver the correct answer. When I gave
it the kind of exact transcript of what happened in
the emergency room, what the doctors had thought, the
labs they ordered, how they were thinking about
the process, that's when it nailed the diagnosis. And so
you know, one of the things that's really powerful about
a doctor using it is often they know what are
the salient things we should be emphasizing to the bots,
and patients might not always you know, have that background

(15:11):
and that ability to do that.

Speaker 1 (15:13):
You know.

Speaker 2 (15:14):
The one thing that I want to get to as
well when we're thinking about doctors using these things is
this idea of cognitive de-skilling, when you're offloading
the cognitive work of thinking through something yourself. It's
not so different than a student writing an essay and not
learning to write the essay themselves, or any other type
of, you know, professional who's using these bots.

(15:35):
But I think there's something really challenging about relying
on these things, using them effectively, but not letting them
replace your judgment and your thinking, because then we end
up with a generation of physicians who aren't thinking for themselves.
And when something goes wrong or you're not able to
spot where the AI is making the incorrect judgment, then
patients could really be hurt. And so this idea of

(15:57):
cognitive de-skilling is something that I think we really
need to guard against.

Speaker 1 (16:00):
I think there was a physician quote in your piece
who basically confessed to you that he'd gone through a
whole day at work and realized he hadn't made a
single diagnosis himself. He just outsourced every single one to AI.

Speaker 2 (16:12):
Yeah. It was a medical student actually, and I think,
you know, we need to think a lot about how
we educate students in this environment. But he basically said
that every time he would step out of a patient's room,
he would basically put some version of what the patient
had told him into the bot and it would create
a list of potential diagnoses, and then he would go
present those diagnoses to the physician in charge, the
supervising physician. And he basically looked up one day

(16:35):
and said, I haven't thought about a single patient unassisted
the whole day. So he did, in my view, very
smart thing. He said, I'm not going to use this
for in the way that I have been doing it.
I'm first going to come up with the diagnosis, the
list of diagnoses myself. I'm first going to think about
what I think is happening, and then almost as a
second opinion, use the AI and see, you know, where

(16:55):
some things I might have missed or you know what
it is emphasizing that I didn't emphasize. And that type
of second opinion consultation I think is a much more
effective way of using these bots at this stage.

Speaker 1 (17:06):
I think one of the most striking sort of medical
AI stories of last year was this study that tested
whether doctors plus AI are better than doctors or AI alone.
This one study in fact demonstrated that doctors plus
AI were worse than AI on its own in terms
of diagnosing. To be fair, now more recently
you have the people building these models, I mean, Dario

(17:29):
Amodei in particular, being much more stark about the future
role for humans in a world of AI decimation of
white collar work, of which I guess the medical profession
is in a sense a part: white coats versus white collars.
But why did it happen? Why would doctors be made
worse in this study, rather than better, by using AI?

Speaker 2 (17:50):
Well, one thing to note is that this study was
done at a time where a lot of doctors hadn't
used AI in the past. And so one of the
things that that study raised for me, which as you said,
showed that basically AI analyzing cases by itself
performed better than doctors that were using AI, is that
the doctors didn't really know how to use these things,

(18:11):
what the advantages and disadvantages of using them were, what specific
techniques they should be using. And so the initial study,
as you say, showed that they didn't get any better.
Now in a follow up study, I should note that
the team suggested doctors use AI in specific ways. They
asked, you know, some doctors to read the AI's
output and then analyze their cases. They suggested to other
ones, you know, specific ways to talk with the AI.

(18:35):
They asked other people you know, to come up with
their own working diagnosis and ask for a second opinion.
And in this case, actually the doctors did get better
at using the AI and coming up with the
right diagnosis. And so I think a lot of this
depends on how we're going to be interacting with the AI.
I'm open to the idea that for certain tasks and
for certain things, AI alone will just be better. I mean,

(18:58):
AI is going to be better, you know, as calculators
are better than humans at adding complex numbers. I mean,
that's possible to me. But I think a lot of
the important parts of medicine are still not able to
be automated. I think that a lot of it requires,
you know, judgment and understanding the context and the perspective
of the patient and what's actually happening. You know. I'll
give you an example. The other day. You know, I

(19:19):
had a patient who came in with a cat bite
on her arm, and it looked like maybe
there was an infection there. The infection was getting worse,
red and swollen, and I started some antibiotics. It didn't
seem to be getting better, and so I got
kind of concerned, you know, so let me
ask an AI. And basically its recommendation was pretty stark.
It was: this person could have something called necrotizing fasciitis,

(19:41):
which is kind of a flesh eating bacteria. They could
lose their arm. You should call surgery. They need surgery,
and that's the next step. I had seen a lot
of these types of cellulitis. Sometimes they get a little
bit worse before they get better. You know, I needed
to be able to contextualize what it was giving me,
and so I didn't have that knee-jerk response. We
tried a different antibiotic, it started to get better, and
over time the woman improved. And so I think even

(20:03):
something as simple as that, you can't just you know,
automatically take what the AI is telling you without using
your own judgment to figure out, you know, what
level of priority should I give this output that
I've been given?

Speaker 1 (20:40):
After the break, Dhruv's answer to the question: if AI
can diagnose patients, what are doctors for? Stay with us.

(21:06):
So, if AI can diagnose patients, what are doctors for?

Speaker 2 (21:08):
I think there's a lot of things that doctors
still do that AI is not able to do, and
I think won't be able to do in the near future.
You know, one of the really important things is managing uncertainty.
So even if the AI model gives you an answer,
and often it won't be one answer, it could be
several answers, or it's not clear what to do with

(21:31):
the information that you've been given. Maybe they have the
right answer, the right diagnosis, but there are many treatment
options, and some of those, you know, have trade-offs
between efficacy and side effects and so on. So
there will always be this realm of managing uncertainty, and I
think you want a human, a clinician, someone who's been
trained to think about this to help you with that.

(21:51):
The second is integrating values. I think a lot of
medicine is a science, but it's also an art, and
part of that art is to elicit patient values, to
understand what's important to them, what their preferences are,
in the short run but also in
the long run, integrating those values with the best science available,
and coming up with a treatment plan. And the third is,
you know, you want someone to take responsibility for the

(22:14):
care that you're receiving, particularly for complex or challenging care.
We want someone who's kind of the quarterback. You want
someone who you know, you've been given this cancer diagnosis,
you've been given a heart failure diagnosis. You know you're scared,
you're uncertain about what the path forward is. Many people
in that situation would want someone who is able to
take responsibility for them, to guide them through the most

(22:36):
challenging aspects of diagnosis and treatment. And so I think
there are a lot of things, to say nothing of
the kind of inconsistencies that we've talked about in even
the best AI models. But those kinds of human aspects
of care I don't see being replaced for
a long time.

Speaker 1 (22:50):
Talk about that idea of being less like Sherlock and
more like Sisyphus.

Speaker 2 (22:54):
That gets back to this idea that when people watch
shows like House, they might think that a lot of medicine,
or most of medicine, is kind of sitting around and
trying to crack the case, that most of what we're
doing as doctors is figuring out, you know, this person
ate this meal three weeks ago, or you
look at their fingernail and it has this type

(23:16):
of dirt from this county, and that county has, you
know, this type of bacteria, or something like this.
That's not really the way that medicine works. And
so a lot of medicine, part of it, is getting
the right diagnosis. I don't want to diminish that; we
have a huge problem with diagnostic errors in this country
and elsewhere, and a lot of what we need to
be doing is making sure we have the right diagnosis.
But even when you have the right diagnosis, let's say

(23:38):
someone has emphysema or heart failure or cancer or sickle
cell disease. You have that, you know
what's going on. Managing that condition takes a tremendous amount
of work, a lot of balance between different organs. You
remove the fluid to improve someone's heart failure, it dehydrates
the kidneys. You've got to balance these things together. You know,
you're convincing someone to stop smoking, who has emphysema or

(24:00):
you know, whatever it might be. There's a lot more
just kind of brute force work and staying at it
and making sure that people are plugged in after they
leave the hospital and that they have the supports that
they need. And so the point I'm trying to get
at with the Sherlock and Sisyphus distinction is
that diagnosis will get you only so far. A lot
of medicine is about the management of the patient afterwards.

Speaker 1 (24:23):
It strikes me there are two kind of parallel situations here,
or two different use cases for AI in the medical system.
I believe you're at Weill Cornell, right? That's right. So one
is: you live in New York, you're fully insured, and

(24:43):
you get to go to Weill Cornell when you're sick. How
do you and your doctor harness AI to get you
the best possible care. The other is you live in
Kenya, as an example in your piece, but also
in many places in the US, and are not insured,
don't get to go to Weill Cornell, and AI might

(25:06):
be your only choice, or at least, like,
almost an alternative source of truth versus a complementary source
of truth. Can you talk about both of those sort
of healthcare situations and what's similar and what's different in
terms of how you think about applying AI within them.

Speaker 2 (25:25):
Right, So you know, one of the things that you're
getting at is many people just don't have access to
the type of care that we'd want them to have
access to. That's true, you know, internationally, and it's true
within the United States, either because they don't have adequate
forms of health insurance or because there are simply not enough
doctors in their area. So you know, something like half
of US counties don't have a single psychiatrist, and so

(25:47):
you can imagine the challenges that creates for people with
mental health issues. The other point it raises is that
often when we're talking about AI, we are hesitant
to use it or reluctant to use it if it's
not perfect, if the error rate isn't functionally zero, without
recognizing or emphasizing that there's a lot of error that's
going on in the medical system right now, and so

(26:08):
often it's not the case that it has to be perfect,
but it should be better or as good as what
people otherwise have an opportunity to access. And so, you know,
in an ideal world, you'd like doctors and AI working together,
everyone gets kind of the best level of care. But
in reality, I think this is going to be a
situation in which a lot of people turn to AI,
and if that's absent oversight by clinicians or not in

(26:32):
conjunction with clinicians, they will receive a worse level of
care than they would if those things were working together,
but probably a better level of care or at least
some care, some insight into what's happening with their bodies
their minds that they wouldn't be able to access. And
so we already see a larger level of autonomy for
AI in the healthcare system. You may have seen recently

(26:52):
Utah signed a partnership with AI health company Doctronic, and
so some types of medication refills will be automatically refilled
after you have a conversation with an AI. You know,
I think that's a smart way to start going about
this because most of the medications are pretty low risk medications,
and you know, after all, you're just doing refills and
you're not doing the initial prescription. But you can envision

(27:14):
a world in which much more of healthcare, at least
transactional forms of healthcare. You know, you just have a UTI,
or you need a quick X ray for your ankle sprain,
make sure nothing's broken. Those types of things. Even for
people who are well insured or who have access to doctors,
it might just be more convenient to have the AI
do that and only for the more complex things, the
conditions that require managing uncertainty or certain forms of judgment.

(27:37):
And that's where AI and doctors come together.

Speaker 1 (27:39):
Most of the piece was anchored in a patient experience,
But tell me a bit more about the doctor experience.
I mean, are you unusually interested in this topic because
you're also a writer for The New Yorker and therefore
you have an audience who's hungry to know about AI medicine,
or would you say that you have a kind of
characteristic level of interest in how this might change your profession.

Speaker 2 (27:58):
I think a lot of my colleagues are very interested
in the same questions. I don't think it's just about
being a journalist or you know, having this other part
of my career. A lot of people are using AI already.
You know, we already use various forms of decision support.
We've left the era where a doctor
had to keep everything around in his or her head.
We had first, you know, accessible textbooks and then kind

(28:21):
of digital textbooks, and so we were looking things up
and supplementing, you know, the knowledge in our brains with
those things. But you know, now it seems like almost
everyone gets a second opinion, often from an AI, before
they need a second opinion from a consultant or
a colleague of some sort. And so I think there
is a lot more interest, willingness, and appetite to engage

(28:42):
with these types of technologies, in part because you know,
medicine and the delivery of medicine is broken in a
lot of ways. I mean, there's so much administrative burden,
the time pressure that physicians are under that it can
be very helpful to have this offload some of that pressure.
If it's done in a responsible way. I think the
more general point is that I feel that the medical

(29:04):
profession is kind of fundamentally changing, and it's undergoing this
transformation from a very twentieth century model. Part of that
is keeping things in your head, but part of that
is just being the ultimate authority on all things medicine.
You had all the knowledge, and patients couldn't get knowledge otherwise;
you had all the access, and patients had nowhere else to go,
and you had kind of the ultimate authority. People trusted

(29:26):
doctors and healthcare professionals more than any other part of society,
and those things are all changing. You know, knowledge is
more democratized, first with the Internet, now with AI; access
is becoming democratized too, with direct-to-consumer telehealth companies.
I talked about AI writing prescriptions, and I think more
of that will happen. And then there's been this crisis
of trust in institutions, and so the other thing that

(29:48):
I've been trying to think about is how does AI,
you know, play into this more general phenomenon by which
the medical profession is really undergoing a fundamental transformation in
the twenty first century.

Speaker 1 (29:58):
Yeah, you mentioned sort of an earned crisis of trust.
You wrote another piece in The New Yorker recently talking about
the Gilded Age of healthcare, which got a lot of pickup.
I think you were on CBS News talking about it. Why
do you think that piece struck a chord? And what's the connection
between that piece and your work on AI in the
medical setting.

Speaker 2 (30:17):
Well, I think both of these pieces and a number
of other ones, they get at this idea of the
fundamental frustration that patients and doctors have with the current
state of the healthcare system. And when we think about corporatization,
that is, things like private equity buying practices and hospitals,

(30:38):
that is, things like nominally nonprofit hospitals that actually
behave very much like for-profit entities. That is,
things like insurers engaging in prior authorizations and care denials.
All these things kind of come together to create a
system that isn't really working for anyone. And so what
I was trying to get at in that piece

(30:58):
is that, you know, it's almost like a gilded age: the
healthcare system, much of the time, is seen as a
vehicle through which various people can profit, as opposed to
a vehicle through which we can best help patients and
support the health of people. And that is just so backwards.

(31:18):
And so, you know, to the extent that AI plays
into that story, you know, the hope is that it
creates certain types of efficiencies. You know, a lot of
the documentation and red tape that occurs either through prior
authorization or regulatory reporting, maybe that gets automated. Maybe AI
takes off a lot of the tasks that are preventing,

(31:40):
you know, clinicians from getting to spend more time with
patients in the room. Maybe people feel like they have
better access now because their easy, simple questions
can be answered by AI, and they're able to spend
more time on the more difficult stuff with real doctors. So,
you know, again, the profession is changing. There are many

(32:00):
reasons for that. One of them is that we've become
a much more corporate system than we were, you know,
half a century ago, and maybe AI, you know, has
an ability to get us to a place where, strangely
enough, AI technology could make care more humane.

Speaker 1 (32:14):
Again, just to close, Dhruv, you said, quote, doctors can
remake their profession, working with other powers to help shape rules, norms,
and relationships. What's your prescription for how they should do that?

Speaker 2 (32:29):
Well, I think we need to think about how we
can engage beyond the walls of a clinic or beyond
the halls of a hospital. And so that requires leaning
in to the ways that medicine is changing and trying
to play more of an active role. And so, you know,
I've talked about this idea that the medical profession is changing.

(32:51):
We can you know, focus on gatekeeping and keeping others out.
We can kind of retreat into this idea that we're
just going to focus on kind of technical skills, making
sure that we're compensated in a certain way, or we
can kind of reinvent the profession. We can kind of
embrace this world that we're now in. Part of that
means being on social media, doctors who are talented in

(33:12):
that way, making engaging videos that are both informative and
truthful and getting people's attention that way. Part of that
is getting involved with these new healthcare startups and companies
that are using technology to try to improve care. Part
of that is running for office, you know as physicians.
Part of that is banding together in professional societies, and

(33:32):
you know, at a time where the information coming out
of the federal government isn't always accurate and reliable. Maybe
people are turning to these alternate forms of knowing, by
which I mean professional societies, local health departments, these other
ways in which we feel like we can get the
best health information across to people. And so the idea
is that we're not a hegemon anymore. The

(33:54):
medical profession is not just something that people blindly follow.
It's something that is going to require work to engage
these other actors and these other ways of becoming leaders
in the public sphere.

Speaker 1 (34:05):
Dhruv Khullar, thank you for joining Tech Stuff.

Speaker 2 (34:07):
Thanks so much for having me.

Speaker 1 (34:30):
That's it for Tech Stuff this week. I'm Oz Woloshyn.
This episode was produced by Eliza Dennis and Melissa Slaughter.
It was executive produced by me, Karah Preiss, Julian Nutter,
and Kate Osborne for Kaleidoscope and Katrina Norvell for iHeart Podcasts.
Jack Insley mixed this episode and Kyle Murdoch wrote our
theme song. Please rate, review, and reach out to us

(34:50):
at tech Stuff podcast at gmail dot com. We love
hearing from you.
