
August 27, 2025 32 mins

This week, we went to the doctor. Oz speaks with Dr. Matthew Lungren, the Chief Scientific Officer for Microsoft Health & Life Sciences, who co-authored a study showing that AI diagnosed complex medical cases four times faster than human doctors. Dr. Lungren walked us through how multiple AI agents worked together to generate their diagnoses, what that means for the future of medicine—and how human doctors and AI could collaborate to build a more democratized healthcare system.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:13):
Welcome to TechStuff. This is the inside view. I'm Oz Woloshyn, here with Karah Preiss.

Speaker 2 (00:19):
Hello. So, Oz, I'm very curious to know more about the story you've brought me this week, since it's a topic we've discussed a lot on this podcast.

Speaker 1 (00:27):
Yes, so today I've got a story about AI in healthcare, specifically AI and diagnosis. I spoke with Dr. Matthew Lungren, who is the Chief Scientific Officer for Microsoft Health and Life Sciences, about this blog post that Microsoft recently published with the title "The Path to Medical Superintelligence."

Speaker 2 (00:47):
Do I want to know what medical superintelligence is? It's bigger than just regular intelligence. But I actually heard about this study. It was everywhere, and if I remember correctly, it was that the AI was better at diagnosing than doctors.

Speaker 1 (01:00):
Right, yeah, that's right. In fact, four times better. There was a headline in Time magazine which really says it all: Microsoft's AI is better than doctors at diagnosing disease. Special shout out here to Elliot Fishman, who's our old friend. He's a professor of radiology at Johns Hopkins, and he runs this fascinating email group that discusses new developments in AI.

(01:22):
Matthew Lungren and I are both members of this group, and Matthew is also one of the authors of the study.

Speaker 2 (01:28):
What kind of doctor is Doctor Lungren?

Speaker 1 (01:30):
Like Elliot Fishman, our friend, he's a radiologist by training and has a public health background. He was hired at Stanford, where he started using machine learning to analyze large data sets. Here's Matthew.

Speaker 3 (01:43):
Eventually my lab grew into a very large AI center at Stanford, which bridged the computer science department and the medical school, and kind of saw translation of the newest techniques into healthcare applications accelerate. Taking that work further, I went to Microsoft on sabbatical at Microsoft Research and realized that a very similar opportunity was there in big tech if

(02:07):
you could start to connect the latest technology to problems
in healthcare. And so that's how I came to be here,
and that's kind of what I still do all day.

Speaker 1 (02:15):
And Matthew is also one of the authors of the
Microsoft study.

Speaker 3 (02:18):
I believe that the human expert plus these expert systems
together will ultimately deliver better care.

Speaker 4 (02:25):
No matter what.

Speaker 3 (02:25):
profession you're in, there's always a gray-haired person that has, you know, in some sense, seen it all and kind of compressed that into their brain, and their pattern matching, in a way, is just faster than folks that don't have as much experience. And that's true anywhere, but certainly in medicine, right. I think that the assistance or ability of AI to now sort of connect dots in

(02:46):
ways that maybe can achieve that wisdom or that experience
and bring that to the surface.

Speaker 4 (02:53):
It's kind of an unprecedented time.

Speaker 1 (02:55):
Not only was the performance exceptional, AI four times better than human doctors, but one of the things I found most interesting about the study was that it wasn't just one single AI model doing a diagnosis. It was a whole team of AI models that were able to talk to each other in order to come up with hypotheses, order tests, and ultimately come up with a diagnosis.

Speaker 2 (03:16):
So multiple AI models seems a little bit unfair.

Speaker 1 (03:20):
Yes, and in fact we talked about this. The doctors in the study were not allowed to call specialists to help them with their diagnosis, but the AIs were allowed to talk to each other. So doctors are not going to be made obsolete anytime soon.

Speaker 2 (03:32):
Well good, because I have a physical coming up and
I don't need four AI models being like, well, this
girl got real big this year.

Speaker 1 (03:40):
Now, as you and I already know, people are already using AI regularly to diagnose themselves. In fact, I think more than ten percent of overall ChatGPT traffic is around medical stuff. This is not always music to the ears of doctors, so it was interesting to look at an example where this is actually an AI built for doctors and to work with doctors, rather than patient-facing.

(04:03):
And the other interesting thing for me, which we talk about with Lungren, which we'll get to, is how this idea of multiple AIs talking to each other can simulate the experience of the best hospital systems in the US for people who otherwise might not have access to these panels of experts.

Speaker 2 (04:22):
I can't wait to hear what you learned from him.

Speaker 1 (04:25):
Well, here's the rest of my conversation with doctor Matthew Lungren.
So you're a trained doctor, and I want to start
with the basics, which is diagnosis. I'm not sure when
the last time you made a diagnosis on a patient was,
but I'd love to hear from you as a doctor.
What is the process of diagnosis?

Speaker 4 (04:43):
Yeah, I mean it depends quite a bit on the specialty.

Speaker 3 (04:46):
But as most people know, the classic image of a physician, right, is to speak with the

Speaker 4 (04:52):
patient, kind of do a Sherlock Holmes kind of thing.

Speaker 3 (04:54):
Everyone's seen the shows, like House and things; they kind of sensationalize the approach.

Speaker 4 (04:58):
But really there's a lot of unknowns that you have
to tease out.

Speaker 2 (05:01):
Right.

Speaker 3 (05:01):
You have to interview the patient, you have to obviously interpret labs and other information, and you have to start to narrow things down and order appropriate tests. Try not to chase too many of what we call the zebras, but keep those in mind in case you're dealing with one, and...

Speaker 1 (05:15):
The zebra would be the classic House episode.

Speaker 3 (05:17):
Right, yeah, right. Well, every House episode is a zebra, which actually has some relationship to the study we're going to talk about today. But in general, it's more common to have an uncommon presentation of a common disease than a common presentation of an uncommon disease, if that makes sense.

Speaker 1 (05:33):
Right, right, right. And this kind of relationship between AI and doctors has been going on for a few years. I remember reading a great piece in The New Yorker about how one of the challenges for AI was that the best doctors can't actually tell you in words why they're good at making diagnoses.

Speaker 4 (05:54):
That's right. It's interesting.

Speaker 3 (05:55):
I think there are things that humans have, many cognitive biases that are well understood, and I think, you know, keeping that in check while also trying to leverage the information in front of you, and not be affected by the case you just saw or something you just heard at a conference, or an error that you experienced years ago that's still impacting the way that you think about diagnoses.

(06:18):
And I think those biases have been well published and discussed ad nauseam in healthcare, but we're kind of dealing with this new human-plus-AI dance.

Speaker 1 (06:29):
That's fascinating. Yeah. I mean, I actually slipped and fell down a few stairs at the weekend and bashed my head slightly on one of the stairs, and then didn't feel very well, and I was like, I wonder if I could be concussed. So I took a selfie and sent it to ChatGPT, and it said my eyes look fine. So, actually, if I'd been more worried, I would have gone to the doctor. But there's a

(06:50):
kind of a dark side to that as well.

Speaker 3 (06:51):
Yeah, I mean, I think it sounds like you did okay. But I would say, there's an old saying in healthcare, particularly during the rise of the Internet, right, which is kind of the other similar kind of technological advancement that impacted healthcare. We used to say to our patients, you know, your Google search does not replace our medical degree, right. And that wasn't meant to be condescending, but it was just sort of like we had to sort of pull them back from the abyss of going down a

(07:14):
rabbit hole where every ache and pain was immediately terminal cancer, right, that kind of thing. But today it's different. To reference the experience you just mentioned, that's happening everywhere. In fact, at the recent OpenAI launch of GPT-5, they spent fifteen minutes talking with a patient who went through a very difficult battle with cancer and worked with the model

(07:35):
herself, and was able to have very complex medical jargon explained to her in plain English, and it was able to help her with questions to ask the physician. And as someone who still practices and sees patients today, I have to say my patients are better informed than maybe ever, and it's kind of changing the bar with this classic

(07:56):
information asymmetry problem, where the patient has to kind of keep up with the technical speak and all the information that we spend decades learning.

Speaker 4 (08:04):
It feels like there's almost a better playing field.

Speaker 3 (08:06):
So I can have this conversation with my patient almost at a peer level, right, and then we can go through the care journey together. I'm extremely excited about that prospect.

Speaker 1 (08:15):
Taking a couple of steps back, I mean, you mentioned you've been in and around this since twenty twelve, twenty thirteen. Why do people want to use AI in medicine?

Speaker 3 (08:24):
Well, it's an incredibly challenging discipline, and it has only become more so, maybe, in the last ten or fifteen years. One of the things that is going on is that medical information is doubling roughly every ninety days. That trend has been going on for a really long time. And what does that mean? Publication of papers, new therapies,

(08:46):
new guidelines, all these things keep stacking up, right. And so just because you've been through medical school and training, right, we have lots of systems in place to help us continue our education. But really the reaction to that has been to subspecialize, and in some cases sub-subspecialize. So to give you an example, I am a diagnostic radiologist, so that's the bigger specialty, and then I specialize in

(09:08):
interventional radiology, which is image-guided procedures, basically, and then I am further specialized in the pediatric version of that. So that's like a Russian nesting doll of specialties. And you see that throughout healthcare. And that is partly due to the complexity of care that's required for some patients, but also it's due to the information tidal wave and

(09:29):
being able to hold all that in a human mind, right, with all of our limitations. And so AI, I think, at least the work that we've been doing here, is starting to provide a counter-narrative to needing to be sub-subspecialized in order to be able to manage information and take really good care of your patients across a wide variety of complex diagnoses. And I think that

(09:52):
that's really where the excitement is, I think, right now: can I use this system to augment my ability to care for patients?

Speaker 1 (10:00):
And why isn't AI more ubiquitous in medicine? And what has been the integration challenge up until now?

Speaker 3 (10:08):
Well, there's a whole podcast just on that, Oz, I would say. But the short version is that we have been an incredibly skeptical discipline; it's skeptical of new technology and at the same time extraordinarily risk-averse, for good reason, right. We require significant evidence to change the way we practice.

(10:29):
We have, you know, as you know, clinical trials that take years and years, and some still fail, actually many fail, and we accept that as the system that keeps our patients safe and keeps us on the cutting edge. I think in terms of just the technical mechanics of adoption, we have a very rigid system in the software world, too, though that is changing. Again, what's so exciting

(10:49):
about this is that any physician can pull out their cell phone and interact with this cutting-edge AI without needing to have, you know, three- or four-year-long cycles of integration with software. Right, and it's just the early days, but those are the trends that we're seeing.

Speaker 1 (11:05):
Just to take a step back, I guess the classic model of measuring AI performance versus doctor performance was to present a hard problem or a hard diagnostic conundrum, ask for an answer, and measure answer versus answer. How is that different to what you've done?

Speaker 4 (11:22):
Yeah, well it's even less precise than that.

Speaker 3 (11:25):
So, the way up until now, at least for large language models, when people talked about their medical capabilities, they were actually using medical examination questions.

Speaker 4 (11:35):
So there's a question stem and then there's a multiple
choice answer.

Speaker 3 (11:39):
That's not medicine, but it is how we, you know, qualify our humans, right, human physicians, to be granted a medical license. So we kind of used that for a long time as a surrogate or a bellwether. But it wasn't.

Speaker 1 (11:52):
Could it pass a test to be a doctor, rather than could it actually be effective at acting as a doctor?

Speaker 3 (11:57):
That's interesting, right. And we were able to show very early on with GPT-4 that these models outperform physicians on these multiple-choice tests. But there's all kinds of caveats there. Is that really medicine? Has it seen some of that data in its training? Assuredly.

Speaker 4 (12:12):
Yes? Right? And is that useful?

Speaker 3 (12:14):
I think those questions came up. Now, in practice, it's estimated that ten to twenty percent of AI interactions with these common chatbots like ChatGPT are around a medical use case. So we know that someone is getting value out of that somewhere, right, and we see it with our own eyes. So how do we bridge the gap to something slightly more realistic, in terms of not giving

(12:37):
you all the information up front, just like we would in real healthcare? One of the principal thoughts around the study was: is there a way to take advantage of the incredible capabilities that these models have in medical diagnosis

Speaker 4 (12:49):
And knowledge but also push it a bit further.

Speaker 3 (12:53):
and not have it kind of just be a question-answering machine. And so we thought, can we have several versions of the model act as different humans (this is that idea of an agent) and give them jobs? One job is to look at the economics of the tests that you're trying to order. One is to question your next decision point. So the

(13:15):
information isn't just in and out with one model, it's actually in and out through a system of models. And we showed that no matter what model you use, whether it's Google's model, whether it's OpenAI's model, whether it's an open-source model, it improves that diagnostic capability on these extraordinarily challenging diagnostic tests.

Speaker 1 (13:32):
So you had ten co-authors on this study, and, you know, as we talked about, when it was released it took the world by storm. So, I mean, how did you go about designing the study, what was the hypothesis, and what did you find?

Speaker 3 (13:47):
So this was a cross-Microsoft collaboration, but Harsha Nori, who is the lead on this, really wanted to say, you know, we have a lot of evidence that these models perform well on these standardized tests, and then we see the real-world situation where that's not how people present. They don't show up with, hey, these are all my tests, these are all my problems, and these are the four

(14:07):
choices of what I may have, right. So we took what are essentially some of the most difficult cases out of the New England Journal and structured them in a way that requires a model to ask for more information or order tests, just

Speaker 4 (14:20):
Like a physician would.

Speaker 3 (14:22):
The hypothesis was that that would be interesting in and of itself, but then what if we also put humans through that same system? In other words, here's the first step: headache. Okay, what do you do next?

Speaker 1 (14:33):
Well?

Speaker 4 (14:33):
Do I need to ask more questions? Do I need
to order a test, et cetera, et cetera.
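
(To picture what that stepwise presentation looks like in practice, here is a minimal sketch of a case where findings stay hidden until the solver asks for them. The fields, findings, and costs-per-step are hypothetical placeholders, not cases from the study.)

```python
# Hypothetical sketch of a sequential case: findings are hidden until the
# solver explicitly asks for them or orders the matching test.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SequentialCase:
    presenting_complaint: str
    hidden_findings: Dict[str, str]            # test name -> result (placeholder data)
    revealed: List[str] = field(default_factory=list)

    def order(self, test: str) -> str:
        """Reveal one piece of information, as a physician ordering a test would."""
        self.revealed.append(test)
        return self.hidden_findings.get(test, "normal / not available")

case = SequentialCase(
    presenting_complaint="headache",
    hidden_findings={
        "blood pressure": "severely elevated",
        "head CT": "no acute findings",
    },
)
print(case.presenting_complaint)       # all the solver sees at step one
print(case.order("blood pressure"))    # each further piece of information costs a step
```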

Speaker 3 (14:37):
One of the really brilliant outcomes here was that having that system of agents, as opposed to just a single model, allowed us to have a more realistic understanding of the capabilities. In other words, if I wanted to know the answer and I'm a chatbot, my answer could be, let's order every single test that there is, and that would probably get you the right answer.

Speaker 4 (14:57):
Is that feasible?

Speaker 2 (14:58):
No?

Speaker 4 (14:59):
Right?

Speaker 2 (14:59):
Yeah.

Speaker 3 (15:00):
So forcing it to think about resources, the cost of the care, actually found a very interesting, what we would call, Pareto frontier of capability under constrained resources. So they were actually getting to incredible diagnoses very, very accurately, but also cost-efficiently, and that was really one of

(15:20):
the biggest takeaways from this work.
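
(For readers curious what a Pareto frontier means in this accuracy-versus-cost sense, here is a minimal sketch; the strategy names and numbers are made-up placeholders, not results from the study.)

```python
# Hypothetical illustration of an accuracy-vs-cost Pareto frontier.
# A diagnostic strategy is "on the frontier" if no other strategy is
# both cheaper and at least as accurate (placeholder data only).
from typing import List, Tuple

Strategy = Tuple[str, float, float]  # (name, average cost per case, accuracy)

def pareto_frontier(strategies: List[Strategy]) -> List[Strategy]:
    frontier = []
    for name, cost, acc in strategies:
        dominated = any(
            other_cost <= cost and other_acc >= acc and (other_cost, other_acc) != (cost, acc)
            for _, other_cost, other_acc in strategies
        )
        if not dominated:
            frontier.append((name, cost, acc))
    return sorted(frontier, key=lambda s: s[1])  # cheapest first

# Placeholder strategies: an unconstrained single model, a cost-aware agent
# panel, and the "order every test" extreme mentioned in the conversation.
candidates = [
    ("single model, no constraints", 9000.0, 0.70),
    ("agent panel with cost guard", 4000.0, 0.80),
    ("order every test", 20000.0, 0.82),
]
print(pareto_frontier(candidates))
```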

Speaker 1 (15:22):
Just to make it more concrete for our listeners, can you kind of set up one of these cases, as though it were an episode of House, dare I say, and then what the human doctors did and what the AI agents did, and then how you compared that performance?

Speaker 3 (15:39):
Let's just say it was someone who had easy bleeding that was unexpected. They were brushing their teeth and they started bleeding, and it was kind of unusual, and they noticed that they were getting a lot of bruising. And there's just a certain battery of tests; I think that was pretty comparable on both sides in terms of what they ordered. But continuing to...

Speaker 1 (15:55):
That being what the AI ordered and what the human doctors ordered?

Speaker 4 (15:57):
Human and AI, pretty much, right.

Speaker 3 (15:59):
So in the first few steps, I think there was a lot of similarity, which is expected. Where we started to see early divergence was because of that agent setup. Humans did kind of jump to more advanced tests more quickly, more expensive tests, and that was interesting, because the models were able to kind of get to the next step with a battery of less expensive tests. So we thought

(16:19):
that was kind of interesting, starting to see some divergence. And then, to be fair to the humans, they're still kind of handcuffed. In other words, they're just getting text feedback as they're interacting with the system, whereas when I'm with a patient, I'm seeing them, I'm able to kind of take some cues, I'm able to examine them. So there were some limitations there. But nonetheless,

(16:40):
once it got to the stage where you had a differential diagnosis, so a list of likely things, more often than not the model was ranking them in a much more data-driven order that ultimately led to the correct diagnosis much more quickly. Whereas, you know, as you would with humans, with these limitations, you're kind of going down some rabbit holes, you're maybe not ordering them in

(17:02):
the best order, and so you're kind of going down other paths that end up increasing the time or expense, or potentially leading to the wrong diagnosis.

Speaker 2 (17:15):
After the break: how the multi-agent system, the diagnostic orchestrator, actually works. Stay with us.

Speaker 1 (17:38):
I put the study through ChatGPT, which described the diagnostic orchestrator as like a virtual team of five doctors, each with a different role. One lists possible illnesses, one chooses the best tests, one plays devil's advocate, one watches the budget, and one checks the quality of everything. The team talks it out step by step and decides what to do next.

(17:58):
Is that a fair summary? That's exactly right. And you can have infinite numbers of those agents.
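
(To make that division of labor concrete, here is a minimal sketch of how a panel of role-prompted agents could be wired together. The role names, prompts, and loop structure are illustrative assumptions, not Microsoft's actual orchestrator.)

```python
# Minimal sketch of a multi-agent diagnostic loop (hypothetical). Each "agent"
# is the same underlying model prompted with a different job; they share one
# running discussion of the case and take turns until someone commits.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    name: str          # e.g. "hypothesis", "test-chooser", "devil's advocate"
    role_prompt: str   # system-style instruction describing the agent's job

@dataclass
class CaseState:
    findings: List[str]                                   # information revealed so far
    transcript: List[str] = field(default_factory=list)   # the agents' debate

def run_panel(agents: List[Agent],
              state: CaseState,
              ask_model: Callable[[str], str],
              max_rounds: int = 5) -> str:
    """Let each agent speak in turn until one proposes a final diagnosis."""
    for _ in range(max_rounds):
        for agent in agents:
            prompt = (
                f"{agent.role_prompt}\n"
                f"Known findings: {state.findings}\n"
                f"Discussion so far: {state.transcript}"
            )
            reply = ask_model(prompt)
            state.transcript.append(f"{agent.name}: {reply}")
            if reply.startswith("FINAL DIAGNOSIS:"):
                return reply
    return "No consensus reached"

# Usage with a stubbed model call; a real system would call an LLM API here.
agents = [
    Agent("hypothesis", "List the most likely diagnoses."),
    Agent("test-chooser", "Pick the single most informative next test."),
    Agent("challenger", "Argue against the current leading hypothesis."),
    Agent("cost-guard", "Flag any test that is unnecessarily expensive."),
    Agent("checker", "If the evidence suffices, reply 'FINAL DIAGNOSIS: ...'"),
]
state = CaseState(findings=["easy bruising", "gum bleeding while brushing"])
print(run_panel(agents, state, ask_model=lambda p: "Need more information."))
```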

Speaker 3 (18:03):
I think these five were just kind of scratching the surface of what's possible. I will say, just quickly, that I was incredibly happy to see that the curmudgeon agent, we called it, or the devil's advocate agent, was helpful, because you get into these groupthink situations, and it's kind of fun to watch a model argue with other models about some of the decisions being made and

(18:24):
questioning the steps. So, where the models fall short today is outside of the text domain. And what I mean by that is, models are incredibly good at understanding medical concepts as they're communicated in text form, but when you get into the images and genomics and waveforms and all the other types of ways that we take care of

(18:46):
our patients, the models are vastly underperforming humans. And a good example of that is, if I needed to look at a chest X-ray in one of these diagnostic steps, and the model had to interpret the chest X-ray (it couldn't read the report, it actually had to look at the image), it would fall short and fail nine times out of ten. So we know that that's a significant gap. But on

(19:07):
the other hand, most healthcare, right, eighty percent of physician or patient interactions with their healthcare systems, involves some kind of other information, like an ECG or a biopsy path slide or an MRI, for example. So I'm hoping to see agents that have those competencies included in the mix,

(19:29):
so we can start to really get to a place where the diagnostic environment meets how we're testing the systems.

Speaker 1 (19:36):
There was a study last year which I was fascinated by, which is that AI diagnosis in that study was better than human plus AI. In other words, it was a study where you would assume, or you would hope, that a doctor using AI would be better than just an AI diagnosis alone. But in fact, the human-plus-AI model

(19:56):
was worse than the pure AI model. And one of the conclusions from this was maybe that the doctors just didn't want to listen to what the AI was telling them. But, I mean, did you see that study, and did it give you pause?

Speaker 3 (20:07):
For more than a decade we've been kind of dealing with this unexpected result. This goes, again, all the way back to the earliest days of applying at least some of the powerful deep learning systems in healthcare. We have consistently seen that, in other words, in whatever setup, the AI, if you just leave it alone, typically does better than the human plus the AI, or

Speaker 4 (20:28):
The human alone.

Speaker 3 (20:29):
Now, is that an indictment of human ability, or is it more: have we set this up in a way that doesn't favor the real world, or have we not figured out the ideal human-computer interaction, or which tasks we should be offloading to the system versus the tasks that we should be collaborating with the system on? I think that's

(20:51):
really where the exploration is that I'm interested in, because I still hold out hope, and sort of some sense of self-preservation, that there is a future where the two are better. Just how to offload what job, and what sort of system that ultimately becomes: maybe it's five agents, maybe it's ten, maybe it's a thousand.

(21:15):
You know, we don't know the answer yet. We're just barely scratching the surface. But in three years' time, I expect this to be fairly common, that clinicians of all types will be working alongside, and/or even consulting with, some of these systems for the care of their patients.

Speaker 1 (21:30):
And what is the adoption rate today? I mean, what would need to happen for this, you know, paper that you've written and the system that you developed to be widely deployed in US or global healthcare?

Speaker 3 (21:42):
In a very practical sense, there is a lot of regulation around this, and regulation requires very rigorous study and evidence and real-world deployment, all the things that you would expect, right, if your, you know, care team is using some of these things to take care of you and your health problems. Generating that evidence, working with policymakers, trying to figure out exactly what evidence

(22:04):
would get to the point where we can say definitively this is at standard of care or beyond, and it should be used, and here's how you use it. Those are very mechanical, but they're very important. It may also require a change in how we approach the regulation of medical software, because these kinds of systems are challenging the traditional software that we have used for decades in healthcare.

Speaker 4 (22:27):
Right, they're very different.

Speaker 3 (22:28):
They're non-deterministic; they have moments of brilliance and moments of, you know, stupidity, I should say, right? You've seen these kinds of things. And so how do we actually design a system where it's safe, effective, and actually improving outcomes?

Speaker 4 (22:41):
And that's ultimately the evidence we have to generate.

Speaker 1 (22:44):
Yeah, I mean, beyond stupid mistakes, how do you see the risks here? I mean, we're seeing this research around, you know, the problems of cognitive offloading with AI, some suggestions that if you use AI too much you become dumber and deskill yourself. I mean, is there a risk of deskilling doctors? Like, what are some of the

(23:04):
maybe intangible but nonetheless medium-term risks that we should be considering here?

Speaker 3 (23:10):
What they refer to as skill atrophy is real, and we've seen this in various other disciplines too. I think it will also require a shift in how we think about and perform our knowledge work jobs. And one way this has been sort of looked at is via the idea of metacognition. So rather than you having to

(23:31):
be the central source of decision-making, are there things that you can manage? So imagine you managing a team of these agents. You have a goal, but you're offloading some of the cognitive tasks to those agents. Those are some of the early discussions around it. But I fundamentally believe that everyone that's in a knowledge work

(23:54):
industry or role will have to rethink how that role evolves in the future. And this is kind of that first step, at least for us in the healthcare space, which is: do you need to memorize all these facts, or do you just need to be able to have the right judgment and know where the models are good and not good, and be able to fill those gaps and manage it like you would a team,

(24:16):
like any manager would know, one-oh-one: this person's good at this, they have some weaknesses here, right, I'm going to assign this task to them versus this.

Speaker 4 (24:24):
You know.

Speaker 3 (24:25):
That's that metacognition world where I think we're rapidly heading, and healthcare is going to have to figure out a way to do that as well.

Speaker 1 (24:32):
Yeah, I mean, the other question around deployment is conflict of interest, right. So the previous research I've seen is all around AI versus human doctors. But this element you've added is cost as well. It's not just outperforming, but it's outperforming at less cost in terms of tests, which is a really interesting element, but it adds the potential

(24:54):
for major conflicts of interest on both sides.

Speaker 2 (24:56):
Right.

Speaker 1 (24:56):
So for example, I'm British, grew up with the NHS, and one of the consistent themes of the NHS was death panels: are there bureaucrats deciding when people should die? What is the appropriate level of care to give to people to prevent them from dying, given that it's a drain on a public budget? That's in the UK. Here

(25:17):
in the US we have this for-profit healthcare model where there is an incentive such that, if you're insured, you sometimes worry that your physician or healthcare system is pushing you through numerous medical procedures because ultimately it's a profit center, and you may not actually need them. So how do you begin to grapple with those problems

(25:38):
when you think about a system like this.

Speaker 3 (25:41):
These are problems that have existed, as you know, even before AI, and I think that the responsibility for those of us kind of generating the evidence and the capabilities, and kind of displaying the rationale behind how these things work or don't work and where they work, is a conversation entirely separate from both the economics and the cultural, societal

(26:04):
aspects of just how we deliver care. I think it wouldn't be a controversial statement to say, at least from the US perspective, that our healthcare system is not ideal, right. And that's true whether you're in a capitated system, a fee-for-service system, or a government-based system like the VA. I have to hope that the better angels prevail here. But I agree with you and share your

(26:26):
concerns that in the wrong hands, or with the various
misalignments that happen in these systems at all different levels,
we could end up causing some disruption in a way
that we aren't hoping to see.

Speaker 1 (26:39):
In the end, there was an interesting blog post that you wrote around cancer care, and it really struck me, because, I mean, I don't want to put words into your mouth, but as I read it, if you are one of the lucky few who gets to go to one of the great cancer centers like MD Anderson when you're sick with cancer, and have access

(27:00):
to these cross-field panels of experts who, as you mentioned earlier, are sub- or sub-sub- or even sub-sub-sub-specialized, you have a measurably better outcome, whereas in fact most people in the US, and certainly almost all people practically speaking globally, don't have access to these cancer centers. Talk about that, and about this idea of the multi-agentic

(27:23):
AI and how it sort of reflects or refracts what
we've been talking about with the diagnosis piece.

Speaker 3 (27:29):
Yeah, this was a very important one. So, yeah, thanks for bringing this up. I think a lot of people don't know some of the inner workings of healthcare, and where some of the really big bottlenecks are in terms of getting the best possible outcome, and one of them is in cancer care. As you're pointing out, some of the leading centers, particularly in larger cities, have the ability to bring specialists from all different disciplines together to

(27:53):
discuss the patient's care, and that's called the tumor board, or multidisciplinary tumor board. The reason that not everyone can do that is not just because they maybe don't have that specialist in-house, but also because of the massive amount of prep time it takes to gather all the information. It's not just the patient's data that you have to gather. You have to gather which clinical trials are new and

(28:16):
available and which this patient is eligible for, what the latest literature says. And someone has to go through all that information, prepare it, and then present it to a group. And what we found from ASCO, which is the large society for cancer care in the US, was that it takes between two and a half and three and a half hours of preparation time per patient, and some centers

(28:38):
run thousands of these tumor boards a year, and those are the ones that have the most resources and certainly the most access. The idea of AI, fundamentally for me, and the reason I'm in this field, is that I want to democratize that experience for everyone, increase the access. So no matter where you live or what you do for a living, you should have that same level of

(29:00):
precision when it comes to your healthcare. And so this was the first step toward that. I do think this is going to continue to evolve. Back to our conversation around managing a team of experts: as your primary physician, could I call on a team of expert agents to help walk through some of the things that we might not be considering in the fifteen minutes we have together

(29:21):
once every six months, or whatever that looks like? I'm very hopeful that, given the right circumstances and the way the technology is progressing, we're going to get to a place, I think, in a perfect world at least, where the access for every patient is equivalent to that of those who have access to the best resources.

Speaker 1 (29:41):
I mean, you mentioned that twenty percent of all AI searches, or up to twenty percent of AI searches, are around medicine, which is fascinating. I didn't know that. But there are, of course, other people who don't want AI in healthcare settings, or who are worried about their human doctor or their primary care physician being replaced by an unfeeling machine. What do you say to them?

Speaker 3 (30:04):
It's interesting. I think, going all the way back to the earliest days of search, that stat was still about the same: up to twenty percent of Internet searches were healthcare-related. And we're seeing two interesting trends. One, from the economists, showed that of the searches that are going on today in the typical search engines, the category that's going down the fastest is healthcare. Isn't that interesting? Because

(30:28):
where are people going?

Speaker 4 (30:29):
Then? Well, they're probably going to the models. So I
actually push back on that.

Speaker 3 (30:33):
I think that most people want to be educated about their medical condition, and they want to feel safe and free to ask questions about their own healthcare in an essentially infinitely patient, knowledgeable sort of oracle environment. And again, we're not there yet, so I don't want to make that claim. But even me, I put

(30:54):
my data into these models and ask questions about it, and I walk away sometimes learning something, or at least what I should be asking my physicians. So again, would I rather do that than any healthcare? Not me, personally. I do want to have that relationship with my physicians, but I also want to walk in much more knowledgeable, so I feel like we're on a peer level when we're speaking about my care decisions.

Speaker 4 (31:16):
Matt, thank you, Thank you so much.

Speaker 2 (31:17):
And that's it for this week for TechStuff. I'm

(31:43):
Karah Preiss, and

Speaker 1 (31:44):
I'm Oz Woloshyn. And this episode was produced by Eliza Dennis, Tyler Hill, and Melissa Slaughter. It was executive produced by me, Karah Preiss, and Kate Osborne for Kaleidoscope, and Katrina Norvell for iHeart Podcasts. The engineer is Behit Fraser, and Jack Insley mixed this episode. Kyle Murdoch wrote our theme song.
Please do rate, review and reach out to us at

(32:05):
TechStuff Podcast at gmail dot com. We love hearing from you.
