February 17, 2025 • 28 mins

Mel interviews Graham Walker of MDCalc fame. This episode explores the evolving role of AI in healthcare, focusing on a study that compares ChatGPT's performance to that of human doctors in managing complex medical cases. We discuss the implications of these findings, the potential for misinformation, and the future of AI integration in clinical practice.

• Examination of the BMJ study on AI vs. doctors
• Real-world application of AI in patient care 
• Concerns around AI misdiagnosis and misinformation 
• Future prospects of AI in healthcare settings 
• Impacts of AI integration on workforce and private equity 
• Human-AI collaboration as a path forward

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 3 (00:12):
All right, so tell us who you are.
I am Graham Walker.
I am an emergency physician here in sunny San Francisco, California.
I'm about half-time clinical now, Mel.
These days the other half of my time is spent doing kind of AI tech transformation for The Permanente Medical Group, which is the medical group, you know, 10,000 physicians for Northern California Kaiser Permanente, and then created a website, MDCalc,

(00:36):
used by a lot of ER doctors as well, and a new company called OffCall, trying to improve physician burnout. Entrepreneur, troublemaker, and I like talking about AI as well.
That's the other new exciting thing that I think we're all trying to figure out.

Speaker 2 (00:52):
I like that, very humble: MDCalc, that a few people use. A handful.
Dude, it is amazing what you guys have done.
I've watched it from when you were babies to now.
It's one of the most used things in all of medicine, and not just in medicine.
It's incredible.
So good work.
Thank you. You should be very proud.
Let's talk AI.
I've followed you for a long time.

(01:15):
You are a geek in the best sense of the word, and so I'm loving that you're into AI, because obviously this is the next big thing. Here on the show we've been talking about AI.
We talked with some Israeli students (all medical students when they did the study; they're now in residency) about AI and test-taking, and their study found that it was better than Israeli docs at sitting exams.

(01:36):
So this new paper came out, and it's titled "ChatGPT versus doctors on complex cases of the Swedish family medicine specialist examination: an observational comparative study."
This was in BMJ Open, and the date I'm trying to find, which I cannot find.

Speaker 3 (01:51):
Accepted November 22nd 2024.
Great.

Speaker 2 (01:55):
Okay, so can you tell us about this study, and then we'll get sort of your general insights about where you think this is going? Because since the last one we did, we had a lot of people asking: what does this mean for medicine in the years ahead?

Speaker 3 (02:08):
Yeah.
This paper is almost a year old at this point, Mel.
It looks like it was received March 6th of 2024.
And, as you know, AI is moving so quickly that even the publication lag time, which I think is improving in some of

(02:29):
these studies, is still insufficient, right?
So this used GPT-4, which you can still technically access, but ChatGPT actually won't use GPT-4 anymore.
They'll now use 4o, which is a more recent, more modern model.
And they changed the question they asked, which I thought was

(02:51):
really important here.
They stopped asking multiple choice questions.
They stopped giving these models access to a question stem and then, you know, pick A through D as an answer choice, which the models are pretty darn good at in many specialties, including emergency medicine.
Instead, they simulated what is a way more realistic scenario

(03:14):
which is, hey, we're going to give you a question stem and ask you, hey, how would you manage this patient or what would you do next?
So you know, I think there are a couple examples.
I looked in the supplementary content.
One's like, I think it's like a four-year-old with constipation, or like an elderly patient with pneumonia, and some goals of

(03:35):
care have been discussed, but not a ton.
Those are what you and I practice with every day.
We have kind of an open-ended, undetermined patient.
We don't know what's going on with them yet.
These were graded not on a correct answer, but you essentially got points in this Swedish family medicine exam for tackling a particular subject.

(03:57):
So you might get a point for asking the four-year-old about their diet and asking a family history.
You might get another point for asking, you know, specifically about whatever milk intake or cheese intake or whatever four-year-olds get constipated from.
And so instead of there being just a right answer, these
(04:19):
Swedish family medicine docs get graded on the comprehensiveness of their answer, and that's going to include all the stuff: differential diagnosis and social determinants of health and social factors.
And it found that GPT-4, again a slightly older model, did worse

(04:39):
than even average family medicine doctors, and did way worse than kind of expert family medicine doctors.
And to me, I highlighted this case, this study.
I liked it a lot because this is so much more representative of what we do every day.
In addition, these tools are being compared

(05:01):
to physicians much of the time, but it takes a doctor to get the content out of the human being, the patient.
So you have to know the way to ask the question, to ask if they're having pleuritic chest pain, and so it way more simulated a real-life patient encounter and it felt way more

(05:24):
face valid to me that these tools actually aren't as good as all the other headlines are saying.
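
A minimal sketch of the coverage-style grading Graham describes, where an answer earns a point for each rubric topic it touches rather than being marked simply right or wrong; the topics and keywords here are invented for illustration and are not the exam's actual rubric:

```python
# Illustrative coverage-style scoring: one point per rubric topic addressed,
# rather than a single right/wrong answer. Topics and keywords are invented.
RUBRIC = {
    "diet history": ["diet", "fiber", "milk", "cheese"],
    "family history": ["family history"],
    "red flags": ["weight loss", "vomiting", "blood"],
    "social factors": ["school", "toileting", "stress"],
}

def score_answer(answer: str) -> int:
    """Return one point per rubric topic the free-text answer touches."""
    text = answer.lower()
    return sum(
        any(keyword in text for keyword in keywords)
        for keywords in RUBRIC.values()
    )

example = ("I would ask about diet and fiber intake, take a family history, "
           "and screen for red flags like vomiting or blood in the stool.")
print(score_answer(example))  # -> 3 (of 4 topics covered)
```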

Speaker 2 (05:30):
I think that's a really great point.
They've gotten so good at exams and, of course, they've gotten good at exams because they have the entire world's knowledge at their fingertips.

Speaker 3 (05:38):
And they're trained on exams too, yeah.
Yeah.

Speaker 2 (05:41):
Yeah, that's fine.
Okay, so I was thinking exactly this same thing: how many times have you had a patient come in that says, my ear hurts, my hair hurts, I've got this bump over here, yesterday I had crushing retrosternal chest pain?
You're like, what? What? Pick one, maybe two. Yeah, yeah.
And you do spend so much time trying to tease out what are the

(06:03):
things that matter.
So I'm really interested in this, moving it to the real-world circumstance where maybe the AI listens to the patient and you listen to the patient, and you come up with your differential and it comes up with its differential.
You know, part of me wants the humans to win.
I really want to kick AI's ass, but then the other part of me is like, no, the way we get better at looking after people

(06:26):
is if AI gets really fricking smart and can help us and work with us.
So I'm sort of torn.
I want the humans to win, but we need all the help we can get.
If we can reduce the number of misses, that's great.
Do you think that we are getting to a place with the new models?
I mean, there's 4o, there's 4o mini.
There are newer models than that.
I just was saying to somebody, hearing Sam Altman or somebody

(06:47):
saying some of the newest, latest, greatest models cost about a thousand to three and a half thousand dollars per prompt, because that's how much effing electricity they use.
So where are we headed?
Give us your prognostication.

Speaker 3 (06:58):
I've heard several prognostications.
One is that we are going to need medical-specific models.
You're paying a price to ChatGPT because ChatGPT can answer questions about Lynch syndrome, but it can also give you a recipe for meatballs, and it can also tell you about

(07:21):
Napoleon, and it can tell you about Napoleon Dynamite, because it can do all of these things.
You're paying a computational and an electricity price for that, and so not only would a medical model potentially be cheaper and could run on smaller hardware, but it would also potentially be more accurate, because it's not going to have knowledge about

(07:43):
other things besides medical practice.
I've heard two other things.
One is that, you know, there are some rumors that, like, GPT-5 didn't go very well, there was some model collapse, and so they're having to use different techniques to get further advancements as well. And I would agree with you.

(08:05):
I think the best of both worlds is a human that is using AI to help us make sure we're not missing stuff.
You know, I want like a little guardian angel on my shoulder that's kind of watching over me, and it's like, whoa, Graham, you're about to discharge somebody, did you consider they might have X or Y or Z?

(08:28):
The challenge is, Mel, the medical training system is so good right now, and has been for many years, that most of the time the doctor is right.
I mean, how often when you see somebody with an ankle sprain, how often are you wrong that they have an ankle sprain, especially then if you get an x-ray that they may or may not

(08:50):
need?
Now you've kind of confirmed, yeah, you have an ankle sprain.
And so my worry is that these tools, if not used properly or not used by trained medical professionals, they might generate more work and more cost and more waste.
Because, you know, maybe it will say, well, hey, did you

(09:16):
consider it's not just an ankle sprain, but it's a septic joint or something, right, something kind of crazy that probably, you know, is not really within the realm of the differential diagnosis.
But you know, you could imagine if somebody doesn't know how to differentiate an ankle sprain from a septic joint, they might just listen to the LLM and say, well, we should consider this.
Therefore, I'm going to stick a needle in your ankle, I'm going

(09:37):
to order all these additional tests, instead of the right thing to do, which is reassure the patient.
Here's some Motrin, here's an ACE wrap.

Speaker 2 (09:51):
Yeah, I think that's a significant concern, particularly once we drop the lawyers in there.
So if you've got AI scraping the data of the chart that's being created by you and the AI, and the AI says, well, PE is a possibility. In which patient is PE not a possibility? It's always possible. And you always see, like the medical student, the really smart medical student, like, this could be a PE, and you're like, it could be, but no, we're not going down there.
The sensitivity of these LLMs could be so high, like you said,

(10:11):
it could actually make things a lot worse.
We're trying to do less testing.
We're trying to focus in on what's possible, and our miss rates are low enough on this stuff, and we don't want to go back to scanning everybody.
So that is a concern, and maybe that gets fixed over time.
What about in your work on MDCalc, for example?
Are you finding utility for AI helping you create that massive

(10:32):
database that you have?

Speaker 3 (10:34):
We are working on some generative AI tools that are still in kind of internal development.
They are not necessarily around building new scores, because generative AI is really not going to be good at that.
That's more kind of on the other side of AI, which is predictive AI, if you think of kind of two different camps, but

(10:55):
helping people find the right scores, or helping people find information in this exponentially accelerating, growing body of medical knowledge that is harder and harder to manage and understand.

Speaker 2 (11:05):
We are, internally, and we'll release it soon, using an LLM, GPT-4o mini, on our textbook to help us with search, and it's been really good.
But there's one thing that one has to remember, which is that it is

(11:26):
trained on the internet, on all of the knowledge.
So we say, just look at our textbook, and it refuses to do that.
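
A rough sketch of how that kind of textbook-only search is typically wired up: retrieve passages from the book and instruct the model to answer strictly from them. The `search_textbook` and `llm_complete` functions are hypothetical stand-ins, not any specific product's API; the system prompt is the constraint Mel describes the model slipping out of.

```python
# Sketch of retrieval-constrained search: the model only sees passages pulled
# from the textbook and is told to answer strictly from them.
# `search_textbook` and `llm_complete` are hypothetical stand-ins, not a real API.

def search_textbook(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: return the k most relevant textbook passages."""
    raise NotImplementedError  # e.g. keyword or embedding search over the book

def llm_complete(system: str, user: str) -> str:
    """Hypothetical LLM call; swap in whatever chat-completion client you use."""
    raise NotImplementedError

def answer_from_textbook(question: str) -> str:
    passages = search_textbook(question)
    system = (
        "Answer ONLY from the numbered passages provided. "
        "If they do not contain the answer, say so. "
        "Cite the passage number you used."
    )
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return llm_complete(system, f"Passages:\n{context}\n\nQuestion: {question}")
```

Even with that instruction, the model's general training can still leak into the answer, which is the tweaking and tweaking described next.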

Speaker 3 (11:34):
Oh interesting.

Speaker 2 (11:35):
It just cannot stop looking elsewhere.
So we have to keep tweaking and tweaking and tweaking.
And there was a great example that Mike Weinstock came up with, not on our search, but on another LLM, and he asked: what is a good muscle relaxant in pregnancy?
And it came back with rocuronium, which is technically true, absolutely true, and also terrifying. For the non-medical

(11:58):
people listening: rocuronium is actually a paralytic agent we use to paralyze people in order to put the tube in to breathe for them.
So if you gave a pregnant woman rocuronium, yes, indeed, you would relax their muscles, and they would stop breathing and die.
So we keep finding these issues where, when you train on a big data set, even if you ask it to look just at a small data set,

(12:20):
it can't help itself.
It sort of jumps out of the textbook.
We've asked it, please, again, tell us where you got this information, and sometimes it doesn't.
It's like, I don't know what you mean. I can't tell you where I got it, I just know it.
So are the newest models any better at this hallucination stuff, or is this just intrinsic to the technology?

Speaker 3 (12:39):
Both. They are intrinsic to the technology, right?
These tools are just trying to predict; it's just math.
And then we don't see the math.
They convert the math back into English.
But you know, these tools are just trying to predict the best next word based on your query or your prompt.
So while I remember GPT-3.5 had a lot of hallucinations, and

(13:03):
Google Gemini especially I found really guilty of hallucinating a lot, it does seem like I'm finding hallucinations less and less, especially hallucinations that I can confirm, right?
I mean, I'm not seeing it come up with fake journal articles as much, even when I'm trying to trick it and force it to do that.

(13:24):
So I think that it's improved, but it's certainly not perfect.
I kind of think of this, and maybe the future versions as they improve, a little bit like Wikipedia, Mel, in that anybody can technically change Wikipedia.
So you could go to the Wikipedia page on appendicitis

(13:44):
right now, and someone could have edited it and changed it to, oh, it's in the left lower quadrant, not the right lower quadrant.
But the odds are that it is actually unlikely that Wikipedia is wrong, and so that's how I kind of think about this: you have to have a little bit of skepticism, but much of the time it is

(14:07):
going to be right.
I wouldn't call it trust but verify; I'd say, consider but verify, because you can't yet say how accurate it is, but much of the time it's correct.
My colleague Jonathan Chen, an informaticist down at Stanford, talks a lot about this not being hallucinations but

(14:28):
confabulations, meaning these things don't know that they're making a mistake.
There are some techniques you can do to drive down the confabulation rate, by actually sending its response back to itself and saying, hey, does this seem correct?
And again, this is the weirdness of this technology: often, if you give its response back to itself, it can say, oh,

(14:52):
no, boy, we've made a mistake, and it can self-correct.
So that's one of the examples of, behind the scenes, how the tech companies are reducing the hallucination or confabulation rate, by not sending the data directly back to you.

Speaker 2 (15:07):
There's kind of an intermediary step that's being used, and that's, I think, part of the reason why the time from prompt to answer is increasing, if I understand it correctly.
I don't really understand the technology, but they basically are giving it more time to think.
Yeah, and part of that more time to think is that internal stuff.
It's like asking it that same question back to it a different way, or already checking it before you even check it.

(15:28):
Is that sort of what's happening in the background there?

Speaker 3 (15:31):
Yeah, and you know, there's this concept of prompting, where you kind of give it an instruction and tell it how you want it to behave.
So you know, one prompt could be respond like Donald Trump, another could be respond like Yoda, and it's going to behave differently.
You're going to get a different response.
And so one of the things that they are doing is, say you type in a medical question.
It will then do several analyses of that, and one might

(15:54):
say, hey, this appears to be a medical question.
You know, it might kind of have multiple steps in the prompt, so it's not responding immediately, but it might say, okay, we've identified: number one, this is a medical question.
Number two, the question appears to be about appendicitis.
Number three, I'm going to give an answer to the question.
Number four, create a prompt to test it.

(16:15):
Hey, you're an emergency physician, you're receiving this prompt about appendicitis.
Does this seem accurate?
And then, if the tool says, yep, that seemed accurate, then step five is actually send it back to the user.
So that's one of the ways that they're trying to get rid of hallucinations, by doing some internal self-checks.
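
A minimal sketch of the multi-step pipeline Graham describes: classify the question, draft an answer, send the draft back to the model to check itself, and only then return it. The `llm` function and the prompts are hypothetical stand-ins for illustration, not any vendor's real internal prompts or API.

```python
# Sketch of the multi-step pipeline described above: classify, draft,
# self-check, revise if needed, then return. `llm` is a hypothetical
# single-call stand-in for a chat-completion client.

def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your actual client."""
    raise NotImplementedError

def answer_with_self_check(question: str, max_revisions: int = 1) -> str:
    # Step 1: classify the question so it can be routed appropriately.
    topic = llm(f"In a few words, what is this question about?\n{question}")

    # Step 2: draft an answer.
    draft = llm(f"Topic: {topic}\nAnswer this question:\n{question}")

    # Steps 3-4: send the draft back and ask the model to review it.
    for _ in range(max_revisions + 1):
        verdict = llm(
            "You are an emergency physician reviewing an answer for errors.\n"
            f"Question: {question}\nAnswer: {draft}\n"
            "Reply 'OK' if it looks accurate, otherwise describe the problem."
        )
        if verdict.strip().upper().startswith("OK"):
            break
        # Revise using the critique before anything goes back to the user.
        draft = llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            f"Reviewer feedback: {verdict}\nWrite a corrected answer."
        )

    # Step 5: only now does the answer go back to the user.
    return draft
```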

Speaker 2 (16:33):
And can you explain, you mentioned it before, this idea of the collapse of the LLM?
I've heard this term, and it's like it's run out of data? Is that what it means?
It's like it can't get any smarter because we've given it everything, or is there something else going on?

Speaker 3 (16:48):
I don't know the details of it, especially because it starts to get highly technical, and I think only people that are actually CS people know this level of specificity.
But yeah, it does seem like these tools require massive amounts of data to, kind of, quote-unquote, learn or be trained, and to some degree, they've run out of that data.

(17:11):
Now lots of people are talking about who has the most human-generated content.
Well, it's social media, right?
You have all these human beings providing free content all the time.
Some of that data is starting to get dirty with a bunch of AI garbage in it too.
So there are certainly some kind of tech commentators who

(17:32):
are saying, well, we've already poisoned the well.
Even if Facebook wanted to use all of its own data, they'd have to cut it off in maybe 2022 or something, before AI took over, because if you look on Facebook now, there's so much AI-generated garbage. And actually, if you click on the comments section, a lot of people think that many of those

(17:53):
comments are just bots, which is terrifying.

Speaker 2 (17:56):
Yeah. So they talk about training these, having the LLMs create their own content so that then they can train on their own content.
That to me seems like incest, and we know how that works.

Speaker 3 (18:09):
I agree.
It also seems like it would eventually remove all creativity and all nuance, or different ways of describing the same thing.
As you're training the model on itself, you know, it's starting to see words that are more and more popular, which it's going to

(18:30):
use more and more frequently, and if you run this enough times, you know, maybe you'll never have the word chest pressure again.
Every single time it'll just be chest pain, and it actually will lose the ability to understand what chest pressure is, or understand that that's kind of a synonym for chest pain, because it's heard the word chest pain so many times and it's starting

(18:51):
to forget the word chest pressure, almost.
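
A toy illustration of that homogenization effect, not real model training: if each "generation" is fit to a finite sample from the previous one, the minority phrasing's share drifts, and once it hits zero it never comes back.

```python
# Toy illustration (not real model training): repeatedly re-fitting a word
# distribution to its own samples makes the rare synonym drift toward zero,
# and once it hits zero it can never return.
import random
from collections import Counter

random.seed(0)
dist = {"chest pain": 0.9, "chest pressure": 0.1}  # "generation 0"

for gen in range(1, 11):
    # "Train" the next generation on a finite sample from the current one.
    sample = random.choices(list(dist), weights=list(dist.values()), k=50)
    counts = Counter(sample)
    dist = {word: counts[word] / len(sample) for word in dist}
    print(f"gen {gen:2d}: {dist}")

# Over enough generations (or with smaller samples), "chest pressure"
# collapses to 0 and the vocabulary has effectively forgotten the synonym.
```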

Speaker 2 (18:53):
The idea that these will be trained on social media is terrifying.
So you've got the bot problem, but also you have the human problem, which is people act like complete a**holes on social media because there are no social filters.
And we have this on our program all the time, and we're mostly physicians and clinicians, and people will say the most

(19:13):
horrible things when they don't realize that there are humans behind this.
So they'll say, Mel, this program is a piece of shit.
And then I'll email them, like, why do you think it's a piece of shit? And please don't talk to our support team like that.
And they're like, oh my gosh, I'm so sorry, I didn't mean to say it like that, I was just a bit tired after a night shift. But that's the kind of crap that happens on social media all the

(19:35):
time.
So it's like, we're training it.
If we do this, and I'm sure that everybody is, and I'm sure that Elon Musk is with Grok, we're training it on the most toxic part of humanity.
And so what comes out of that?

Speaker 3 (19:46):
I could not agree more, Mel.
There was a paper that just came out, I want to say a week ago.
It looks like it's called Medical...

Speaker 2 (19:52):
Large Language Models Are Vulnerable to Data Poisoning Attacks, and I can confirm that that came out January 8th, 2025, in Nature Medicine, the open access version.

Speaker 3 (20:03):
So, from NYU.
The lead author is a medical student that I emailed, who was just like, oh my God, you're killing us.
It's awesome.
So there's this thing called the Pile, which is a big, imagine like billions of documents on the internet, and a lot of the large language models are trained on this, because they've all kind of agreed this is like a good amount of content that you can use to create a large language model.

(20:26):
So what these authors did is they took the Pile, and then, I think the Pile has, like, I don't know, a billion documents in it, right, and they added just 50,000 medical misinformation documents.
A tiny, I mean, it's 0.001% of documents that were intentionally

(20:51):
misinformation.
So you know, whatever: Ivermectin cures cancer. Vaccines don't work.
They came up with some even more erroneous ones, like, you know, I don't know, like, oh, beta blockers can be used for GI bleeds, whatever it is.
And then they ran these models and asked them questions, and they found, with just that tiny fraction, 0.001% of data, you

(21:15):
could get the models to tell you, with full confidence, dangerously incorrect medical information.
And so I found that absolutely fascinating: if we are training it on Twitter or X, and we have a little bit of white

(21:36):
nationalist, Nazi stuff in there, it's going to have sentiments that no one would want, as truthful or accurate or, you know, like dangerous information in there, and this was specifically on a medical model.
So you could imagine, very dangerously, if doctors were using a tool like this and a bad actor wanted to cause problems

(21:57):
or cause misinformation, you could relatively easily, with just inserting a tiny bit of poison, destroy the entire system.
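
A toy sketch of the corpus mix being described, scaled down so it runs instantly: a handful of deliberately false documents folded into a large clean set at roughly the 0.001% ratio quoted in the conversation. This is only the mixing step, not the paper's actual training or evaluation pipeline.

```python
# Toy version of the corpus mix described: fold a tiny number of deliberately
# false documents into a large clean corpus at roughly the 0.001% ratio quoted.
# This is only the mixing step, not the paper's training or evaluation.
import random

clean = [f"clean medical document {i}" for i in range(1_000_000)]
poison = ["Ivermectin cures cancer."] * 10          # 10 per million ~= 0.001%

corpus = clean + poison
random.shuffle(corpus)

fraction = len(poison) / len(corpus)
print(f"{fraction:.4%} of the corpus is misinformation")  # ~0.0010%
```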

Speaker 2 (22:07):
That's fascinating, that so little data can have such a huge effect, and I think it can obviously go both ways.
So the white nationalists could get in there.
We saw from Google, one of their first iterations, that they just were so politically correct it couldn't make a white George Washington.
Yeah, that is so weird.
So it works both ways.

(22:32):
It seems that this is, it's kind of a terrifying revelation, that if you feed it a little bit of misinformation, whatever that misinformation is, it seems to highlight it for some reason.

Speaker 3 (22:37):
Well, and you know, even the stuff that wasn't, it was just silly.
Like, you know, the initial version of Google Gemini was trained on a bunch of Reddit data, and you know, sure, there's some dark sides of Reddit.
There have also just been some people who are just joking and being silly and being stupid, and you know, I mean, there's tons of screenshots online of, like, you know, Google's AI

(22:58):
recommending you add glue to pizza, that pregnant women should eat no more than two rocks per day.
Actually, it's interesting: you can actually go into the Reddit threads and find the users who said this stuff in the context of making a joke or being sarcastic or something like that.
But this has somehow been bubbled to the top of these

(23:18):
tools, maybe because Google said, oh hey, trust Reddit, listen highly to Reddit, maybe for a good reason.
I mean, often Reddit's a good place to, you know, read about product reviews or get opinions from real human beings.
But yeah, you can see there are major downsides if we don't understand this, or do this correctly.

Speaker 2 (23:40):
You can imagine, like, somebody saying, yeah, put glue in your pizza, it makes it taste better.
And I'd click on that and say, that's so funny. Click. But the AI is like, oh yeah, this is what humans must do.
It's like, no, we're joking. Crazy.
What do you see? So you work at Kaiser. How do you see implementing AI in the next few years?

(24:00):
We've heard about AI scribes starting to listen in to our conversations and helping us with our charting.
We've heard about the AI going through your charts and doing what we said at the beginning, it was like, Mel, did you think about PE in this patient, or something else?
Where do you see this going in the next few years?

Speaker 3 (24:16):
I would love to have AI scribes that not just write my note but help me with all the other stuff you have to do during an encounter, which is, like, write orders, come up with a diagnosis, add in your billing codes, add in your quality metrics junk of, like, oh, here's the reason the STEMI was delayed, or here's the

(24:38):
reason I didn't give 30 per kilo of fluids for sepsis. And I think those tools will be able to help us with that piece, to make it less onerous, and they'll be kind of teed up.
I mean, I would love to tell a patient, with my AI scribe listening, I am worried that you have appendicitis. Ding, it's now

(25:01):
collected, you know, a suggestion of appendicitis.
When I get back to my desk, I'd like to, you know, do a CBC, a chemistry panel, give you a gram of IV Tylenol, some four milligrams of IV Zofran, and do a CT scan to evaluate for appendicitis.
And I come back to my desk, I review those orders, looks good, I click sign.
That saves me another 10, 20 clicks, and maybe, you know, it

(25:27):
adds my sepsis fallout thing for my note as well.
So I think those are all pretty low-hanging fruit. They're kind of what I would consider low risk.
They're, you know, human in the loop, right? It's not doing something for me, it's teeing it up.
And then I'm still the one that's clicking sign on those orders or confirming the diagnosis, and it's something

(25:49):
that is an annoying part of my day but also helps get the work done of seeing a patient in the ER or the clinic or whatever.
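
A minimal sketch of that "teed up, human in the loop" pattern: the scribe drafts a diagnosis and orders, but nothing is placed until the clinician explicitly approves and signs. All names and structures here are hypothetical, not any real EHR or scribe vendor's API.

```python
# Sketch of the "teed up, human in the loop" pattern: the scribe drafts a
# diagnosis and orders, but nothing is placed until the clinician signs.
# All names here are hypothetical, not a real EHR or scribe vendor's API.
from dataclasses import dataclass, field

@dataclass
class SuggestedOrder:
    name: str
    rationale: str
    signed: bool = False

@dataclass
class DraftEncounter:
    working_diagnosis: str
    orders: list[SuggestedOrder] = field(default_factory=list)

    def sign(self, approved: set[str]) -> list[SuggestedOrder]:
        """Clinician review: only explicitly approved orders are placed."""
        placed = []
        for order in self.orders:
            if order.name in approved:
                order.signed = True
                placed.append(order)
        return placed

# Draft the scribe might tee up after hearing "I am worried you have appendicitis":
draft = DraftEncounter(
    working_diagnosis="suspected appendicitis",
    orders=[
        SuggestedOrder("CBC", "infection workup"),
        SuggestedOrder("Chemistry panel", "infection workup"),
        SuggestedOrder("CT abdomen/pelvis", "evaluate for appendicitis"),
    ],
)
placed = draft.sign(approved={"CBC", "Chemistry panel", "CT abdomen/pelvis"})
print([order.name for order in placed])  # the clinician remains the one who signs
```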
So I think those are the easy wins.
And then I think this year we'll start to see more people admitting that they're using ChatGPT to review cases, and kind

(26:10):
of deciding how they want to evaluate ChatGPT further.
You know, I mentioned the Wikipedia example.
The other example I've thought about is, like, you know, maybe ChatGPT is like a pretty good intern or a decent, you know, a really good R2 or something like that, where, oh, it has some really good ideas, but you as the attending are still

(26:31):
deciding if you should listen to those ideas or not.
I always think the best type of med student or the best kind of intern is like, they wanna do too much.
And then me, as the attending, I'm like, no, no, we're good, we don't need to do a lactate, or we don't need to do blood cultures.
I like that you're considering sepsis, but in this case let's

(26:53):
not do that.
So I like the idea of these models helping me consider all possibilities, not miss something, but then I'm obviously still the one in charge, deciding, yeah, we're not going to pursue mesenteric ischemia in this patient.

Speaker 2 (27:03):
Well, thanks for your insights.
I reserve the right to call you again when some more studies come out.
It is frustrating with this literature, because, what you said at the beginning, the delay between submission and publication, and this is going so fast that there are new models all the time.
So we always feel like we're talking about stuff that's six months or a year old, and that makes it a little difficult.

(27:24):
But so that's why it's nice having you sort of prognosticate about what might happen next.
I was hoping that this would just give us more time with the patients, that if it can do a lot of the busy work, we could have more time with the patients.
But then I thought, all of those for-profit hospitals are going to say, that's great.
Now let's fire three emergency physicians and you have to see 35% more patients.

Speaker 3 (27:45):
I'm like you.
Yeah, that's my fear.
I don't think that, in my medical group, we are physician-owned and physician-run, and I don't think that's the intent at all.
The intent is actually to keep our physicians from being burnt out, and staying with our medical group and not leaving the group or leaving medicine.
But you could imagine a private equity group or something

(28:06):
having a particularly different opinion about that.

Speaker 2 (28:08):
Yeah, I think it's a real problem.
That's a whole other discussion, which we're going to have with some experts soon, about private equity in medicine and is it destroying it?
So that will be coming up soon.
Graham, thank you so much for your time.
Thank you for all that you've done for emergency medicine, for MDCalc, which is an amazing thing.
If you haven't seen it, people, go check it out online. Come on, come on, it's a stupid thing to say thank you.

(28:30):
Mel, I'll talk to you soon.
Sounds great.