Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Tech It to the Limit, the humorous and surprisingly informative podcast that
(00:22):
makes digital innovation and healthcare as entertaining as it is relevant.
I'm Sarah Harper.
And I'm Elliot Wilson.
And we're here to pull back the curtain on the world of digital transformation in healthcare.
Don't worry, you don't need a medical degree to join in on the fun.
Just a sense of humor and a penchant for all things health tech.
So buckle up, folks.
It's time to Tech It to the Limit.
(00:46):
What's up, Elliot?
Hey, Sarah.
Welcome to Tech It to the Limit.
So excited to be here with you.
Same, likewise.
Talking about digital transformation in healthcare.
I mean, what else is there in life?
(01:06):
Nothing, nothing.
And I have two kids.
Yeah, I won't tell them.
But they don't listen to podcasts yet, so it's fine.
And even if they did, they wouldn't listen to this one.
By the time they're old enough to listen to podcasts, podcasts won't be a thing anymore.
So I think you're in the clear.
Well, I don't know about that.
Maybe they just download an RSS feed into their brains.
Oh, gosh.
Elliot's dog, the dog of the year, is a dog that's been in the world for a long time.
So why are we here, Elliot?
(01:27):
Like, why are we recording today?
Well, this is Tech It to the Limit.
I think it's worth telling people how we met.
Our listeners?
How we met.
(01:48):
I love this story.
It's an origin story.
We only met a few months ago.
We met down in San Antonio.
Down in old San Antone?
It was actually the day of the Alamo anniversary.
Are you kidding me?
I am not.
How could I forget?
I also love that you know that.
(02:08):
How could I forget?
What are you, like, Davy Crockett's great-great-great-grandson?
You're from Jersey.
That's, like, not even possible.
No, it was.
It was the anniversary of the Alamo.
And I remember this so clearly because I went for a run and happened to be running past
the Alamo that day.
And there were these people dressed up in old-timey clothes doing their darndest to
(02:29):
remember the Alamo.
And I was thinking to myself, why are these people protesting the American Telemedicine
Association?
I don't understand.
Do they want to go back to the 1800s?
We don't stand for anything controversial.
We are the future.
Telemedicine is bipartisan.
I don't get it.
But anyway, we met at the ATA where you were giving a fabulous poster presentation on your
(02:50):
work.
Oh, thank you.
Which you did not come to because you were getting ready to host the Innovators Competition,
which is so cool.
Yes, and it was a lot of fun.
Well, anyway, we met just then.
We hit it off.
I have come to respect both your intellectual gigantism.
(03:11):
Wow.
I am the gigantosaurus of intellects.
But also your amazing personality and positivity.
And I have enjoyed working with you on this project, and I can't wait to keep working
with you on this project.
Okay, now it's my turn to gush over you, Elliot.
So when we were at the Specialty Interest Group Breakfast, I was behind you in the buffet
(03:34):
line.
And I don't remember, but I was like, I want to be behind this guy.
I want to meet this guy.
You had a really amazing shirt that said, "Talk data to me."
And then when I found out that you were the chair of the digital transformation thing,
I was like, I need to be your friend.
We need to be friends.
And I learned a new word or acronym in real life because I was like, what's another virtual
(03:56):
coffee?
But maybe you said we could meet IRL.
And I was like, what's that?
So anyway, just the energy, the universe conspired to bring us together.
And we met, and we gave them the origin story.
Elliot, like, what are we doing this podcast for?
I mean, why are we even here?
Or do we just hang out together virtually?
(04:18):
Which is also great.
But I think what we've come to understand is what this world needed was another podcast.
100%.
There are not enough.
There are not enough podcasts in the world at all.
And we decided there needed to be another one.
No, we're doing this because we both love this space so much.
We both love health care.
We both love digital transformation technologies and how they apply here.
(04:42):
And we get excited about it.
As you like to say, we like to nerd out about it, which is totally true.
Dig in deep enough, and I think we'll both raise a glass to a p-value.
So we're also here to have fun, right?
And we want to be the leading voice for not only information, but also entertainment on
(05:05):
digital transformation and health care.
Everybody's talking about artificial intelligence.
Everybody's talking about digital transformation.
But everyone else is boring and dry, like day-old bread.
And we are people that everyone else wants to hang out with.
And we want to hang out with them too.
We are connectors.
And we learn from discourse, and humor just makes everything more interesting to
(05:30):
listen to.
Right.
And I think the other thing is we are seeking to be as curious as possible.
Yes.
I think one of the things I like about the way that we have conversations, Sarah, is
that we don't come at it with feeling like we know the answer.
We don't have expert-itis.
No.
I feel like we know the questions to ask, and we are coming up with very interesting questions
(05:54):
to talk about.
But we don't come in saying, well, this is the answer, because I don't think we know
the answer.
And I don't think anybody knows the answer.
Well, the eight ball.
Well, the eight ball knows the answer.
But I think on Tech It to the Limit, we recognize that nobody really has the answer.
And we're all kind of just reaching for it at this point.
So.
Yeah.
(06:15):
Yeah.
I think this podcast is going to be a virtual space for our listeners to kind of take advantage
of our networks.
Elliot, you know a lot of people because you're brilliant and connected and extroverted.
I am as well.
Well, I was going to say, you know just as many people for exactly the same reasons.
(06:37):
I don't, you know.
We're also humble in real life.
We're the most humble.
But we're going to bring together some great minds and we're going to learn from them from
that questioning that you referenced earlier from seeking to understand the problems and
the capabilities that are out there seeking to solve those problems.
(06:58):
So I am stoked for this experience.
And you know.
I'm super stoked.
But I want our audience to be as stoked as possible as well.
So why don't you tell them who is joining us for our first podcast?
Okay.
So this is very exciting news because he is an experienced physician and end user of technology,
(07:24):
but also an experienced engineer and fellow curious mind who has more than 40 patents
and is way ahead of the pack when it comes to identifying use cases for artificial intelligence
in preventative care.
I'm talking about the one and the only Dr. Paul Friedman of Mayo Clinic.
(07:44):
If you don't already know who I'm talking about, there really is only one distinguished
inventor at Mayo Clinic.
He has that designation and he just happens to be really humble.
Wait, that's an actual designation?
Yeah.
It's the one that you and I dream about.
It is.
I mean, that really truly is such an honor.
(08:06):
But you know, Dr. Friedman is also a really wonderful person and he just loves to talk
about healthcare and digital transformation and approach those questions that we were
talking about with humility.
I mean, he's a scientist, right?
So well, it was a fabulous conversation that we had with him and I'm so excited for the
audience to listen to it and hear it, which will be coming up after the break.
(08:31):
But first, what we're trying to do at the beginning of each podcast to kind of wet the
whistle: Sarah and I are going to share with each other our favorite, most interesting
healthcare innovation news articles right here on the air.
So excited for this.
Who wants to go first?
(08:52):
Do you want to go first?
You want me to go first, Sarah?
I'll kick us off.
All right.
So I'm going to start with a question.
I love reading, and I'm a pretty good scanner, and I subscribe to a lot of stuff.
I don't read it all every day, but when a headline catches my eye and I have the time
to go down that rabbit hole, man, do I ever have fun.
So I was reading an article from Axios by Tina Reed called "Coming to a Hospital Near
(09:19):
You: 5G," and that caught my eye because I have an interest in connectivity and how we can
make access to telemedicine and other digital tools more equitable outside of our urban
areas, right?
Where network connectivity is pretty accessible.
So, the gist of this article: they kind of made this wonderful analogy at the
(09:43):
beginning too, which I really love, that hospitals, the way they're built, are kind of, you know,
from the Stone Age, and they're like packed stadiums.
If you've ever been to a sporting event or a concert on game day, your device kind of works, but the connectivity
is either really slow or you can't get any service, right?
So 5G is a game changer for those types of events and for hospitals for the very same
(10:08):
reason, because it has less than a millisecond of delay, whereas your traditional 4G network
has like 70 milliseconds of delay in terms of data transmission speed.
And why does that matter?
Why is that relevant for healthcare?
Well, when you think about access to care and how telemedicine can really enable that
(10:29):
for underserved communities, think about a critical access hospital in rural America
that does not have a surgeon on hand, on site, but maybe you have robotic technology and
having the right connectivity speed, the right data transmission speed that enables emergency
telesurgery through robotic equipment to those underserved communities.
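For listeners who like to see the numbers, here is a rough back-of-the-envelope sketch of why that latency gap matters for something like remote surgery. The roughly 70 millisecond and sub-millisecond figures are the 4G and 5G numbers quoted above; the round-trip framing and everything else in the snippet are illustrative assumptions, not details from the article.

```python
# Rough back-of-the-envelope sketch: how the one-way network latencies quoted
# above (roughly 70 ms on 4G versus about 1 ms on 5G) add up for a remote
# surgery control loop. The round-trip framing is an illustrative assumption.

ONE_WAY_LATENCY_MS = {"4G": 70.0, "5G": 1.0}

def control_loop_lag_ms(one_way_ms: float) -> float:
    """The surgeon only sees the result of a command after a full round trip:
    the command travels to the robot, and video of the motion travels back."""
    return 2 * one_way_ms

for network, latency in ONE_WAY_LATENCY_MS.items():
    lag = control_loop_lag_ms(latency)
    print(f"{network}: ~{lag:.0f} ms from issuing a command to seeing the result")
```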
(10:52):
So you're kind of marrying that physical innovation in robotic surgery with really,
really high speed connectivity, which is needed.
You can't have any delay between what the surgeon is telling the machine to do in, let's
say, New York and, you know, Poughkeepsie, or I'm picking the place.
Poughkeepsie is probably not rural.
(11:13):
If you're listening from Poughkeepsie, I'm sorry.
I clearly need a lesson in New York.
Somewhere in Tennessee where Davy Crockett's family is still living.
Thank you.
Thank you for the save, Elliot.
So yeah, that's what I read and I was like, heck, yes, cannot wait for this to come to
a hospital near you.
Yeah.
And did it say where this is being done the most so far right now?
(11:36):
It didn't.
To be honest, I didn't go down that rabbit hole. The thing I like about Axios
is they're pulling from the scientific publications.
So they cited an article that was published in the Journal of the Society of Laparoscopic
and Robotic Surgeons.
So, you know, I didn't go down that rabbit hole probably because I was like trying to
(11:56):
feed my kids or do my day job, but check out the article in Axios.
It's called "Coming to a Hospital Near You: 5G" by Tina Reed, very well summarized, great
sources cited within it.
And if you have time to go down that rabbit hole, it's probably really exciting.
Excellent.
That's a lot of fun.
Yeah.
(12:16):
So I read an article from, oh, it's from The Telegraph.
This is from Nicholas Smith and it's titled "How AI is Learning to Read the Human Mind."
I heard that article.
You read it?
Yes.
It's really a lot of fun, right?
So this was about a research team in Singapore that's using an AI model.
(12:38):
They're using that with MRIs to associate brain activity patterns with different image features.
So like color, shape, texture, and semantics, things like that.
So basically what they're doing is they're having people sit in an MRI for like eight
hours and showing them 160,000 images and mapping their brain throughout the day as
(13:01):
they are showing these images to them.
And they're starting to now recreate those images again, right?
So if I know what your brain patterns look like when I've shown you all of this sample
data, and then you think of something else that may be similar to that sample data, I
can recreate that based off of all the analysis that I've done on your firing neurons during
(13:24):
this time.
And, you know, it's rudimentary, but they're doing it and basically showing the images
of the thoughts of these people.
And it's incredible.
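For the curious, here is a deliberately simplified sketch of the general recipe behind this kind of brain-decoding work: learn a mapping between brain-activity patterns and image features on the viewed images, then run it in reverse on a new pattern. The Singapore team's actual method isn't described in that level of detail here, so the model choice (a plain ridge regression), the array sizes, and the retrieval step are all hypothetical stand-ins.

```python
# Illustrative sketch only (not the Singapore team's actual method): learn a
# mapping from brain-activity patterns to image features on "training" images,
# then decode a new brain pattern by predicting its features and retrieving the
# closest known image. Random arrays stand in for fMRI voxels and image features.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train_images, n_voxels, n_features = 1000, 500, 64   # toy sizes, not 160,000 images

image_features = rng.normal(size=(n_train_images, n_features))   # color/shape/texture/semantics
true_map = rng.normal(size=(n_features, n_voxels)) / np.sqrt(n_features)
brain_patterns = image_features @ true_map + 0.1 * rng.normal(size=(n_train_images, n_voxels))

# Learn: which image features tend to go with which brain pattern?
decoder = Ridge(alpha=1.0).fit(brain_patterns, image_features)

# Decode a new brain pattern: predict its image features, then retrieve the
# closest known image as a crude "reconstruction" of what the person saw.
new_pattern = brain_patterns[0] + 0.1 * rng.normal(size=n_voxels)
predicted = decoder.predict(new_pattern[None, :])[0]
closest = np.argmin(np.linalg.norm(image_features - predicted, axis=1))
print("Reconstructed by retrieving training image #", closest)   # expect image 0
```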
So you know, Star Trek?
I was a Trekkie, if you can believe it.
So, no, it's okay.
I believe it.
I believe it.
So if you think about Star Trek and Star Trek The Next Generation, they were like fully
(13:47):
formed Starfleet, very futuristic, very, very advanced.
And then they came out with Enterprise with Scott Bakula.
Like Dracula?
Right.
Exactly.
Scott Bakula of Quantum Leap fame.
So yes, when Enterprise came out, it was all about how Starfleet was being created and
started and everything was rudimentary, right?
That's how I see this.
(14:08):
Right.
This is all very rudimentary right now, but I can very clearly see where this can go.
And to the point, so can the people in this article, where they were very much saying,
you know, this has a lot of opportunity and potential for people with disabilities, especially
in language, in cognitive disabilities or a neuromotor function disability where, you
(14:33):
know, they're not able to write, they're not able to speak.
But if we were able to generate content off of their thoughts, maybe instead of large
language models, then we can actually help them to express themselves in a much
more robust way.
So yeah, meaningful for them, right?
Very, very, very exciting.
And then it also goes on to talk about how this same kind of technology has the potential
(14:58):
to be used for ill by authoritarian regimes seeking to further intrusive
surveillance.
And like Minority Report kind of stuff.
Minority Report, or very 1984, but instead of just screens watching you all the time,
you know, maybe there is some kind of image resonator that's there that's constantly
(15:23):
listening to your thoughts as you walk by it.
I'm so glad you brought up the potential for these technologies to be used for good and
for ill because that's what Paul Friedman, our guest today, is going to get into when
we listen to the discussion and we talk a little bit later on about the case for humanities
in medicine and the case for humanities education in general.
(15:46):
And I think this is a critical factor, continuing to teach people about ethics, about questioning
what's right and wrong, about risk assessment, about stakeholder analysis, right.
And just like curious minds, right, teaching people to love learning from a very young
age and not just accepting an answer from a machine is going to be so critical in the
(16:11):
next five years.
Yeah, you know, we're on the future's doorstep.
So yeah, so that's my article.
And I thought it was really interesting, but you have given us a great segue into our next
segment.
So stick around with Tech It to the Limit and we'll be back with our interview with Dr.
Paul Friedman of Mayo Clinic.
(16:32):
Thanks so much.
Want to get in shape, but need an extra boost of motivation?
Do you wish you could take your health to the next level, but lack accountability?
Try the Healthinator.
This AI powered health coach will create a customized workout and nutrition plan just
(16:54):
for you.
It'll even police your snacking habits and bark motivational phrases at you like, "Get
off the couch, you potato," and "Quit whining, no pain, no gain."
I never knew I needed a robotic drill sergeant in my life until I met the Healthinator.
It's like having a relentless personal trainer who won't take any excuses or let me eat
that extra slice of pizza.
(17:15):
Thanks to the Healthinator, I'm shedding pounds and laughing my way to fitness.
Don't let your innate laziness hold you back.
Take the Healthinator today and hold yourself accountable to your health goals or else.
Welcome listeners.
Get ready to feel your heart skip a beat because our first guest on Tech It to the Limit is
(17:37):
none other than the heart rhythm mastermind himself, Dr. Paul Friedman.
He's the chair of cardiovascular medicine at Mayo Clinic and let's just say he knows
how to give AI a pulse, pun intended.
With over 40 patents under his stethoscope, Dr. Friedman's like a mad scientist mixing
science and technology to create revolutionary non-pharmacologic therapy for arrhythmias.
(18:01):
Wow, that was hard to say.
As an electrical engineer turned medical genius, Dr. Friedman's work in remote monitoring,
signal processing, and AI has earned him recognition as Minnesota's top inventor.
Move over Thomas Edison.
But wait, there's more.
Dr. Friedman's also a professor of medicine at Mayo Clinic, educating the next generation
(18:21):
of health tech nerds, clinicians, and researchers in the art of AI wizardry.
With over 250 original scientific publications to his name, he's building the evidence base
for AI applications in healthcare and making us all wonder what we've been doing all our
lives.
So put your hands together and give a warm welcome to the man, the myth, the legend,
Dr. Paul Friedman.
(18:43):
Welcome Paul.
May I call you Paul?
Yes.
Thank you for the introduction.
Wow.
I feel I have something to live up to now.
Yes, we all do thanks to your wonderful accolades.
Paul, I know you're a dad, also a grandfather.
So I was hoping you'd humor us and kick it off with your favorite dad joke.
Oh my gosh, I'm so bad at jokes.
(19:04):
This is probably the hardest question of the day.
But here it is: Helen Keller goes to a Passover Seder and someone hands her the matzah, the
sort of cracker, and she takes it in her hands and she goes, who wrote this joke?
That's as bad as it gets, right?
No, that's wonderful.
(19:25):
That's awesome.
Thank you.
Thank you for sharing that, Paul.
We'll kick things off with a little bit more serious question.
What inspired you to pursue a career in medicine and how did you end up specializing in cardiovascular
medicine specifically?
Well, great question.
I was looking to do something meaningful, important, impactful, something where you
(19:49):
interact with other people.
But I also always liked science.
And as an undergrad, I actually, my dad's an engineer and I was going to do electrical
engineering.
I said, you know what, I think I'm also going to study in, there was a liberal arts program
where I did my undergraduate work that was kind of a mixture of philosophy, history,
English, writing.
He goes, what are you doing that for?
(20:10):
And I said, well, you know what, I just like that, the humanities as well.
And so, you know, with that combination, it just seemed natural to go into healthcare.
And as I did spend time in it, I feel fortunate to be able to be a healthcare practitioner.
So Dr. Friedman, Paul, we've seen rapid advancements in AI and we're seeing an exponential growth
(20:34):
in new AI patent development, new AI research, you know, the democratization of AI that
came with the release of ChatGPT.
Everybody seems to be sort of an expert in AI nowadays.
What excites you, an actual expert in the field, about the future of AI and healthcare?
How do you see it impacting the field in the years to come?
(20:56):
Where are you most excited about its application?
So AI is a broad term, right?
But it's deep pattern recognition, and it's the ability to, in a meaningful way, call up knowledge
or recognize patterns in a way that a human being by him or herself may not be able to.
(21:16):
And I think, as we get into the conversation, you'll see more concretely what I
mean: it will turbocharge all of our capabilities.
And so it will help us make diagnoses sooner.
It will help inform treatments.
And you know, one of the first things that comes up is, is it going to replace us?
And maybe I'll just start there because that's like a core fear for many people.
(21:40):
And I think that it's like if you're going for a walk at night, you don't want to stumble
and fall.
So you take a flashlight and the flashlight doesn't replace your eyes.
It enhances them, helps them see further.
It's like an ultrasound machine versus a stethoscope.
You know, it's a quantum leap.
But health care is a very human and a very scientific and a very technical field.
(22:04):
And hopefully it will allow humans to maintain our humanity and sharpen our scientific toolkit.
I love that analogy of the flashlight.
That's perfect.
Very plain language.
I'm going to use that the next time somebody talks about how AI is going to take over the world.
I think you've got to give Paul a nickel every time.
(22:24):
Yeah, every time.
It's free.
Oh, thank you.
So thinking about the role of humans in partnership with AI, right?
You talked about how it's going to amplify our capabilities and, you know, on a broad
scale, especially when it comes to pattern recognition, what is the role of humans in
AI development and use?
(22:46):
What ethical considerations do we need to take into account when we're developing and
deploying AI, especially in the field of health care?
Yeah, maybe I could give some specific examples.
So and kind of address that in that context.
It'll both show how it can be really powerful, but how we have to be mindful of it on multiple
(23:07):
levels.
So everyone's familiar with an ECG, or electrocardiogram, also called an EKG.
You put wires on your chest and the electrical signals of the heart are recorded.
And now you can even do it from a smartphone.
And the EKG is a very powerful tool.
It can tell your heart rhythm.
It can tell a whole slew of other things.
But there are some things it can't do, historically could not do, like identify if there's a weak
(23:32):
heart pump or a valve abnormality.
It was not very good at those specific things.
And we created a neural network to do just that.
And so what we did was we took the ECGs from hundreds of thousands of people whom we knew
did or did not have a weak heart pump.
(23:52):
We actually knew their ejection fraction or the strength of the heart pump.
Normal is 50% or better.
And so to train the network, we would feed in an ECG and ask the computer network, is
there a weak heart pump present?
And it would read the voltages over this 10 second voltage recording.
And it would say the ejection fraction is 35%.
(24:13):
And we'd say, no, this one is 45%.
It has no idea.
But in the process of training, in order to minimize an error function, it starts changing
the mathematical values in all of its neurons.
Each neuron is a simple equation designed to mimic a human neuron.
If there's a big enough of an input, it gives an output.
And they're cascaded together to mimic human cortex.
(24:37):
So key point is to train the network.
Number one, we don't know what the network is looking at.
We just give it the whole ECG.
Whereas a human would read it by picking out components of it.
Here's a peak.
Here's a curve.
It might be looking at that or something else.
So to train it, it takes a lot of data and a lot of computing power.
But once it's trained, you can run it on a smartphone.
(24:58):
It's a simple math equation.
And when we're all done, then we can feed in anybody's ECG and say, is there a weak
heart pump?
Yes or no?
And it turns out it's incredibly powerful.
So we would measure a test by the AUC, the area under the receiver operating characteristic curve.
Flipping a coin is a 0.5.
A perfect test is a 1.
An exercise treadmill test that many of us are experienced with is a 0.85.
(25:21):
This test, the computer's ability to identify a weak heart pump, 0.93 from an ECG.
Wow.
Yeah.
And so that's what I was just going to get at.
It's like, does it really all boil down to confidence levels?
Yes.
So it's still another test.
And like any test, you have to know how to use it.
But it's a powerful one.
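To make that description a bit more concrete, here is a minimal sketch of what that kind of pipeline might look like in code: feed the raw 10-second voltage waveform to a small neural network, train it against a known ejection-fraction label, and score the finished model by AUC. The architecture, layer sizes, sample rate, and random stand-in data are all hypothetical; only the overall idea comes from the conversation, and this is not Mayo Clinic's actual model.

```python
# Minimal, hypothetical sketch of the idea described above: a small neural
# network reads raw ECG voltages, is trained against a known label
# (ejection fraction below 50% = weak pump), and is scored by ROC AUC.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

SAMPLE_RATE = 500                     # assumed samples per second
N_SAMPLES = 10 * SAMPLE_RATE          # a 10-second recording
N_LEADS = 12                          # a standard 12-lead ECG

class WeakPumpNet(nn.Module):
    """Tiny 1D CNN: it sees only voltages over time; no peaks or intervals are
    picked out by hand, echoing the 'we give it the whole ECG' point."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_LEADS, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)   # one output: evidence of a weak pump

    def forward(self, x):              # x: (batch, leads, samples)
        z = self.features(x).squeeze(-1)
        return self.head(z).squeeze(-1)

def train_step(model, optimizer, ecg, weak_pump_label):
    """One training step: guess, compare to the known answer, and nudge the
    weights to shrink the error, i.e. minimize an error function."""
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    loss = loss_fn(model(ecg), weak_pump_label)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy run on random data, just to show the shapes and the AUC scoring.
model = WeakPumpNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ecg = torch.randn(64, N_LEADS, N_SAMPLES)       # stand-in for real ECGs
labels = torch.randint(0, 2, (64,)).float()     # stand-in for EF < 50% labels
for _ in range(5):
    train_step(model, optimizer, ecg, labels)

with torch.no_grad():
    scores = torch.sigmoid(model(ecg)).numpy()
print("AUC:", roc_auc_score(labels.numpy(), scores))  # 0.5 = coin flip, 1.0 = perfect
```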
(25:42):
And I'll take this example one step further, and then I'll talk about some of the concerns
or things you have to be careful about.
We then said we've only given the computer the voltage time waveform, the ECG data.
We haven't told it: is the person a man or a woman?
What's their sex?
And we haven't told it the person's age.
(26:03):
And we know those impact heart disease.
So we added that information and it got no better.
We thought, how is that possible?
So then we thought maybe from reading the ECG it knows.
So we asked it.
And a computer reading an ECG can determine someone's sex with an area under the curve
of 0.97.
It's almost perfect.
(26:26):
It's better at determining someone's sex from reading an ECG than you or I are walking down
the street looking at somebody.
I mean, it's just a very powerful test.
And so, you know, and I could go on from there, but the main point I wanted to make is now
you can see how this test, which takes 10 seconds lying down, it's inexpensive, it's
available around the world, the ECG now available on a watch, can now do something that before
(26:50):
required a CT scan or an MRI scan or an echocardiogram, an ultrasound of the heart.
So it becomes very powerful.
That's the strength of it.
But you would also ask me, what about the weakness or concerns?
Well, if we trained it on patients in Rochester, Minnesota, will it work on people from South
Africa or South America or South Korea or Northern Europe?
(27:13):
Because we're all a little bit different.
And AI is very good at recognizing patterns it's seen before.
But if it's never seen something, it's not so good.
And so we have to make sure that it will work for the diversity of humanity.
That's one of the serious risks about deploying it and trusting it.
Is it tested on a wide enough population?
(27:35):
Are you finding that there are groups outside of your organization that are willing to share
large volumes of data so that you can do just that and test the model outside of your microcosm?
Yes.
So in fact, we have.
And for that specific example I gave, we've tested it first within our own system for
(27:56):
people of different self-described race and ethnicity and found that it's robust across
races and ethnicities.
It also actually can identify people.
If you give it an ECG and ask it once you've trained it, can you tell me what their self-described
race or ethnicity is?
So there are likely repolarization differences or epigenetic differences or dietary differences
(28:20):
or you know, it's hard to know what's causing it.
But there would be differences that the ECG can pick up.
And then we've collaborated with hospitals around the world where they would send us
anonymized ECGs and we would send them back what the AI says the heart pump is and they
have the actual answer so they can test it.
And we found that this particular algorithm is very robust.
(28:41):
But that doesn't mean all of them will be.
So we have to be very mindful of that.
So the ECG, you're essentially taking this electrical output from the heart and you're
translating that into mathematics, which is great, and then trying to identify the patterns
in those signals.
That's a very particular form of AI.
(29:04):
And so I think that there's a different set of ethical considerations when you're looking
at other kinds of AI like machine vision, for instance. The greatest example
of machine vision AI would be facial recognition, and how it gets used and who gets to use it
and who gets to approve who gets to use it.
And, you know, do you have some requirement for informed consent to use it?
(29:30):
So those are all the kinds of questions that swirl around in my mind as we look at the
different kinds of AI that are being deployed.
What is the ethical requirement or obligation of the people deploying it or developing it
to provide the safeguards that need to be in place?
What's your opinion on that?
(29:50):
Well, that's a great point.
And even for the ECG, I'll just include that one.
For example, what if an insurance company wants to use it?
Is that okay?
Because I mentioned that it can, I didn't mention, we did a clinical study and it very
effectively in the hands of primary care doctors, if they get a notification saying there's
(30:10):
a weak heart pump present, it very powerfully identifies the weak heart pump.
But not only that, it also identifies who's going to develop a weak heart pump.
And it does that because likely the electrical signals are changing before we even see changes
on a CT or MRI.
So it's almost like it's looking into the future, but it's not.
It's just a very sensitive test.
But then if it can predict, in essence, or identify earlier than current testing, can
(30:36):
insurance companies use it?
Are there constraints?
Is that unethical?
And the same thing holds true with image recognition, as you pointed out.
The short answer is whenever we do research, it goes through an institutional review board,
where it's reviewed by experts, including physicians, ethicists, scientists, who are
not part of the research and can independently look at it.
(30:59):
But it's a bigger question when we then talk about ultimate approval.
And so I think that is important.
And it's important that we test it both in terms of assessing that it works in diverse
populations and that if there are rights that are being risked, just as regulations were
passed around genetic testing, we may need to do the same thing here.
(31:21):
Yeah, thank you for that.
I'm going to, we're going to jump around, I think, because we're actually already approaching
our time here.
But one of the things that I'd love to get your thoughts on or any stories that you might
have, where have you seen in healthcare?
Where have you seen AI fail spectacularly?
And what was like the root cause of it?
(31:42):
Where did it fall down?
Was it people?
Was it process?
Was it technology?
Yeah, oh, for sure.
Or was it you?
Yeah.
It's not you, it's me.
So we looked up a specific question in chat GPT about whether a procedure could be done
(32:04):
for a special kind of defibrillator implantation right after cardiac surgery, because we had
a patient along those lines.
A colleague was asking me, do you think it's okay to go ahead and proceed with implanting
a defibrillator, a special kind of defibrillator right after heart surgery, because the lead
would be placed right next to where there were metal wires from the heart surgery and
there were other considerations.
(32:25):
And I thought, it seems like it would be okay, but let me ask chat GPT.
So I just said, I didn't put in any specific patient information because there's a risk.
It could leak out.
And so we have to be very mindful about what we put into these systems.
But you can certainly ask a generic question, is it safe to implant this kind of defibrillator
right after heart surgery?
(32:47):
And it said, yes, it is.
And it gave me two paragraphs.
And then I said, provide references.
And it gave me references, Journal of Cardiac Electrophysiology, 2019, et cetera, et cetera,
et cetera.
I sent that to my colleague.
I went out to see another patient.
He emailed me back 20 minutes later.
I can't find these references.
I said, what?
He said, yeah, they don't exist.
(33:08):
So I looked.
It turns out, as you're probably aware, that the term they use is hallucinations, that
large language models hallucinate.
That is, they're designed to fill in the blank in a given sentence, right?
But if you think about it, maybe it depends on how specific the sentence is.
(33:28):
Because if you say, "I am going," and complete the sentence, it could be "tomorrow," "to the
store," "to the hospital."
And so the longer the sentence, maybe the more specific the completion.
Crazy.
Right.
And because it does that, when you say, give me references, it'll give you references.
And they look very real, but they're completely fabricated.
(33:48):
So we have to be careful.
And we have to test, vet, and validate any AI before we use it in a medical environment.
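Here is a toy illustration of the "fill in the blank" behavior Paul describes. Real large language models are neural networks trained on enormous corpora, not bigram counts like this, but the failure mode is the same: the model continues text with whatever looks statistically plausible, so a prompt asking for references yields reference-shaped text whether or not the references exist. The tiny corpus below is invented for the example.

```python
# Toy "fill in the blank" model: count which word tends to follow each word,
# then continue a prompt with statistically plausible words. Not how GPT-style
# models are built, but it shows why plausible-looking output (including
# reference-shaped strings) says nothing about whether the content is real.
import random
from collections import Counter, defaultdict

corpus = (
    "i am going to the store . i am going to the hospital . "
    "i am going to the hospital tomorrow . references : journal of cardiac "
    "electrophysiology 2019 . references : journal of cardiac electrophysiology 2021 ."
).split()

next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def complete(prompt: str, n_words: int = 8) -> str:
    words = prompt.lower().split()
    for _ in range(n_words):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        # Pick the continuation in proportion to how often it was seen.
        choices, counts = zip(*candidates.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(complete("i am going"))      # a short prompt leaves many plausible continuations
print(complete("references :"))    # produces a reference-shaped string either way
```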
So a large language model is a far more general and more difficult to validate model in some
ways for the medical space than the ECG one, which is very focused.
(34:10):
And we're doing the same thing with image interpretation and others.
But I will make the point, because it expands our vision with an ECG, the example I gave,
if I asked an AI model to read an ECG and tell me whether or not a heart rhythm such
as atrial fibrillation, a specific kind of arrhythmia, is present, it would give me an answer.
And then I could look at the ECG myself and say, oh, you're right, or oh, you're wrong.
(34:34):
If I ask it to tell me something like, is there a weak heart pump present?
And someone hands me the ECG and they say, did the AI get it right?
I would say, I don't know, get an ultrasound or get a CT scan.
So it is an interesting thing as we test and validate these more sophisticated tools, we
have to be very mindful of how are we going to test it?
(34:54):
How do we know if it got it right?
And I think that'll be part of the process of learning how to use these tools.
I think that's an excellent point, Paul.
And it kind of helps connect the dots back to our very first question, which is making
the case for humanities in medicine and for humanities education in general, just
having an inquisitive mind, you know, asking questions, challenging assumptions.
(35:19):
And I love to hear that you and your colleagues are experimenting with generative
AI in a safe, intentional, purposeful way.
And then, you know, kind of using that opportunity to kind of poke holes in it, right?
It's like you're having a little debate club with ChatGPT.
That's exactly right.
(35:40):
So with the AI ECG models, just to be very concrete, they have been tested extensively.
And I think we'll see those in clinical use soon.
And we have found in a couple of real world clinical trials, when you put them in the
hands of clinicians, you increase the yield of finding important undiagnosed heart conditions
by about a third.
(36:01):
So that's meaningful.
Yeah. Now, when we're talking about large language models,
I think we need to do a lot of testing.
We need to move quickly.
We need to make sure that physicians, that ethicists, that patients, that all interested
parties, engineers and scientists are at the table.
I think letting ChatGPT go out in the wild for people to interact with it with the appropriate
(36:27):
warnings was fine.
But to try using it for health care would be a problem.
There was a recent article in the newspaper, you may have seen, how a lawyer used ChatGPT
to prepare a brief.
It came before the judge.
And all of the references, much like my own experience, were made up.
It did not go well for that lawyer or for that client.
But it just underscores the potential risk of misinformation, disinformation, and how
(36:53):
we just have to be very mindful in medicine and in broader societal ways and how we apply
these tools.
Yeah.
So Paul, we're coming up on the end here.
I think we might have time for about one more question.
I'm going to be selfish and I'm going to be the one to ask it.
Where are you not seeing people focus on AI in health care that you wish they were?
(37:17):
There's lots of untapped potential, but that's a really hard question because there are
so many efforts.
If you don't read the news every day, you're already out of touch.
And I think broadly speaking-
I've been telling him that for a while, Paul.
Patient diagnostic tools are being very rapidly developed, whether it's ECG, imaging,
(37:39):
other approaches.
Back office tools are being rapidly developed.
Tools for summarizing medical encounters are being rapidly developed.
So it's hard to say that there's an area where there isn't a lot of energy because it truly
stands to transform health care.
There are so many inefficiencies in the system.
(38:01):
Now, on the one hand, I also envision a scenario where a patient gets an insurance denial letter.
I ask a large language model to create a response.
It goes in, the insurance company asks its large language model, and we have two large
language models talking to each other.
Exactly.
So I think that it will be interesting because at the end of the day, what all of us want
(38:26):
is the human touch, but backed by full expertise.
So your question is almost impossible to answer because I really do think there's just such an intense
focus in so many areas that it's hard for me to say there's anything being ignored.
But I think what we want to make sure we don't lose touch with or ignore in this rush is our
humanity, and that these are tools to help humans be better humans and better doctors.
(38:53):
And if we keep that focus in mind, I think we really stand to do great things.
Yeah, that's great.
Hashtag machines make care more human.
I like it.
Here you go.
Hashtag humans make machines more human.
Oh my God.
We're going to have to get Jimmy Fallon on here next with all these hashtags.
(39:15):
Paul, it has been such a pleasure to connect with you today, and we're just really grateful
to have you on our inaugural episode of Tech It to the Limit.
Thank you so much for your time and for sharing your expertise with our listeners.
Thank you, and I can't wait to see the podcast.
I look forward to it.
Hey there, fellow tech enthusiasts.
(39:41):
It's Elliott from the Tech It to the Limit podcast, bringing you another exciting sponsor
for today's episode.
What if I told you there's a solution that combines the wonders of technology with the
finesse of modern healthcare?
Get ready to have your mind blown by the incredible capabilities of RoboDoc.
RoboDoc is not your average medical assistant.
This marvel of artificial intelligence will astound you with its out of this world features.
(40:07):
Introducing the RoboDoc Premium Ultra Platinum Deluxe Plus plan.
By enrolling in the RoboDoc Premium Ultra Platinum Deluxe Plus plan, you'll receive
not only unlimited access to virtual consultations and personalized treatment plans, but also
exclusive perks that will leave you questioning the very fabric of reality.
Unlock the mind bending dimension of healthcare with features like the Quantum Teleportation
(40:32):
Module, allowing RoboDoc to instantaneously appear at your location with the snap of your
fingers.
And let's not forget the Intergalactic Concierge Service, where RoboDoc will book your appointments
across time and space, because why limit yourself to one timeline for medical care?
Don't miss out on this mind bogglingly extensive opportunity to join the future of healthcare.
(40:56):
Simply visit RoboDoc.ai and use the promo code TECHLIMIT to get 10% off your annual
subscription to the RoboDoc Premium Ultra Platinum Deluxe Plus plan.
Remember, it's not just a subscription, it's a prescription for a healthier you.
Wow, that was pretty great having Dr. Friedman on today.
(41:22):
Not bad for a first guest, would you say?
No, I would say I thought it was fantastic.
I'm really, really impressed with his immense knowledge.
Yeah, and I love that nerds can also have social skills, because he's clearly really
intelligent and also very personable at the same time, which is rare.
There's you.
(41:42):
And you.
Oh, well, thank you.
So what's your golden nugget today, Elliot?
What did you take away from Dr. Friedman that you didn't bring into this conversation?
When I asked him what areas of healthcare he's not seeing people working to deploy AI
or develop AI for, he said, you know, I don't think there is a stone that's being left unturned
(42:04):
at this point.
And that surprised me.
I was really surprised by that.
And I really want to look more into that to see if I can find places where there is still
more work to be done.
Now there's plenty of work to be done all over the place, but what's in the shadows?
Yeah, absolutely.
No, I think I thought that was a great nugget as well.
I think my takeaway, something that just really stuck with me and that I'm definitely going
(42:26):
to use in conversations as I talk with colleagues about AI and healthcare is that flashlight
analogy.
The fact that it's not replacing your eyes at night, it's just helping you see better.
It's amplifying your sight in the darkness.
As we think about AI and healthcare and how it's going to help us amplify and scale our
(42:50):
knowledge and expertise and our skills right at the critical time when our workforce is
shrinking, that's very exciting.
That it's a tool to be deployed like a light in the dark.
Love that.
Yeah.
That was a great analogy.
Well, thank you listeners for sticking with us for the first episode of Tech It to the
(43:12):
Limit.
And you're welcome.
Yeah.
Sarah, it was so great recording this first episode with you.
I'm so excited for what the season has to offer, what we're hoping to get accomplished
and do.
For our listeners, check us out on social media.
Look us up on LinkedIn, Twitter, Instagram, and most important, if you would, leave us
(43:37):
a five-star review on whatever podcast app you're using.
That does a lot of really great stuff for us in terms of algorithmic optimization.
Speaking of AI.
Five-star reviews get us ranked higher in lists.
Yeah.
And don't forget the good old fashioned word of mouth marketing.
Tell a friend that you think would enjoy this kind of nerdery and insight into health tech,
(44:02):
and digital transformation in healthcare.
Tell one person about the podcast that you heard today.
And yeah, thanks for tuning in.
We can't wait to share more great content with you in the future.
See you next time.
See you next time.
Tech It to the Limit is produced by Sarah Harper.
(44:22):
And Elliott Wilson in consultation with ChatGPT.
Because they are masochists and also don't have any sponsors.
Yet.
Music was composed by the world famous court minstrel, Eben O'Donovan.
To consume more hilarious and informative content about digital transformation in healthcare,
visit us online at techitothelimit.fund.
(44:43):
And don't forget to follow us on LinkedIn, Twitter, Instagram, and across the event horizon.
See you next time on Tech It to the Limit.