Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Charles Goldfarb (00:00):
Welcome to The Upper Hand Podcast, where Chuck and Chris talk hand surgery.
Chris Dy (00:09):
We are two hand surgeons at Washington University in St. Louis, here to talk about all things hand surgery related, from technical to personal.
Charles Goldfarb (00:16):
Please subscribe wherever you get your podcasts.
Chris Dy (00:20):
Thank you in advance for leaving a review and a rating; that helps us get the word out. You can email us at handpodcast@gmail.com. So let's get to the episode. Hi there, Upper Hand Podcast listeners, this is Chris. We're back with an episode that's dropping a week early. Chuck is going to be interviewing Dr. Alpesh Patel from Northwestern University. He is a spine surgeon, but he has expertise in lots of areas that are broadly applicable to all of our listeners, including artificial intelligence. So check it out. I'll be back for our end-of-the-year episode, which is going to release at the very end of December.
Charles Goldfarb (00:52):
All right, I am here today with a really special guest. Chris Dy is not with us, so it's going to be myself and Alpesh Patel. Alpesh and I have known each other for a long time now. He was three years behind me, finishing his residency at Wash U in 2004. Prior to that, I think he did undergraduate at Cornell and his MD at Northwestern. There's a theme coming, and I would love to talk about this on maybe another podcast. You got your MBA from Kellogg, which is super impressive, and you have been on faculty at Northwestern since you finished your fellowship. You are a well-known and renowned spine surgeon focusing on the cervical spine, as I understand it. And I love to talk anything businessy, or life, with you. And today we're going to talk about AI. So tell me what I missed. Tell me what you want to add. Yeah, welcome.
Alpesh Patel (01:56):
Yeah, thanks so much, Charles. I appreciate it. Are we going by Chuck or Charles? What's your official podcast name?
Charles Goldfarb (02:02):
Yeah, you know, I wanted to switch to Charles as I got older, because I thought it sounds more, you know, adult. But I can't do it. When I went to business school, I'm like, all right, I'm going to be Charles. I'm going to introduce myself as Charles. And I failed. So I'm trying, yeah.
Alpesh Patel (02:16):
Okay. I mean, you've been Chuck to me for, you know, 24 years, so I'm going to stick with that one. That's a hard habit to break. But, you know, thanks for the introduction; I do appreciate it. I will make one little mini shout-out, just in that bio: I did spend five years, five wonderful years, at the beginning of my practice at the University of Utah. Charlie Saltzman was my chair out there; he had hired me, and I had phenomenal mentors. It actually plays into a lot of my career trajectory, as you'd imagine, right? A lot of things you see early in your career have this tendency to propagate and really push you forward and propel you forward. So I had five wonderful years there with Charlie and with Darrel Brodke, the current chair. And then I came back to Chicago, and that is where I was born and raised. Outside of a period of time in college, then training in residency with you, and then those few years in Salt Lake, I've been in Chicago and affiliated with Northwestern for a large chunk of that time. Yeah, I'm excited to talk. We talk about a lot of things, but I promise you, I'll try to keep my answers short, and I will try to focus in on the AI conversation. We've been doing some great work at Northwestern on AI and machine learning and how they play into the world of patient care, specifically around musculoskeletal care.
Charles Goldfarb (03:43):
Yeah, perfect. Briefly, before we jump into that: you are the current president, I believe, of the Cervical Spine Research Society, and your meeting is coming up this week. Is that right?
Alpesh Patel (03:57):
Yeah, yeah, absolutely. I am the president of the Cervical Spine Research Society. We are an international organization focused on improving the care and outcomes of patients with cervical spine diseases through research and education. We have our annual meeting actually in Chicago, just about a mile from where we're talking right now, and it should be fantastic. A big component of that meeting, in addition to the human factor of getting people into a room together to exchange ideas, is the conversation around research. And a large chunk of the research that we see being submitted to the CSRS, to other societies, to journals, involves utilizing different types of AI-based methodologies, right? So I appreciate you letting me plug the meeting, and I appreciate the chance to talk about it, but it does overlap quite a bit. I think you probably see that in hand; we see that in a lot of surgical subspecialties as well, right?
Charles Goldfarb (05:00):
Yeah, for sure, it's interesting. And we're gonna... well, I was going to say that it's interesting how research is now far more than just about the nuts and bolts of a surgery or taking care of a patient. There's so much else that's brought into the discussion of research. It's all relevant. But you know, when we started in residency, there was none of this. It was just the nuts and bolts.
Chris Dy (05:23):
Please make sure to appreciate our sponsors. The Upper Hand is sponsored by PracticeLink.com, the most widely used physician job search and career advancement resource. Becoming a physician is hard. Finding the right job doesn't have to be. Join PracticeLink for free today at www.practicelink.com.
Charles Goldfarb (05:47):
All right, so let's jump into the meat of the conversation. I first heard you talk about artificial intelligence at an AOA meeting, an American Orthopaedic Association meeting. It's probably been five years ago; I don't remember exactly when it was, but it was an excellent symposium, and it got my wheels turning. I hadn't thought about it a lot before then. I've thought about it a lot since. So let's start with some definitions, easy ones. What's AI?
Alpesh Patel (06:16):
Yeah, these are good places to start, by the way, right? Because I think many people listening, myself included, have heard these words thrown around for a long time. I heard about artificial intelligence back in the 80s, when I was just learning what computer science was and what the basics of research were. I think the best way to think of AI is as a broad field of computer science, right? It is a large field of computer science that attempts to create systems and processes that mimic, or I should say really recreate, tasks that would normally have required human intelligence. I think the key differentiator here is that we're not talking about a general AI that mimics a human being. That's where science fiction takes us; that's where all of our books and movies have taken us, and certainly a lot of my childhood was watching those movies growing up. So we need to erase that from our conversation a bit. We're not talking about a general, sentient AI. We're talking about tasks, tools, and processes that we can think of as taking on the tasks that would normally require human intelligence.
Charles Goldfarb (07:34):
Perfect. And how do you think about the definition, the excellent definition you just gave, of AI versus a definition of machine learning?
Alpesh Patel (07:42):
Yeah, that's a good one, because you oftentimes hear these used together and used interchangeably. Think of machine learning as a subset, right? If AI is this large circle, machine learning is a smaller subset of that; it's an example of artificial intelligence. So think of it as a subset of AI that involves basically teaching computers to learn through different techniques, using different algorithms. And what that learning means can vary from technique to technique. Some of it is what's called supervised learning, where there's a human teaching an algorithm. There's unsupervised learning, where you have the algorithm sort of teaching itself through iterative learning. And then you've got other examples that blend the two, between supervised and unsupervised. (Sorry about that; I'm getting a text from my residents in the morning. Hopefully that doesn't pop up on your screen.) So that's where I think of ML. Machine learning comes in lots of different ways and forms, and it has lots of different goals, but generally speaking, when we introduce people to the concept, I would have most of them think of it as an example of artificial intelligence. It's probably the one that we have investigated the most, because it's mostly algorithmic, software driven, and that software is pretty widely available right now. So the limitation there really isn't access to the tools. Some of the other applications of AI require, I think, a lot more upfront technology, and that's where you won't see them as widely applied yet; they're sitting mostly in the hands of private companies or, you know, large collaborations.
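For listeners who want to see the supervised/unsupervised distinction Dr. Patel draws, here is a minimal sketch using scikit-learn. The data and features are synthetic stand-ins invented for illustration, not anything from the conversation.

```python
# Minimal sketch of supervised vs. unsupervised learning.
# Assumes scikit-learn and NumPy are installed; the data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # 200 "patients", 3 made-up features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical outcome label

# Supervised: a human supplies labeled examples (X, y); the model learns the mapping.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))

# Unsupervised: no labels at all; the algorithm finds structure (here, two clusters) on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5])
```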
Charles Goldfarb (09:30):
So the machine learning that affects you and me: if I go on to, you know, ChatGPT or Claude or whatever, and type in "what is carpal tunnel syndrome," is what we get there harvested from web crawlers, basically pulling as much information as possible from as many sites as possible across the internet? Is that where that information is gleaned?
Alpesh Patel (09:54):
Yeah, so ChatGPT is one that we all have heard about, right? And I think it's pretty common. I certainly have it on my phone, and I use it probably on a daily basis at this point.
Charles Goldfarb (10:04):
Do you pay for it? Do you have the $20-a-month one?
Alpesh Patel (10:09):
We pay for it. I pay for it because we're also utilizing it as part of our research, at levels of depth that we couldn't get otherwise. But for a long time, outside of that, for day-to-day life, I hadn't paid for it. You know, the searches I'm running don't require the speed that we're talking about with the subscription service, nor do they require the complexity. You know, I used it to come up with names for our Turkey Trot team; that offloaded some work from my head about a month ago. But also for real things around work as well. ChatGPT, again, is an example of AI. It utilizes machine learning methodologies, right? But it's not machine learning by itself. There's a large language model behind it. There are some predictive analytics that go into it. There's iterative learning as well. It also allows you to use voice, so there's a sound component to it. You can upload images to it now, so it actually has computer vision baked into it, which is another subtype of AI, right? So machine learning is one component of it. I would think of it almost, and this is overly simplistic, and more savvy listeners will challenge me on this, but I think of machine learning as some of the gears that run behind the machinery, if you will, if I can use an 18th- or 19th-century analogy. But ChatGPT is an example of where we see a potential impact of AI applications on us as physicians, on us as patients (we're all patients at one point or another), but even on us as individuals, maybe running businesses or running research efforts. This is the first large-scale, highly visible, well-talked-about, you know, AI application that we've experienced in our lifetime.
Charles Goldfarb (12:00):
Love it. So one of the things our listeners are thinking about now, if they're thinking anything like I'm thinking (and this is going to go in all different directions, because it's very hard to have a straight-line discussion about this), but you may have answered one of my questions, which is: which engine, I guess, is the right expression, do you use? And it sounds like you use ChatGPT. I've done a lot of trying to investigate, and actually in business school we talked about this. You know, there are a number of alternatives. By your choice, I'm assuming: did you prefer ChatGPT over Claude, over Google Gemini, over Perplexity, over Microsoft Copilot, et cetera, et cetera? Did you choose ChatGPT because it's better, or did you choose it because it is what it is? I don't know. Why'd you choose it?
Alpesh Patel (12:51):
Yeah, let's be clear on that one: by no means am I advocating for one machine over another. It was the first one available; it was readily available. Right? You learn in business school: go to market. If you've got first-mover advantage, you've got a big advantage. And I think ChatGPT lives that. It's got first-mover advantage, it's got brand recognition, right? So when you think about these AI-based, let's call them assistants for right now, or AI-based platforms, it's the one that everybody's heard about. So that's why we use it. You mentioned a couple of the other ones. Gemini, I've used that on my phone. Copilot, we have that available as well. Which one do I go to? You know, again, I wish I could say that in my day-to-day life that's driven by a lot of in-depth thought and pros-and-cons analysis. It's really like, well, ChatGPT is the first one that comes to top of mind for me. Now, what I would say is, from a research standpoint and from a clinical application standpoint, there's a really important conversation to have, which is to say: can we rely on one platform or one engine more than others? And the absolute answer in 2024, as we speak now, going into 2025, is that we cannot. I don't think anybody could say that one of these platforms is better than the others. When we look at it from a research standpoint, we have to really ask hard questions, which then inform our clinical applications: are these really reliable? ChatGPT is sort of a generalized large language model. It may not, current state, be trained on in-depth medical content, for example, let alone spine surgery content or hand surgery content or the very, very small details that go into our world. It also may not have the depth of knowledge, because it doesn't have the data in. Where they get their data from is proprietary; you don't really know for sure, and as the lawsuits come out over the years, we'll find out where the data comes from. But the thought process is that the data was captured through large, publicly available data sources, right? But when we think about which of these is best, what you're seeing right now in the market is, let's call them AI companies, because that's how they'll market themselves, but really application companies that are solving for very unique issues. They are going to come out and solve for a very specific problem, whether that be a specific diagnosis, in our language as physicians, a specific procedural solution, a specific thing. They're going to be hyper-specialized AI platforms or systems, rather than generalized platforms, because they're limited by the data that goes in. And with AI and ML, what we've seen from our work at Northwestern is that your data in drives the quality of the outputs, right? If your inputs are marginal or questionable or inconsistent, your outputs will be so as well.
Charles Goldfarb (15:51):
Love that. I'm going to zag a little again, but touching on something you said: a year ago, because things are moving very, very quickly, we heard a lot about limitations. You mentioned one of them, access to knowledge and in-depth information on things like carpal tunnel syndrome. One of the other limitations was the risk of incorrect information provided with a search, what some would label hallucinations. So I might type in "what's the best way to treat carpal tunnel," and the response may be rotator cuff repair or something just completely bizarre. What's your sense of the state of the quality of the information that we receive back from a query?
Alpesh Patel (16:41):
Yeah, I think, again, it depends on the question that's being asked. If it's a very specialized or specific question, I think you still need to be very skeptical of the answer. If it's a general question, you may find that the answers are fairly accurate and fairly reliable, and you may find that the more you ask it, the better it understands what you're looking for, right? So a lot of this comes down to the prompts, the asks that we put out there to these systems. And we have to get better at asking better questions, maybe. But some of it is also the actual data in. The example I'll share is a research project that we're submitting right now, so this is not peer reviewed yet; this is just going into the peer review process. We compared, for example, three or four different engines looking to do a systematic literature review. You remember, Chuck, when we were residents, we would do these systematic reviews. Our job would be to scour the literature, look for every peer-reviewed article we could find, look at all the references in those articles, and keep digging and digging and digging, and...
Charles Goldfarb (17:46):
Walk to the library to do that. Yeah.
Alpesh Patel (17:48):
I'm just trying not to date ourselves that much; I do think we had a computer to search on. But nonetheless, you would do that iterative process, right? So if you think about a human task that might be replicated by a machine learning algorithm, part of an AI platform, yeah, that seems like a repetitive human task. So let's compare. We actually compared our residents doing a systematic review on a specific topic in cervical arthroplasty and cervical fusion surgery against three or four different models, one of them being ChatGPT, the other three being more medicine- or healthcare-specific models. And what we found was that there was a fair amount of overlap: the models found a lot of the articles that the humans found, which was cool; that always feels good. There were some articles that our residents found that the algorithms couldn't find. But then there were articles that the algorithms found, and ChatGPT in particular found, that not only could we not find, they actually don't exist, right? We identified a number of articles that were suggested to be primary sources of information to include in a systematic review that actually don't exist. They've never been published. There are no authors of that name, and there are no papers of that name. So that's an example of an output, on a hyper-specialized question, that you have to be very skeptical of. Very skeptical of. But when I asked ChatGPT for, you know, a recipe for a really good old fashioned, it does a pretty good job of coming up with a good recipe for an old fashioned. So I think we as orthopedic surgeons, we as researchers, need to temper our expectations of these products for now; this is not a run toward some finish line. And then we also need to be really good advocates for our patients, right? As always, we want to educate and advocate for our patients, and they may find themselves with these technologies in hand, expecting them to be sources of truth. And so we need to be able to explain to them not just what the reality might be, but why they may not be able to put all their faith in that platform yet.
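One practical guard against the fabricated citations Dr. Patel describes is to check every model-suggested reference against a real bibliographic database before it goes into a review. Here is a minimal sketch that queries PubMed's public E-utilities API by title; the suggested title below is a hypothetical example, and a real pipeline would add rate limiting and fuzzier matching.

```python
# Minimal sketch: verify that a model-suggested article title actually exists in PubMed.
# Uses NCBI's public E-utilities (esearch) endpoint; requires the `requests` package.
import requests

def pubmed_hits(title: str) -> int:
    """Return the number of PubMed records whose title matches `title`."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": f"{title}[Title]", "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# Hypothetical model-suggested citation, not a real paper.
suggested = "Ten-year outcomes of cervical arthroplasty versus fusion"
if pubmed_hits(suggested) == 0:
    print("No PubMed record found; treat this citation as a possible hallucination.")
```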
Charles Goldfarb (20:08):
To pat our med students on the back: I did, I think, my second AI-related research project, and Carrie Reaver, who's now a second-year med student, did an amazing job. It's been accepted for publication in JBJS, which is awesome, looking at the readability of online patient education materials in English and Spanish, assessing how ChatGPT did with reading level and all that stuff. So there are so many directions. This is an example of a paper that wouldn't have been published when we were residents. But there are so many different important things to do, and different directions to go, using ChatGPT. So you're doing really cool stuff; this is a little bit of low-hanging fruit, but I think important.
Alpesh Patel (20:53):
No, I think it's really important. And I think that's an example right there, Chuck. We've looked at a similar thing, which is to say: hey, listen, we've invested a lot of time and effort into this patient education content, these materials. Is it readable? Is it understandable? And we're finding, again, and this is where a machine learning program has helped us get through that process of understanding it faster, the same things, right? It gives you a lot of insights. So that's an example of where we can take these tools and actually use them for a very specific question, a very specific purpose, a very specific output, and rely on them, right? They're hyper-specialized tools, and that's acceptable. When we treat them as general tools, we start to see limitations fast.
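For listeners curious what a readability check like the one Chuck and Dr. Patel describe looks like in code, here is a minimal sketch using the open-source textstat package. The sample sentence is invented, and published studies typically report several readability formulas rather than just one.

```python
# Minimal sketch: score patient-education text against a target reading level.
# Assumes the open-source `textstat` package (pip install textstat).
import textstat

sample = (
    "Carpal tunnel syndrome happens when a nerve in your wrist is squeezed. "
    "It can cause numbness and tingling in your fingers."
)

grade = textstat.flesch_kincaid_grade(sample)  # approximate U.S. school grade level
print(f"Flesch-Kincaid grade level: {grade:.1f}")

# Many groups target roughly a 6th-to-8th-grade level for patient materials.
if grade > 8:
    print("Above the commonly recommended reading level; consider simplifying.")
```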
Chris Dy (21:41):
Mark your calendars for March 7th and 8th for Checkpoint Surgical's next course, Restoring Hand and Wrist Function: Optimizing Surgical Results and Avoiding Complications. Join the course faculty, Dr. Kyle Chepla, Dr. Amber Leis, and Dr. Deana Mercer, in Las Vegas as they review management strategies to assess, preserve, and restore hand and wrist function. To learn more about this and other educational programs, please visit nervemaster.com. Checkpoint: driving innovation in nerve surgery.
Charles Goldfarb (22:12):
One important thing to mention for all the listeners is, you know, how do you handle HIPAA and protected patient information? So I'll tell you what we do; I'd love to know what you do. We have very specific guidelines at Washington University that I think are pretty well thought out. I cannot take patient information or an Excel spreadsheet and use the publicly available ChatGPT, but we have a protected playground environment called WashU ChatGPT, and in that environment I can do whatever I want, because nothing goes out; it only comes in. So there are things I can and can't do. We're on Zoom right now: the Zoom AI tool we can use; the Adobe AI tool we cannot use, and that's been turned off. What other examples, or how do you think about that? Or did I capture a good amount of it?
Alpesh Patel (23:12):
No, I think that's a really good example. I think everybody, whether you're sitting where we are, inside of a university system, which has a lot of processes in place for data protection and data privacy, patient privacy... Sometimes I find myself wanting to try things, and I'm told we can't do that. And while I'm frustrated as a researcher, I also understand why we have these constraints, because there is a massive privacy concern that comes with this. If I'm in a smaller group practice, let's say, as an orthopedic surgeon, or in a private company or small company, this is something that you really need to spend time thinking about: what are your security measures going to be? And I think you're finding, obviously, AI companies that are solving for that. That's the beautiful thing about capital markets: if there's a problem, you're generally going to find a solution. But yeah, we have similar kinds of constraints. Northwestern, for example, has a strategic partnership with Microsoft, and so that gives us some degree of freedom to work within a constrained firewall, if you will, for lack of a better term, around Copilot. And when we work with other algorithms, what we're doing is making sure that all that work stays internal, that we're not actually actively sharing any of that information externally. That's something that, 10 years ago, when we started this work on predictive modeling, was never a conversation. And so now it, thankfully, is a big part of the conversation.
Charles Goldfarb (24:43):
One of my... it's not a fun example, but it shows just how careful we have to be. There was a plastic surgery group in town that was posting pictures, you know, patient-privacy-conscious pictures: they were unlabeled, and the patients were really hard to identify. But there was metadata associated with the pictures, with patient names, and that led to a massive lawsuit. The message being: we have to be really careful here.
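The lawsuit Chuck describes turned on metadata embedded in the image files themselves. As a minimal sketch of the scrubbing step that guards against exactly this, here is how you might strip EXIF metadata with the Pillow library before an image leaves your system. The file names are hypothetical, and real de-identification also has to consider pixel content, burned-in text, and DICOM tags.

```python
# Minimal sketch: strip EXIF metadata (which can carry names, dates, GPS, and device
# info) from an image before sharing it. Assumes Pillow (pip install Pillow).
# File names are hypothetical. This does NOT redact anything visible in the pixels.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        pixels = list(img.getdata())         # copy pixel data only
        clean = Image.new(img.mode, img.size)
        clean.putdata(pixels)                # rebuilt image carries no EXIF block
        clean.save(dst_path)

strip_metadata("clinic_photo.jpg", "clinic_photo_clean.jpg")
```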
Alpesh Patel (25:07):
Yeah, no, for sure. And again, there are a lot of these anecdotes. There was an example of a group that, just to demonstrate a privacy risk we may not be thinking of, took LinkedIn images that physicians had posted, any image that included an X-ray or a CT scan of the head and neck, and they were able to take that data and, using an AI algorithm [Chuck: you're scaring me], recreate the faces of the people who were supposedly, you know, de-identified and anonymized. They could recreate faces of who they think these people might be. And now you may argue, well, they aren't accurate; they can't really do it. But the idea is, you know, we're not that far away. If the first go is an idea in that direction, people will iterate, and you'll see them figure it out. So even with things that we think are anonymized, we really need to ask twice and ask thrice, and really find, like you said, firewalls that protect our ability to try things out from, you know, unintended consequences.
Charles Goldfarb (26:20):
Absolutely. I want to ask you one question, which I think you could probably talk about forever, but I'm going to ask you to keep it brief, because I want to get to the medical applications of AI. Let's say you're doing a search on ChatGPT, and your query is, "what's the best way to treat cervical pain," I don't know, something like that. Explain to the listener (I hope I'm saying this correctly) that this is not a one-question, ask-ChatGPT-and-walk-away-with-the-answer interaction. Explain how you question, and then refine your interaction with ChatGPT, to get to the answer you need.
Alpesh Patel (26:59):
Yeah, that's a good point. From a research lens, we oftentimes ask one-off questions, because that's the research methodology. But you have to think about it in the real-world application: it's not a one-time game, it's a repetitive interaction. So, again, we look at how these models function through their interaction with us as end users, as humans. And this speaks to a bigger issue, which is that, in the current state, AI applications are really meant to augment us as people, not to replace us, right? So if they're augmenting us, they're helping us with knowledge, decision making, insights, predictive modeling, whatever it might be. So our interaction with these platforms is really important. So what's the iterative prompt? How do you get really good at prompting the different platforms to get to the knowledge that you want? At some point, not to get too meta here, there will be an AI platform that does the prompts for you, that will iterate your prompts for you as it gets to know what you're interested in, so that you can ask a relatively simple question and the AI will figure out what you're really asking. But we're a ways from that, probably in a good way; we have some barriers there. So it is a matter of asking, again... I think the take-home is: if you want to interact with one of these platforms, ask a very specific question, and know what you're looking for. You yourself should know, what is it that I'm really trying to figure out? Does surgery help cervical radiculopathy? And if so, how often? How frequently? What can predict a successful outcome versus an unsuccessful one? Now, I would argue these are questions that, as a patient, I hope they feel comfortable asking me. But in that compressed time that we have with patients in the office, we may not be able to get to the depth of the question; patients may not feel at ease asking these kinds of very, very specific, data-driven questions, or they may not even know what questions to ask. So that's where an assistant like this can give a patient information, if I'm being very optimistic here, information at least with directionality. It may not tell them the final answer, but it gives them an idea of what direction to head, what kinds of questions to ask, what kinds of things to be looking for. And that's how I try to educate my patients on how to interact with these platforms.
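As a concrete illustration of the iterative prompting Dr. Patel describes, here is a minimal sketch using the OpenAI Python client. The model name and the follow-up questions are assumptions for illustration, and the same start-broad-then-narrow pattern works with any chat-style API.

```python
# Minimal sketch of iterative prompt refinement with a chat-style LLM API.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name and the questions are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a careful medical information assistant."}]

def ask(question: str) -> str:
    """Send a follow-up question, keeping the whole conversation as context."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Start broad, then narrow: each turn builds on the model's prior answer.
print(ask("Does surgery help cervical radiculopathy?"))
print(ask("How often does it succeed, and in which patients?"))
print(ask("What factors predict a poor outcome? List sources I can verify."))
```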
Charles Goldfarb (29:34):
Yeah. We hear a lot about the number, or the percentage, of patients who walk into the office having interacted with the internet. And by internet, traditionally, we talk about Dr. Google, and I think in St. Louis it still is Google, but that is going to change pretty rapidly, I think, to interactions with ChatGPT or others.
Alpesh Patel (29:56):
So even right now, Chuck, if you go to Google today, the first thing that pops up is a Gemini answer. So even if it is Dr. Google, a lot of what patients take away might be informed by that top line that people see, which is a Gemini answer.
Charles Goldfarb (30:12):
Yeah, that's a great point. All right, let's get to the meat of this discussion. And I knew this was going to be a time challenge; I know we talked about keeping this to 45 minutes. We'll see; this is great stuff. So if you want to...
Alpesh Patel (30:24):
Do a part two? We can always do a part two on the in-depth applications, right? Because I think a lot of times what people are wondering is: what's out there right now? Do I need to incorporate this into my clinical practice? What is it going to look like in 5 to 10 years?
Charles Goldfarb (30:41):
Yeah, that's actually a good idea.
Alpesh Patel (30:42):
I'll let you lead the show. I'll let you lead the way. Yeah.
Charles Goldfarb (30:45):
We'll see. We'll see what we can get accomplished and what we feel like we left on the table. So I'll start with a broad question, and then we should obviously take it to a different level. Tell me how you, and how Northwestern, are incorporating AI from a clinical and educational perspective, maybe not so much the research lens, but clinical and educational.
Alpesh Patel (31:05):
Perfect. Yeah, I'll speak mostly for myself. I will say what we're doing within orthopedic surgery and spine surgery at Northwestern, and I'll give some examples of what we're doing at a hospital level. But I would caveat and tell everybody this is still relatively nascent. We in academics and in large healthcare systems are usually not first to market, not first to move toward things, right? So we're taking this a bit slower than you might see in the private world, in non-healthcare applications. So how do we use this? There are a couple of things. If we think about clinical work, we think about who a technology is going to benefit. Is it going to benefit a patient, or is it going to benefit us, the physician, surgeon, healthcare team? If we think about it as us, as surgeons: I think many people listening may already be utilizing some degree of an AI-based technology without necessarily overtly thinking of it as AI. One example would be if you do any kind of preoperative planning and you use any kind of software for preoperative planning, which in the spine world is, again, still relatively nascent, but we're moving in that direction. We work with one company that uses AI technology to design custom-made patient implants for spinal surgery that help guide us toward more reliable, consistent surgical outcomes in terms of spinal realignment. That's just one example. That is an AI-based technology that is already in real life, already being utilized, maybe not at scale, because there are some cost issues there still, and we still need long-term outcomes to know if we're making an improvement. But that's one example, which is pre-op planning. Another example, through a patient-facing lens, is around education. We talked about this earlier. Can we do things to make patient education simpler and more reliable? So one thing that we've tried (it's research; we've got to see if we can roll it out) is to take this one problem, which is MRI reports. I'm not sure if you have this in your practice; I know I have it in mine, and many of our partners do. Our patients nowadays have ready access to all of their healthcare information. Wonderful; that level of transparency is fantastic. But the information in there is not at a level of readability that most people can manage; even the most well-trained, well-educated person may not be able to understand it. And they'll come in with their MRI report in one hand, a highlighter in the other, and highlights everywhere. So we're taking these large language models and a couple of different algorithms and trying to convert that language into something that an average person in the United States, whether that be at a fifth-grade or eighth-grade reading level, can understand. That's a patient-based application that we'd love to see if we can roll out.
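A minimal sketch of the report-translation idea Dr. Patel describes, again using the OpenAI client as a stand-in for whatever model a health system would actually deploy. The model name and report text are invented for illustration, and a real deployment would run inside a protected environment like the ones discussed earlier. Pairing this with the readability check sketched above would let you verify the output actually lands at the intended grade level.

```python
# Minimal sketch: rewrite radiology language at a target reading level with an LLM.
# Assumes the `openai` package; the model name and report text are illustrative only.
# A real deployment would run inside a HIPAA-compliant, firewalled environment.
from openai import OpenAI

client = OpenAI()

mri_report = "Moderate C5-C6 disc osteophyte complex with left foraminal stenosis."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Rewrite radiology reports at a sixth-grade reading level. "
                    "Do not add findings that are not in the report."},
        {"role": "user", "content": mri_report},
    ],
)
print(response.choices[0].message.content)
```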
Charles Goldfarb (34:07):
Awesome. Those are two good examples. Let me ask you some more specific ones. [Alpesh: Absolutely.] So when you see a patient in clinic, do you dictate your note? Do you type your note? Are you using an AI assistant to create your note? What are you doing now? And then I'll tell you what I'm doing.
Alpesh Patel (34:26):
Yeah. So what I do is dictate a note using an application. That application has, since the time we started with it, been bought and sold and bought and sold, and it now sits inside of Microsoft. They take that recording, and it used to be that the recording was sent to a person to transcribe; I'm fairly certain now that it's transcribed by an AI algorithm. So we get it back. We are also trialing an ambient listening technology that listens to the conversation and creates a structured note out of it. We're in a trial mode there; that's, again, another partnership with Microsoft. What do you do, Chuck? What do you do in your practice?
Charles Goldfarb (35:13):
If I'm reading between the lines, and I never shy away from, you know, getting to the specifics, it sounds like you're trialing DAX Copilot. Maybe, or maybe you're not. But we did a trial using DAX Copilot. And for those who aren't aware, DAX is basically Dragon, which was Nuance, which was purchased by Microsoft. And I have to say, when I trialed it nine months ago, it didn't meet our needs, and so now we are trialing Abridge's technology. And just for those who aren't aware (I may have mentioned it on the pod, I'm not sure), Abridge is great; we're moving on to the second phase, and we're going to start purchasing licenses. But it's another cost center, another overhead generator. The way Abridge works, which is the same way DAX Copilot works, is I have to open my Epic application on my phone. I put the phone in the office; it listens, it records, and usually within one minute I have a note which has been generated. So it takes a conversation and reorders it, and it's super impressive. I would say the HPI and the assessment and plan are 95% amazing. The physical exam really depends on me awkwardly verbalizing what I'm doing, and it's super awkward. So Abridge, for me, has been a game changer.
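For the technically curious, ambient scribes like the ones Chuck and Dr. Patel describe are, at their core, a speech-to-text step followed by an LLM structuring step. Here is a minimal two-step sketch with the OpenAI client; the model names, file name, and note format are assumptions, and commercial products such as Abridge and DAX Copilot add much more (speaker diarization, EHR integration, compliance and review layers).

```python
# Minimal sketch of an ambient-scribe pipeline: transcribe audio, then structure it.
# Assumes the `openai` package; model names and the audio file name are illustrative.
# Real products add speaker diarization, EHR integration, and compliance layers.
from openai import OpenAI

client = OpenAI()

# Step 1: speech-to-text on the recorded visit (hypothetical file).
with open("clinic_visit.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Step 2: ask an LLM to reorganize the raw conversation into a draft note.
note = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Turn this visit transcript into a draft clinic note with "
                    "HPI, Exam, Assessment, and Plan sections. Flag anything uncertain."},
        {"role": "user", "content": transcript.text},
    ],
)
print(note.choices[0].message.content)
```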
Alpesh Patel (36:40):
Yeah, I think that's a great example, Chuck, of how not only do our technologies iterate, but we need to be able to iterate in our practices. We need to be able to be critical and say, hey, you're there, or you're not there yet. But that's just an example of... gosh, I will tell you, I don't know a single physician who loves documentation. I can't think of a single person who says, "I can't wait to go out and document today." And we know the implications; we know the downsides that this has had on the profession in terms of...
Charles Goldfarb (37:12):
Be specific there for us. Yeah, what are the downsides, that burden?
Alpesh Patel (37:16):
Yeah. I mean, the burden... it's not so much the idea of documenting. We've been documenting for generations; we need that. That's how you remember what happened in the past; that's how you guide the future. You take implicit ideas in your head and you make them explicit in writing, so that you can remember, and so that others can pick up for you if you're not there. There's a fundamental purpose to documentation. What's happened, though, in the last 15 to 20 years is that the purpose of documentation has pivoted. It's no longer about patient care; it's about coding. And it's about coding to the maximum amount you can while also meeting all the regulatory requirements. So you layer in those factors, and now documentation goes from a useful tool to a burden, and that burden gets magnified. How many patients a year do you see, Chuck?
Charles Goldfarb (38:03):
I see 60 patients a clinic, times, you know, 50 weeks a year, and occasionally some other clinics thrown in. So, yeah, that's 3,000. It used to be four to five thousand, but I've cut that.
Alpesh Patel (38:15):
Well, you're more important now, so you can. I'm right there; I probably see about three to four thousand people. We've got some of our partners in sports medicine who are seeing eight or nine thousand people a year. And then we think about our primary care docs, who may have a panel of only two or three thousand, but they're seeing them so many times. That's a ton of work, and that is time and effort that takes away from us talking to patients. And beyond the fact that it just sucks to sit at a computer is the idea that we're actively being robbed of time that we could be spending with a patient. So we think this is a driver for a lot of physician... whether you like the word burnout or don't, whether you think it's an injury or not an injury, a lot of the dissatisfaction of being a doctor pretty consistently comes back down to this documentation issue. So if you can take an AI solution and make documentation simpler, make it more reliable, and make it somewhat automated so it's happening in the background, that should make things better. I'm trying to imagine, in my skeptic's mind, how does that make things worse? Other than an accuracy issue, what negative would it create? It's really net positive.
Charles Goldfarb (39:34):
Yeah, I think that's all really well said. I'm going to add to what you said; you can correct me if you don't agree, because I'm not trying to put words in your mouth. So first of all, yes, when we dictate our notes, it's all about coding. It's also about cover-your-ass and medical-legal, and it's always been that. But with electronic medical records, you know, they don't have to interpret my handwriting to assess what I may have said, and that's good. And the second thing I'll say is, it's very clear that the documentation load, especially for those in primary care specialties, does increase the dissatisfaction with career choice, and the word burnout, as you said. I would say my experience with these clinic dictation assistants is incredibly positive. And I find that, you know, what we all don't like is: if I go to my primary care doctor, whom I really like, I'm sitting here and he's sitting there typing on his computer, and that's miserable. No one likes that. With this tool, I'm having a great direct conversation with the patient, not writing anything, and so the connection with the patient is markedly improved. And the price of this technology is not a huge burden, but it comes with... at least the way we're thinking about it is, if I'm going to pay... you know, some doctors have scribes, and scribes are very expensive tools. This is far less expensive, but it's still another cost, and I'd better see one or two more patients in clinic to cover that cost. And that's just what medicine does: every layer we add brings an expectation that we need to generate more revenue. Is that off base? Or what do you think?
Alpesh Patel (41:14):
Listen, I totally agree with you. I learned when I was your junior resident to always agree with you, right? What I would add, though, is: yes, you want to be able to offset the cost that you're adding to your practice. But there's another component when you're running a business, when you're running a group, which is the idea of, can you hang on to talent? So you may say, listen, yeah, this is an additional cost, and if I ledger it and line-item it, how do I balance it out? But is this going to make my physicians, my surgeons, my nurses, my MAs, PAs, PTs... is it going to make them more likely to leave me or less likely to leave me? So talent acquisition and retention, I think, needs to be added to the mix you just laid out. Which is to say: hey, listen, it may not give me the time to see an extra patient or two, but gosh, you know what it does? It makes my partners really happy. It makes my nurses really happy. It makes my PTs really happy. And so we're going to make this investment, because it helps us with retention. That's just another example of where you might find the application of these technologies beneficial to your clinical practices. And we see this being mirrored in non-healthcare industries. Chuck, you and I talked before about the benefit of getting an MBA: hey, you get a chance to see what life looks like outside of healthcare. And you see this applied pretty widely already, in a lot of different fields, especially in retail, consumer goods, those kinds of things. And the workforce component is a large part of that. A large part of their decision making is not just "can I get more productivity" or "can I reduce labor"; it's also "can I hang on to talent."
Charles Goldfarb (43:00):
Really, really good and important point. All right, I'm going to put you on the spot and maybe make you feel uncomfortable. And I'll start by saying, I have done this. Have you ever seen a patient, walked out of the room a little uncertain about the diagnosis or the most appropriate management, and gone to ChatGPT and said: help me with this. I have a patient with X, Y, and Z. What do you think?
Alpesh Patel (43:27):
So I haven't yet. I'll be honest with you: I have sat there and scratched my head and wondered, would this be a useful application of ChatGPT? But then I go back to the research we've done, and I say, listen, I can't rely on it to make a decision. So what do I do? I fall back, right now, into old habits, which is: I go back and look at the literature. I do a PubMed search and I look for relevant articles. Now, that's time intensive. And so, yes, would I eventually love to automate that? I would love to. But I've got to really come to a level of trust in the sources of data that I'm using, whether it's directly myself or indirectly through a platform, and I'm just not there yet.
Charles Goldfarb (44:11):
Yeah, I would say that's a really important point for all the listeners: maintain a healthy skepticism. For me, in my little world of congenital anomalies and syndromes, if I have an undiagnosed patient, putting in a constellation of phenotypes or appearance issues can be helpful. But that's more "add up a number of factors and help me understand," so very different from a true clinical intervention scenario.
Alpesh Patel (44:41):
So, Chuck, can I actually ask you a question? I know that I'm your guest and you're the host. Have you ever taken, or thought about taking, an image of a hand anomaly and putting just the image out there to a platform and asking: what is this anomaly? Can you detect it through a computer vision application?
Charles Goldfarb (45:04):
Yeah, it's a great question. We have not done that. We have talked about it as a next research step, because it is a great question. I'll give an example. I had an amazing young family come into the office last week with a child with a congenital difference. And you know what we do now: you and I, and anyone who's seeing a lot of patients, no matter what the potential diagnoses are, are just sort of doing pattern recognition. Well, pattern recognition is exactly what this is, and I didn't know what the diagnosis was. So in our research group, you know, we have the CoULD registry, which is really an amazing platform, and if we don't know the diagnosis, we check a box, and our classification committee considers it. But your point is excellent.
Alpesh Patel (45:51):
That's a great structure, by the way, to have in place, and I would commend you for being willing to put it out there to the world that sometimes you don't know. You're not going to say this, but you're one of the nation's experts in congenital hand differences, in anomalies. And for you to say, yeah, we may not know sometimes, and I'm willing to check a box that says "I don't really know": that by itself speaks to a learning environment, a growth mindset, all these super positive things. I also will say, I appreciate that you use the word "difference" as opposed to "congenital hand defect," which I think is what it was called when we were in training. So I think that's a great application. Now, I haven't used it that way. What I have done with patients, because sometimes we're having a conversation, and even if I'm not dealing with the level of rarity or uniqueness that you are, there are times when I will be talking to a patient about a condition, and it may be a cervical radiculopathy, a lumbar radiculopathy, a cervical myelopathy, really common conditions for us as spine surgeons, but obviously brand new to the patient. And I will just ask them: hey, can I have your phone real quick? Let me type in the condition, so that you can start to figure out what to read about on your way home or when you get home. I'm not sure if our phones are actively listening to us or not; as a lay person, I think they are, but I can't count on that. So I sit there and type it in for them, usually into Google: these are the things I want you to read about. Or I will specifically send them to, you know, surgical videos that I have vetted already. That's the depth of interplay right now that I am comfortable with, with patients. In general, I have had patients of mine, whether from Northwestern or from one of our other teaching institutions in Chicago, who come from a computer science background, and then it's a very different conversation, because they're interested in this and I'm interested in this. But I try to have a mental barrier up, where I'm trying not to let it influence my decision making, because I just don't know if I can rely on it or not. But you said a couple of things. Pattern recognition: these are broad concepts, but what does an algorithm do that we're already doing? How does a computer take what's implicit in our heads and make it explicit to the world? Pattern recognition is one. You do predictive modeling in your clinic all the time, Chuck. You're looking at a patient, you're looking at their diagnosis, you're looking at the structure of their hand, you're looking at the parents, you're looking at the kid, and you are putting all those factors into play to try to predict an outcome. That's essentially what a machine learning program is trying to do: a predictive model takes data and tries to predict an output. The one thing it can do differently is that it may find factors that we can't even identify, and that's the cool part. But the hard part, and this is actually what led us to that AOA symposium, to go full circle, is: how do we interpret the output if we don't understand what the inputs are in a machine learning model? If it's truly a black box, if machine learning and all of these applications are a black box to us, and all we see is the output, but we don't know what's going on inside, we don't know how the sausage is made, that's a problem, because then we can't be critical of it. And I think that's what led us to the AOA symposium. We thought, listen, this is growing in our research; it's growing in our clinical applications. Most orthopedic surgeons were not raised in this world and don't have the level of critique, or the critical tools, I should say, to apply here. Can we move the needle there? We know how to look at a chi-squared analysis. We know how to look at a regression, a basic linear regression model. We know what the pros and cons are there, because we've seen that over and over again. We need to get to that same level of comfort with some of the AI processes.
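To make the black-box concern concrete, here is a minimal sketch of the kind of outcome-prediction model Dr. Patel describes, with a permutation-importance step that at least reveals which inputs drive the predictions. The features and data are synthetic stand-ins invented for illustration, not a clinical model.

```python
# Minimal sketch: an outcome-prediction model plus a basic interpretability check.
# Assumes scikit-learn and NumPy; the features and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
feature_names = ["age", "symptom_months", "severity_score"]  # hypothetical inputs
X = rng.normal(size=(500, 3))
y = (X[:, 2] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure the accuracy
# drop, a first step toward seeing inside an otherwise opaque model.
result = permutation_importance(model, X_te, y_te, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```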
Charles Goldfarb (50:00):
Yeah, I love that. I'll tell you where my brain went, and then I'm going to ask you to sort of close this down by listing a few other things either you guys are doing or thinking about doing, and maybe that's a teaser for a follow-up, if you're willing. Where my brain went during your conversation was: I imagine this University of Chicago or Northwestern-based computer science expert coming into your office with, you know, cervical radiculopathy, and you sit down and you're geeking out on this with this person, and all of a sudden your clinic is totally off the rails. Thirty minutes later, you're still in the room, and you're like, my day is shot. But you're having so much fun talking about this stuff. Because, you know, clinics go off the rails in so many different ways, but a great conversation with the patient sometimes makes it okay.
Alpesh Patel (50:45):
By the way, I think it always makes it okay. Now, if you're my next patient, you would not agree with that, but if you're my current patient, I think you do, right? And it may not be just a computer science conversation. It may be more a matter of, listen, as we know, we had some great mentors, and so we got to see whether a patient feels heard. I think the number one complaint we hear from patients, in my practice and in many people's practices, is that the doctor didn't listen: "no one listened to me." And that might be because we're click-clacking away on a keyboard the whole time. It might be because we're already thinking about the 30 minutes I'm behind, the 45 minutes I'm behind. But if you have a way to connect with somebody, and if it's through something that you have a passion in, yes, I do find myself stuck in those conversations. I can usually tell when I'm in a rabbit hole, because the resident or the fellow that I'm with... you can see them take their phone out slowly and start looking at it a little bit. But then, again, I also have a great team, and my team, you know, fights hard, against me, to keep me on time.
Charles Goldfarb (51:51):
The funny anecdote, and I think this has been said on the podcast, is that you and I both learned from my current partner, Marty Boyer, that 90% of the time, the patient walks in the room, and within 30 seconds, within 10 seconds, we know the diagnosis. And so Marty will talk to the patient about their pot roast for five minutes, and that's the connection point. And we laughed at him, at his notes talking about the pot roast, but of course he got it right. I mean, he really has a way with most patients to connect in that regard.
Alpesh Patel (52:24):
Yeah, and this is a whole other conversation: that example you just shared, the ones you probably have from your practice, how you connect with the kids and their parents, the way we try to connect with our patients at Northwestern. That is all about building trust, right? And I will bring this full circle to the conversation around AI. The issue here is, I think, forever and a day we've believed in, and we've lived in, a healthcare world where people have trust; they come to the door trusting us. I think that level of trust is really in question right now, and I think some of that questioning is justified. Some of it is probably a very emotionally driven questioning, based on what's happened in the last couple of years. But that ability to create and generate trust is still something that we have as humans, and I don't know that an AI algorithm or an AI application is going to be able to replicate or replace it. Now, that can be challenged, for sure. I trust my Google Maps to tell me how to get home, to cut through Chicago traffic: what's the right way? I have trust in Google Maps to do that. Sure. If I'm searching on Amazon for something for my kids for Christmas, I trust that Amazon's algorithmic process will show me things that, you know, 11-year-old boys or 15-year-old boys like. Hopefully it's all safe-for-work stuff on the 15-year-old-boy side. But I think healthcare is different, and I think this is the fundamental question when we think about long-term application of AI technologies. When we're talking about back-office stuff, revenue cycle management, marketing, regulatory, contracts, finance, we're going to get to that very, very quickly, if we're not already there in some practices. But patient-facing applications have to begin and end with trust, and that's still where we come in. And hey, you know what? If that automated scribe you describe gives you five minutes in the room to talk about, you know, what a patient's going through, or to talk about their job or their interests, or whatever, great, because that's the win: building trust. That's the win. That's where a surgeon or a physician working with an AI technology is a better option than a surgeon working by themselves or an AI technology working by itself. I think that's where we go: this idea of human and machine intelligence being a combined effort, not a competitive one.
Charles Goldfarb (55:08):
All right, we really do need to close it down. I love some of the back-office stuff you mentioned. You know, the coding process: we hire coders, who are wonderful, but you can see the writing on the wall for where the future lies there. Tease us with a couple of other things you're doing, or might be doing, in the future.
Alpesh Patel (55:25):
Yeah. So on the back-office side, there's so much to talk about, and that's, not surprisingly, where you see a lot of the early technology being applied already. So we can talk about that anytime; that's probably more of a business school conversation, but hopefully a lot of the physicians listening care a lot about that as well. I would say, on the surgeon-facing side, where our research is going is around trying to get to better predictions of outcomes. Can we utilize machine learning insights to get to better predictions, so we can better inform our patients what to expect, let's say, after surgery or after treatment? Because that might affect their decision making. All of this comes down to supporting really good decisions in healthcare. We also want to think a little bit about patient education. I mentioned earlier one example of how we're trying to translate medical or radiology language into patient-facing language, English at the right reading level. And then, how do we use AI technologies to better communicate with patients broadly? These are some of the things we're working on from a research standpoint that I'm super excited about when I think about the next five, six years of our research efforts. That's the fun part. And there are some really, really tangible applications to solve problems for patients.
Charles Goldfarb (56:44):
Love that. Look, I've taken an hour of your time, and we talked for quite a while the other day in preparation for this. This is gold, and I think our listeners will agree. But yeah, I'd love to circle back and go into more depth. Tell me who your most important partner is in your research and in moving this field forward.
Alpesh Patel (57:04):
Yeah, so I'll name one person: Sri Divi is one of my partners at Northwestern. He joined us in 2020, and he's leading our machine learning and orthopedic intelligence effort moving forward. He's fantastic. I'd love for you to meet with him and consider, you know, having him jump on; he's really been the human engine behind our machine learning work. More broadly speaking, though, we are just a couple of researchers in a really large institution, and we're lucky to have really great colleagues with a lot more depth of expertise in computer science. And the last group I'll mention is actually our students and our residents. Sri has helped develop this: we have an immense pipeline of talent, going all the way up to the undergrad population at Northwestern, who are deep into computer science and data science. Whenever they talk, I'm so blown away by how bright these students are; I also need to stop and take notes, so I can go home and figure out what they're talking about. But they've also really been an unbelievable source of ideas and talent for us.
Charles Goldfarb (58:19):
Love it, love it. All right, thank you for your time. I look forward to continuing this conversation, and good luck with your meeting this week.
Alpesh Patel (58:26):
All right, thanks so much, Chuck. Thanks, everyone.
Charles Goldfarb (58:30):
Hey, Chris, that was fun. Let's do it again real soon.
Chris Dy (58:33):
Sounds good. Well, be sure to email us with topic suggestions and feedback. You can reach us at handpodcast@gmail.com.
Charles Goldfarb (58:40):
And remember, please subscribe wherever you get your podcasts.
Chris Dy (58:43):
And be sure to leave a review; that helps us get the word out.
Charles Goldfarb (58:47):
Special thanks to Peter Martin for the amazing music.
Chris Dy (58:51):
And remember: keep the upper hand. Come back next time!