Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to Mediascape: Insights from Digital Changemakers, a speaker series and podcast brought to you by USC Annenberg's Digital Media Management Program. Join us as we unlock the secrets to success in an increasingly digital world.
Speaker 2 (00:30):
I am so excited to dive into one of our favorite topics: AI and healthcare and coordination, and a lot of really innovative things that my guest, Jim St. Clair, is involved in. Jim, I can't even name all the things that you're involved in.
Speaker 3 (00:40):
I can't name all the
things I'm involved in.
Speaker 2 (00:42):
But you started out
as a naval officer.
Is that correct?
You were in the Navy.
Speaker 3 (00:47):
Many moons ago.
That's correct, yeah.
Speaker 2 (00:48):
Yeah, and then everything you've done since then has really been about transformation, whether it's financial systems, health systems. So I'd love to hear a little bit about how you got into that space, where your interest is, and of course, now you are a top tech leader.
Speaker 3 (01:05):
You know, you're really an innovator. I feel like a top tech leader. Just take the praise. That's right, exactly. Thank you.
Speaker 2 (01:17):
Yeah, just thank you for being here, and I'm really excited because I do have a number of students who are in the healthcare industry, and so I think it'll be really interesting for them to hear from you what you're doing now. But before we do that, let's backtrack. You know, how did this become your passion point?
Speaker 3 (01:31):
I've been both blessed and cursed, I guess, over the course of the last 24 years, to become affiliated with transformational efforts of one type or another.
The first thing that kind of boosted my career was around the
(01:54):
topic of systems resiliency and infrastructure protection, which became a top topic within the Clinton White House, and we're still seeing the ramifications every day now of what we were trying to do back in 1998, '99 and into the turn of the century.
I then got associated with financial transformation from a
(02:14):
government financial management context, because I went to work with a large accounting and advisory firm and their public sector practice in the DC area. Interestingly enough, there are still lots of parallels: financial data transformation and things being done with financial data models and standards parallel very closely to health care.
(02:35):
So the opportunity came up to join HIMSS, the Healthcare Information and Management Systems Society, which is the largest health IT membership association in the world, as the Director of Interoperability and Standards.
So I was once again very much heavily in these areas of HL7 and what was the early days of FHIR and IG profiles and X12
(02:59):
and all this stuff that are standards for how healthcare transactions and healthcare data move around, and got the sickness and the virus for staying in health IT, which, of course, as many of your listeners know and you of course know very well, is a very target-rich environment when it comes to anything considered transformational, while at the same time, you know, very structured, bureaucratic
(03:19):
and disincentivized to do the right thing in many cases. And from there I worked with a number of big companies, small companies, spent quite a bit of time in federal healthcare, especially with the VA and supporting VA IT activities, and then got more into health tech, working with a couple of health tech startups, and now working across a couple of different startups and trying to get a startup of my own going,
(03:41):
dabbling in areas as we've talked about: AI and ML, patient-centric record management, patient digital identity, digital twins, etc.
Speaker 2 (03:50):
Yeah, well, it's not just "etc." I'd love for you to really dive into that. What took you there? Because I do talk to a lot of people who have backgrounds in computer science and data science and have, you know, translated that into AI startups. Is that the same kind of path and trajectory that you went through? You just saw, this is where we need to be.
(04:10):
This is something that's going to be very helpful.
Speaker 3 (04:14):
Yeah, so I'm thinking about that for just a minute. I have no academic background that supports being able to talk about algorithms and Python coding and large language models and the like. The closest I can offer is from a transformational perspective. A couple of companies ago, we established one of the first public sector partnerships with UiPath, which was a leader in
(04:36):
robotic process automation. The cool thing about working with them, and I offer them in particular, is they take the approach, before working with a customer, that you develop a center of excellence around robotic process automation, which really starts with business process transformation and business process improvement, the idea being, you know, don't bring in a bunch of robots if you had bad processes, because now you've automated a bunch of bad
(04:58):
processes.
And on the cusp of that, in some of the work I was doing, was this early thing called AI and ML. What I saw, as a personal aside, were so many cases where it was: oh, it's no longer about robotic process automation and improving your processes. AI will take care of all of that for you, and you can just throw in a bunch of junk and somehow, magically, the GPU will
(05:20):
calculate the answer you're looking for, and you don't have to put in the sweat equity to make your organization run the way it's supposed to.
So I've gotten more academically interested and involved with AI and ML and where that's going, but still very much feel strongly about these principles, and, not to ramble on, but they're coming up now in conversations with the groups and coalitions I participate in.
(05:40):
Healthcare AI — it's not just a matter of using ChatGPT or OpenAI or Claude Sonnet or something like that to try and solve a problem. It's understanding what data you have and what the best architectural approach is. Do you want to use a third-party system? Do you buy something? Do you build your own? Do you want it to run on your laptop?
(06:01):
Is it going to run in your data center, et cetera? And those are all questions that I think we're still wrestling with, as we wrestle every day with what AI looks like and with AI designed to run at the local level.
There's been more and more work done on these smaller language
(06:28):
models that can run locally and, of course, the Apple Intelligence we're all waiting for is built around a small model with, you know, a back-end, back-haul capacity to do larger compute in a secure environment. I really think that's where it's going to be. I think the ability to not only optimize your compute capabilities locally, at the level of your phone, but, on top
(06:50):
of that, have the right data science strategy to know: within this AI compute framework, I only need X, Y and Z to get what I'm looking for, and then, beyond that, I go to larger frameworks and parameters for other answers.
Speaker 2 (07:04):
Yeah, quick side note: my AI/ML professor, who created the program at Villanova for my MBA program, was at UiPath.
Speaker 3 (07:14):
Oh, fantastic, yeah, yeah. All great people at UiPath.
Speaker 2 (07:17):
This whole sector, UiPath — so small world. I'm sure there are some other connections; I'm thinking about it. We'll talk offline, but you mentioned several really key things: what kind of models we're using. A lot of people might not be aware that it's not just about LLMs — there are the smaller language models, and those are really important. And thinking about whether the data is secure, right?
(07:42):
When we look at California privacy laws, we look at the GDPR, we look at other areas, we see that a lot of data, even in healthcare, has been leaked, has been utilized and shared in ways it's not supposed to be. So with the advent of, you know, overlaying these other technologies, I know that data and privacy are still huge ethical concerns.
(08:03):
So can these be partially solved by these new technologies? To create more secure systems, to make sure that data isn't going out, that we all have our digital number or digital twin so that it doesn't track back to our specific personhood?
Speaker 3 (08:22):
Yeah, and I guess I'd first pontificate and say, of course, there was a great deal of hubbub made about SB 1047 being killed in California and Governor Newsom vetoing it, and, you know, whether it was good or bad. What concerns me from a regulatory framework, control framework standpoint is: SB 1047 may have been a really lousy
(08:44):
piece of legislation — I don't necessarily think it was, but let's just say for argument's sake it was. The problem is that we kill these things, and the same, quite frankly, California-based actors step up to argue about, you know, hindering innovation and stifling development when they have ulterior motives. And so now there's nothing.
And so, you know, you can argue philosophically about, you know,
(09:08):
whether it's better to have a bad bill than no bill at all.
Perhaps, maybe, just maybe, by passing it, we would get to see through and work through what those secondary and tertiary ramifications were. But right now there's nothing. There's nothing at the federal level, there's nothing at the US level, and in working groups I'm involved in — in the federal government, federal IT and health care — they're each
(09:30):
trying to sort out: what should we do, and what's the right answer?
This just happened to pop into my mind today as I was participating in a discussion. First of all, as background: going back almost 20 years, I was an IT auditor, and I audited a lot of IT systems using federal NIST standards, ISO standards, et cetera, and the premise there, especially back in those days, was: you had a box, you secured the box
(09:53):
, you hooked that box to another box, that made a network, you secured the network — and we're well past a lot of that now,
(10:14):
fortunately or unfortunately, just by the advent of technology.
I'll find the links and send them to you, if you like, about how steps now being taken to limit bias in AI actually inject bias in and of themselves — so it inadvertently introduces bias in an effort to stop bias, because of how the bias evaluation process works on top
(10:36):
of what a neural network is doing to compute the answer you're looking for. So it's almost kind of quantum-like: by virtue of observing the state, you're destroying the state it's in at the time, and it's going to be something different the next time.
So yeah, that's kind of a muddy answer, but I have been trying to stick to basics and, in several of the groups that I'm in, point out the number of control frameworks that are out
(10:57):
there, nationally and internationally, that we have always used for IT and, most importantly, risk management, and say: what's the risk associated with your data, with security, with privacy breaches, first and foremost, before using this tool? At a very high level, there is a tendency for a lot of industries to make what we call build-versus-buy decisions, which is: hey, can I just go buy
(11:18):
a solution that helps me do this?
AI, I think — and going back to what we talked about, the smaller models — provides you a lot more opportunity to roll your own. You can evaluate potential AI applications in the context of: well, what if I had something small that's enclosed entirely within my secure enclave or the secure perimeter of my network, and use that, and not have to come up with, you know, some
(11:40):
sort of agreement for information sharing with OpenAI to go out and use that model instead, where I don't really know where everything is going or what happens with it.
Speaker 2 (11:48):
Yeah, oh my gosh.
I'm over here living my best nerdy life right now.
Speaker 3 (11:53):
Fantastic.
Like I said, it's the last minute of the day. We're both kind of chugging along, so it's good to get engaged.
Speaker 2 (11:59):
Yeah, so I'd like to
hear about some of the
organizations you're currently involved in, these startups, and what you're trying to solve with each of them.
Speaker 3 (12:07):
Certainly. As I made mention of digital twins, I do spend quite a bit of time volunteering with some membership organizations: the Coalition for Health AI, the Digital Twin Consortium, where I'm a co-chair of the Health and Life Sciences Working Group, the Health AI Partnership, which is run through Duke University, and another federally oriented membership association called the Advanced
(12:31):
Technology and Academic Research Colloquium, which is ATARC, and they have a bunch of working groups around topics like zero trust and generative AI. And then I'm working on a couple of different companies, including one potentially to do digital identity and what we call zero trust architecture for your patient data and your records. And then, of course, how do we bring AI and ML into that?
Speaker 2 (12:53):
Yeah, so let's talk about what each of these concepts means for people who might not be in healthcare, who are just dipping their toes into AI and don't know...
Speaker 3 (13:02):
What's he talking about?
Speaker 3 (13:05):
Yeah, so I think we've made some reference to large language models. That is kind of the bread and butter of most AI/ML conversations nowadays. These are kind of the latest iteration of something that has been around for a long time, called neural networks, which is the ability of computer processing to sort out relationships between parts
(13:26):
of information or to solve a problem. I'm sure many people are familiar with things like being able to guess whether that's a picture of a cat or a dog, and how a neural network, you know, gets to that guess.
And the latest refinement is really what I refer to as a brute-force method of: hey, I'm just going to go out and read everything that's available on the internet and use that as inference to guide my neural network in training, with things
(13:49):
like human feedback and other techniques, to be able to create a language model that gives me answers when I ask it questions.
And for anybody who isn't necessarily using something: I'm a big fan of Claude Sonnet, and you can get it for free. It's very, very powerful. I used it like four times today, so I haven't paid a dime for it, and the answers are coherent and succinct, and if you
(14:10):
use it a couple of times a day, you'll end up saving yourself two hours in the course of what you would have been writing yourself. So that's a good example.
Now think forward and say: okay, it's not just going to be me saying, hey, here's a paper, summarize the top points of what I should be interested in and what could be done next. But: here's all my medical records, summarize where you see
(14:31):
trends around particular things that I should be concerned about and what kind of medication might be recommended.
I was just reading — and shared it with a couple of friends — an article about a survey being done around doctors using ChatGPT and similar large language models that are available, and it quoted one of the patients saying that she was aware that her doctor was doing this.
(14:53):
They were doing drug allergies. The doctor just used ChatGPT for a drug allergy question, and it wasn't that she was concerned about it so much as: oh well, that empowers me, because I could just sit in the comfort of my living room and run my drug allergy query through an LLM just as well as you could, Doc. So thanks. Yeah.
Speaker 2 (15:13):
Yeah. So what do you mean by zero trust?
Speaker 3 (15:16):
So, going back a couple of years, some people may have heard of SolarWinds, because I think it made it into the media. This was a really big security breach involving a network management company known as SolarWinds. SolarWinds has been super popular for having a really cool product that allows you to manage all kinds of network
(15:37):
things in kind of a semi-automated fashion, and, you know, it helps provide an orchestration framework for doing things with servers and load balancing and all that sort of thing.
What happened was that there were — it was suspected to be Russian state hackers — who were able to compromise the security certificate mechanism that allowed different instances of SolarWinds to talk to their customers' networks, and as an
(16:00):
outcome of that, there was more policy in the federal government to do what's known as zero trust.
I happen to summarize it that zero trust means: I don't trust you and you don't trust me. Which means, for any given transaction or information sharing, I have to provide you with a trusted means of identity verification that you're willing to accept and, conversely, the same thing. And we also have to agree, excuse me, not only on the parameters of
(16:21):
we also have to agree, excuse me, not only in the parameters of
that identity, but that ouridentity is adequate for
whatever it is we're asking todo back and forth with one
another and there has been atremendous amount of effort to
try and develop tools and vendorsolutions and architectures and
strategies to address that.
I happen to argue that identity is identity. In the DOD zero trust architecture, for instance, identity is
(16:45):
one of the five pillars that's considered there.
I happen to take it a little farther, in terms of identity being a case where, you know, you walk into a library and you find someone you know. Perhaps you have a close friend, an acquaintance, someone you've seen at Starbucks, and somebody you've never seen at all.
Well, obviously, with the close friend, you have established a trust framework that will allow you to
(17:05):
start talking immediately about the fact that your cat is sick or your mom didn't make it back from the bar last night at a good hour.
Then you have your acquaintance, where it's maybe: hey, did you go to class yesterday, did you get notes I can take? And then the person at Starbucks you've never seen before — it may be back to just exchanging names and "do you come here often?"
(17:27):
of: I don't trust you any more than X, and this is what we're going to share in the context of what we've established together.
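The library analogy above can be caricatured as a tiny deny-by-default policy check. This is an illustrative sketch with hypothetical names, not a real zero-trust implementation: no identity gets default trust, and what an identity may do depends entirely on the trust level explicitly established for it.

```python
# Toy sketch of the zero-trust idea from the library analogy:
# no default trust; every action is checked against the trust
# level explicitly established for a verified identity.

# Actions permitted at each established trust level (hypothetical).
PERMISSIONS = {
    "close_friend": {"share_personal_news", "share_class_notes", "exchange_names"},
    "acquaintance": {"share_class_notes", "exchange_names"},
    "stranger":     {"exchange_names"},
}

def is_allowed(trust_level: str, action: str) -> bool:
    """Deny by default: unknown trust levels or actions get nothing."""
    return action in PERMISSIONS.get(trust_level, set())

print(is_allowed("close_friend", "share_personal_news"))  # True
print(is_allowed("stranger", "share_personal_news"))      # False
```

The key property is the default-deny lookup: an identity you haven't established anything with falls through to the empty set, which mirrors "I don't trust you any more than X."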
Speaker 2 (17:33):
Yeah, that's such a great explanation. Thank you.
Thank you, groovy.
And the idea of digital twins. So one thing I was thinking of is: last time I went to the airport, I never showed my ID. I went through Clear, had them scan my eyes; I'd already had the digital approval or whatever. I know in California you can have your driver's license on
(17:53):
your phone, and Apple Pay or other cards on our phones. So there are a lot of things that we're really moving towards in this digital landscape, where we don't have to carry paper money, we don't have to have our license in our wallet or our purse. So can you explain how the digital twin takes this idea even further, how it relates to health care, and how you see
(18:14):
it as helping make sure people have more security over their data and their health records?
Speaker 3 (18:20):
Yeah, certainly. And first, to start with: the term digital twin was actually invented by NASA around the Apollo 13 mission, where — for anyone who's seen the movie — they come in, they dump a bunch of parts out of a box and say, well, this is what the folks up there have to work with; what can we figure out how to build? And so that's where the term came from: anytime you
(18:40):
create a digital emulation of a physical system, you can refer to it as a digital twin. And, as you can figure, with computing power growing as much as it has, I've been exposed to a tremendous number of working groups that we have in the Digital Twin Consortium — in engineering and climate change and manufacturing and pharmaceuticals.
(19:01):
Where are there ways to create a digital simulation of a physical process? That's where I go first. And — I could be wrong, but I think I heard, for instance — that NASA now requires digital twins to be created for all of their new acquisition programs. So before you begin physically building a rocket, you have to
(19:22):
have a digital twin that shows how that rocket works and the systems work, because it's a way to understand: are there potential failure points? Is it particularly expensive to do one thing versus another? And you work through those problems in a simulated environment, you know, before you build 22 feet of solid steel rocket and realize, oops, this doesn't fit together.
So in a healthcare context, what has been going on to date — and I
(19:44):
highly recommend the report on digital twins from the National Academies of Sciences, which is free on their website. Really thick, but it's free. Appendix D is all about biomedical research and digital twins, and numerous universities in the US and in Europe have done research to create digital twins, more specifically of human systems, like the heart or the lungs
(20:06):
or other aspects of that.
And then, of course, specifically for conditions: the National Cancer Institute sponsored the Cancer Patient Digital Twin program in conjunction with, I think, about five different universities, where, using high-performance computing and other resources, they focused on: what things for melanoma — which is skin cancer, dermatologically — can we
(20:28):
consider in a digital twin model?
It was kind of like what you and I were talking about right beforehand, which is: hey, I might have this condition, or I have a family history of this condition. What can a digital twin model for me, to know when I might get it, or ways that I can prevent it? Or: how is it that there are 50,000 other people who fit a similar genetic, physiological, demographic profile to myself,
(20:52):
and how has it manifested with all of them, and what could be recommended that I do or don't do in order to improve my health?
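The "what can a digital twin model for me" idea can be caricatured in a few lines. This is a purely illustrative sketch with made-up numbers and names, nothing like a real medical model: a twin is just a simulation state kept in sync with measurements from the physical system, which you can then run forward to ask what-if questions.

```python
# Purely illustrative digital-twin caricature: keep a virtual copy of a
# physical system's state in sync with measurements, then run the copy
# forward to explore "what if" scenarios without touching the real system.

class BloodPressureTwin:
    """Toy twin of one vital sign; the drift rate is a made-up parameter."""

    def __init__(self, systolic: float, drift_per_year: float):
        self.systolic = systolic
        self.drift_per_year = drift_per_year

    def sync(self, measured_systolic: float) -> None:
        # Update the twin from a real-world measurement.
        self.systolic = measured_systolic

    def project(self, years: float, intervention_effect: float = 0.0) -> float:
        # Run the simulation forward; intervention_effect models a
        # hypothetical lifestyle change that slows the drift.
        drift = self.drift_per_year - intervention_effect
        return self.systolic + drift * years

twin = BloodPressureTwin(systolic=128.0, drift_per_year=1.5)
twin.sync(130.0)                                  # latest patient reading
print(twin.project(5))                            # no change: 137.5
print(twin.project(5, intervention_effect=1.0))   # with intervention: 132.5
```

Real biomedical twins replace the one-line drift with physics- and data-driven simulation, but the loop is the same: sync from the body, project forward, compare interventions.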
Speaker 2 (20:58):
Yeah, and to that: we hear about designer babies, we hear about children, you know, and decision-making of: you want your child to have this hair color, this eye color. Do you think that in the future this might also become part of it? Like, really, not just some of the testing that they do for
(21:20):
pregnant women, but it'll be really advanced, and they'll be able to kind of almost future-cast what they'll be like.
Speaker 3 (21:26):
Yeah, I hope so, and just as recently as today I had this discussion. I am a very strong believer that — and to set the context — we've talked quite a bit about the silver tsunami. That's obviously our aging demographic, aging population, many of whom, wonderfully, also have chronic conditions — on average up to six chronic conditions per person. And against that, you
(21:48):
have a changing demographic of doctors and physicians themselves.
So you have an older population with rapidly growing chronic conditions that need care. You have the American Academy of Family Physicians, where the median age of the AAFP is 55 — and that was a couple of years
(22:08):
ago. So in about the next seven years the median age will be 65. That means that, you know, a majority of them will be eligible for retirement, and you just won't have those docs anymore, for anybody.
So you have a growing need for healthcare and resources, with a rapidly decreasing and currently unsolvable problem around the number of clinicians. So I said, you know, here's the thing we don't talk about.
(22:30):
We've talked about both of those problems. What we haven't talked about is the infrastructure piece, which is: right now, you and I are used to thinking, well, our health record resides at that hospital over there with that doc, and they've got it. Well, when that goes away, that won't be the case. And, as I'm seeing right now where I live, in rural health care, there's county after county after county that may not have
(22:50):
significant health care facilities, where you drive maybe one or two hours to get someplace, where you don't have an OB-GYN within 67 miles.
Those gaps are going to necessitate not only the ability to manage your own health care data — again using that zero trust model for who you share it with and how — but we'll have to leverage AI to be able to improve things.
(23:11):
So when I hear people say AI will never replace docs, I say: well, it damn well better, because pretty soon we're not going to have a doc sitting there. You're going to have to have something, or else, you know, we'll be facing the same challenges that developing nations face in our ability to deliver care.
So what can we do? That's a moonshot in terms of leveraging all of this AI and ML
(23:31):
and quantum computing and digital twins to develop, you know, predictive analytics, to develop the capabilities to communicate with people in whatever language or at whatever level, to convey ways to guide them to be able to do things. And I just think that's a necessity, and I say the clock is ticking for the next five or 10 years to solve it.
Speaker 2 (23:49):
Yeah, if that long. I think about radiologists and how there aren't enough radiologists, and that's where computer vision is really immensely helpful. Of course, you still need the human to look and make sure that AI got it right. It doesn't always, but it's pretty good most of the time. But that is a great case where AI has already come in to solve an issue that we
(24:09):
have, and it's just going to continue. And yeah, I think it's probably even less. Of course, I'm not the person who's in healthcare every day, in the space, but five to 10 years — I mean, just think about that. If we're not planning for it now, we're not going to be in a good state.
Speaker 3 (24:28):
That's okay. There's a prominent investor-influencer on Twitter, and he was saying the other day, in a really interesting thread: just live three more years. He's committed to saying: just hang in there three more years, because with everything he sees, he says three years from now we'll have the genetic breakthroughs, we'll have the technology, we'll have these things, and life expectancy will shoot through the roof, because we'll have all of these new ways
(24:48):
to address things. It's just going to take three years. I think three years is perhaps optimistic, but I think we darn well need to solve the problems in five.
Speaker 2 (24:56):
Yeah, yeah.
Speaker 3 (24:57):
Yeah, that's fair.
Speaker 2 (24:58):
So, gosh, you're
working on so many different
things.
They're separate, but they're interconnected. You talked about living in rural areas. I mean, you live in Mississippi, so you're seeing a lot of need for health care in your area. What are some things that we can consider? What can we, as just citizens, do to help propagate health care
(25:20):
systems, to help, you know, advocate for the research that you're doing?
Speaker 3 (25:25):
You know, I think the biggest thing — and this affects me and you and everyone else — is we just have to culturally, philosophically, take ownership of our own healthcare data.
And, speaking from a personal perspective, you know, I know what our family has been through and the way we approach proactively having our data, asking questions about it,
(25:46):
working collaboratively with doctors. The number of people I talk to where it's like: hey, I got diagnosed with cancer.
Well, what kind of cancer?
Well, they say it might be this one.
Okay, what type and stage?
I don't know.
What do you mean, you don't know?
If your tire just went flat, I bet you went to the right store to say: I've got a Nissan Altima and I need this kind of tire
(26:13):
and stuff — but you can't tell me what kind of life-threatening disease you have.
And, without being critical of the individual, it is to me systemic and reflective of the fact that people just haven't, for 10,000 years, thought about ownership of their healthcare data. We've obviously had, and continue to have, a very large population of people who, you know, work out and eat right — I track my calories, or I see what my Fitbit tells me and I forget — and that's fine. But that's always been one half of one percent of the population.
(26:33):
Maybe, you know, we're talking about doing this day to day, in the same way that you do your taxes or manage your checking account or know how much gas you have in your car. It's the same sort of mental principle, except now it's about knowing: what's my blood pressure been like, and what changes have I had in my food intake lately, and are there signs of inflammation, and do I have to look at my vitamin K
(26:54):
again or something like that.
You know, just throwing points out — people don't take ownership of that. They go to a doctor. A doctor, not much different than a shaman from 10,000 years ago, says: oh, here's the vapors you have, or here's the ill spirit, and this is what you're prescribed to do. And they say okay, and then you go back and take a medication that you may or may not look up the ingredients for, et cetera.
Speaker 2 (27:14):
Yeah, yeah, wow.
And even if somebody is living a very healthy lifestyle, eating all the right things — if they don't understand their history, their family history, the data — things still happen to people who are taking care of themselves every day.
Speaker 3 (27:33):
Absolutely,
absolutely.
And it's funny — I go back almost 30 years with a colleague of mine I worked with who was a triathlete. He was like 4% body fat, and the next thing you know he had to cut all this stuff out of his diet, was eating, like, whole grain bread and stuff like that, because he had dangerously high cholesterol. Even myself — I confess, I am on cholesterol medications, which hit me about two years ago, as a guy that's watched his cholesterol numbers, gotten tested every year, you
(27:54):
know, relatively low BMI, physically active, et cetera — and then all of a sudden my cholesterol was, like, sky high.
I'm like, wait a minute, I fundamentally haven't changed my existence and my behavior from last year to when you took these labs this year. And so I said: okay, I'm going to follow the recommendations. I'm going to reduce my body weight 15% — which I did — you know, do this and this, increase
(28:15):
water intake and all that sort of thing. And the numbers were still off the charts.
I'm like, that's not fair. But now it's all back under control, very easily. But again, it was — oh, you know, my wife, I think, mentioned: well, your dad had cholesterol problems. I'm like, oh, there we go.
Speaker 2 (28:27):
Yeah, the things that are just unavoidable no matter what we do — but we need to know them, and we need to know that we can at least try to mitigate them, and, if we have to, get on medication. How do you think this is going to change the landscape of
(28:48):
personhood, of healthcare, of all the things that we deal with in life — to be able to have these digital twins, to be able to take control of our own data and own it ourselves, and not just rely on whatever records we have from this healthcare system or that healthcare system if you switch a job or move to a different state?
Speaker 3 (29:00):
Yeah, you know, I don't know, because I'm pretty cynical at heart about a lot of things, and I take the approach of saying: well, we'll have to do this. There are so many rosy voices out there saying, oh, someday AI will do this and this and this, and my answer is: AI damn well better do it, because nobody else will be doing it. I'm sad to confess that I have always found that Elysium, with Matt Damon, wasn't just an entertaining movie.
(29:22):
It was kind of a foreshadowing of events, which is: you're going to have a larger and larger population, in the US and elsewhere, that's, you know, technocratically disconnected from what's going on, and you'll have global elites that have built their own digital twins and longevity medicines, and they're going to live in something that orbits the earth with lots of grass and looks really pretty.
Is that preventable?
(29:43):
I don't know if it's preventable, because I think in a lot of cases we don't want to prevent it, by virtue of our culture. But that is a risk that we take, or perhaps even a variation of it that I see, which is kind of a leapfrogging,
(30:06):
which is, you know, the US is always known for this, but a place like Kenya could leapfrog us, take these tools and make their people healthier and improve various factors that we can't even necessarily envision right now, because some tool hasn't been delivered yet. Nothing against Kenya, I just use it as an example.
Speaker 2 (30:17):
Yeah, yeah, of course, but it is interesting to think about how other countries are going to play a big role. I mean, we do know that a lot of the tagging for Gen AI, the labeling, is done in African countries.
Speaker 3 (30:32):
Yeah, good point.
Speaker 2 (30:33):
Where people are maybe not making a fair, you know, they're making a better wage than they could otherwise, but not a fair wage for the PTSD that they're getting, because they're the ones who are helping support our use of these technologies. But there might be somebody in that community who does have a new idea and is able to have, you know, a spark. So what is the one thing that you would impart to people who
(30:56):
are going on a journey, whether it's getting deeper into AI and how to leverage it, whether it's, you know, healthcare, education, financial services, whatever their industry might be?
And also, as a founder, how much are you seeing competition for funding and for dollars when there are, you know, a lot of
(31:19):
people who are trying to create AI-driven and technology-driven tools right now?
Speaker 3 (31:23):
Yeah, I'm sure everyone's seen the AI bubble, and then all you had to do was stick AI in your name someplace and somebody would throw money at you, and I think there's still some of that. I think it's also very Sand Hill Road, Silicon Valley oriented as a result, but I do see perhaps a bit more equity. So as people come up with creative AI solutions that are well-substantiated models with, you know, adequate teams
(31:46):
and staffing, they're perhaps getting a shot at things. We're not quite there yet, but I'm optimistic for next year, and there's just a tremendous amount of resources available for free to learn about AI and ML and build things on your own, and I'm sure there's a lot in your audience that has fundamental experience in computer science and coding, and this is just taking the next step.
Speaker 2 (32:07):
Fantastic.
And what is the best place if people want to learn more, they want to dive into your research, they want to connect?
Speaker 3 (32:15):
Yeah, there aren't many great places to learn about my research, but I'm always happy to connect on LinkedIn. I communicate there very frequently and share a lot of the stuff going on from other, smarter people on healthcare and AI and technology. Also want to give a shout-out for Health Tech Nerds. It's worth the 10 bucks a month to be a member of the Slack, and you'll work with, I think, one of the finest communities of health tech folks out there that you can hang out with, so
(32:37):
that's a good one. Yeah, that's a great place to start.
Speaker 2 (32:40):
Fantastic. And on your LinkedIn, Jim, you have a quote, artificial intelligence is whatever hasn't been done yet, from the 70s.
Speaker 3 (32:51):
I found that one day
and stole it.
I said that is perfect.
Speaker 2 (32:53):
I was looking for something to change for the banner back there and, yeah, that does seem to me to be kind of the career trajectory that you've had, and the interest and engagement you've had in AI and what the future could look like.
Speaker 3 (33:02):
Absolutely, and I do consider much of my intelligence to be artificial, so that works perfectly.
Speaker 2 (33:07):
Well, thank you so much for a lively conversation, for really sharing such robust depth about where we can go in the world of digital twins and healthcare, how we need to consider our data practices, our own healthcare practices, and so much more.
Speaker 3 (33:23):
Well, thank you very much. It has been a pleasure to be here, and I think you've been far too kind in listening to my pontification, so wonderful.
Speaker 2 (33:32):
So Jim St Clair, everybody, digital health and Gen AI expert bar none, and then somewhere in the back of the bus. And I will be back again with another amazing guest to share on Mediascape: Insights from Digital Changemakers very soon.
Speaker 1 (33:50):
To learn more about the Master of Science in Digital Media Management program, visit us on the web at dmm.usc.edu.