Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Hi everyone, welcome
back to Tattoos and Telehealth.
Today we're going to talk about AI, how to vet your AI, especially with regard to your health, because AI is the new way of getting great information, and it can be great.
But there are also a few things that we want to talk about, that you just need to look out for and be aware of as we evolve
(00:25):
into this new culture of information.
So my name is Nicole Baldwin.
I'm a board-certified nurse practitioner.
This is my good friend and colleague, Kelly White, also a board-certified nurse practitioner.
Kelly's also board-certified in functional medicine, which is absolutely just amazing.
And so we are providers at Hamilton Telehealth, Hamilton
(00:49):
Health and Wellness, and our attorneys make us say that this is not to be construed as medical advice and this podcast does not constitute a patient-provider relationship.
(01:10):
So, Kelly, let's get started.
And as the culture is changing and as we are all utilizing AI, because no one, no matter how much school we go to, no matter how many degrees or letters or whatever behind our name, we're never going to be able to fit everything in our brain.
Like, I read studies all the time, but can I recall them at any given time?
No. Like, I'm not, you know.
(01:31):
No, we can't. We just have to keep, you know, putting the knowledge in, whereas AI has access to all that knowledge, right? To every study, to all the statistics.
But let's get into today's topic.
Something that I know was important for you to talk about was how to just be careful, especially with relation to
(01:52):
healthcare.
So I'll let you take it.
Speaker 2 (01:54):
So I think that one
of the things we need to think
about is, you know, remember back in the days when the Internet became a big thing. And I know, especially in my brick-and-mortar setting, patients would come in with, I Googled my symptoms, I know I have this, I need this, this and this, and it became a big crutch to healthcare.
(02:15):
So what should have been an open door to provide great information and to aid in the process of providing care to patients actually became a hindrance in the healthcare process, because patients came in thinking that they already had themselves diagnosed, they already knew what medications they needed, they already knew exactly what was going to be
(02:37):
happening in step one, two and three, without taking into consideration the fact that maybe they had this going on or that going on, or a family history of this, and so these different factors negated what they thought they already knew once they talked it out with their provider.
And so, you know, I coined the phrase: Dr. Google didn't go to school.
(02:57):
Not that Google was wrong, or that the internet searches they were looking at were false.
It just didn't necessarily 100% apply to that person's situation.
And so now fast forward to the world of AI, where I'm seeing patients that are messaging in saying, you know, I ChatGPT'd it, or
(03:20):
I asked AI, or I did this, and it says that actually I should be doing this.
And I've had to call patients a few times and say, well, you know, this is where it got it wrong. And I think that it's important to understand that in the world of artificial intelligence, it is only as accurate as the information that it is fed.
(03:41):
It's just like you and I. The intelligence that you and I hold in our brains, like you said, is only as good as the information that we gave our brains.
So it's only as accurate as the articles we read or the information that the lecture provided to us.
And so that's the same thing with artificial intelligence and the information that it gathers.
(04:03):
While it can spread its fingers a whole lot wider than we can, and pull all that information in and then give you a synopsis, it sometimes is pulling information from human resources, and that could be resources that are based on human experiences, based on someone's subjective opinion, not fact, or
(04:24):
something that is not necessarily evidence-based.
And so whenever we're thinking about artificial intelligence, we have to really be sure that the AI source we're using is pulling from validated resources.
So I only caution people in the sense of: absolutely use AI.
(04:45):
Please use AI. Nicole and I use AI.
We use it for lots of things, whether it is rewording verbiage so we don't sound as country as we can be sometimes, or whether that means that we want to type something up a little bit more professionally.
We use it to help us with flyers and handouts and all kinds of great things.
It does great stuff for us, but be sure that the AI source
(05:08):
you're using is pulling from reputable sources.
So, Nicole, I know that you use one called Open Evidence, right?
Speaker 1 (05:17):
Yes, I do, I do use
that.
I do use that.
I don't use that probably asoften as I should, but I do use
it.
There's a couple of differentones.
There's some medical ones thatI use, um.
There's a couple different ones.
There's some medical ones thatI use um, chat, gpt.
You know the is one that I use.
But even on the bottom of thatit always says for important
information, verify information.
(05:39):
Like, it always says that at the bottom in little gray writing.
Even on ChatGPT, it says, for important information, verify. It can get it wrong.
Yeah, and what is important is that, even though you may have the exact same symptoms as your sibling, you're still different.
(06:00):
You know, there's things that are different where a medication that's suitable for them may not be suitable for you. Especially if it's, I mean, if it's a family member, you have a little bit more in common.
But yeah, you know, there's so many other things to consider.
It's not just, I have a headache, what could it be?
There's so much more to you as a human, as a body.
If we could put everybody in a box to say, if you
(06:23):
have depression, this is what you need.
If you have anxiety, this is what you need.
If you have high blood pressure, this is what you need.
Then that would be easy, right? But that doesn't work for everybody.
There's so many variables that go into us choosing a medication.
You know, it's family history, it's your habits, it's genetics, it's your past medical history.
(06:45):
Do you have a history of this, a history of that?
Are you at risk for this, at risk for that?
I mean, even as far as your background.
Take hypertension: we start African-Americans on a different medication than we do for other ethnicities, because they're more prone to specific things, and so it boils down to so many more factors
(07:08):
than you could ever really enter into AI, per se.
Speaker 2 (07:12):
Yeah, and I think
that that's an important thing
to keep in mind when you're using those things.
So, like Nicole was saying, at the very bottom of your AI response there's going to be a disclaimer, and at the bottom of a lot of the medical ones we use, it'll list the articles that that AI used to pull its information.
So one of the ones that I use on a really regular basis is Open
(07:35):
Evidence.
I use it daily while seeing patients. I do, I do, I use it daily.
It stays open over on the side.
I use it when I'm researching stuff for different patients, because I see some pretty complex patients and they have a lot going on, and, like Nicole said, I can't keep all that stuff in my brain.
So there's times when I'm in the middle of talking to someone
(07:56):
and I'll just reach over to Open Evidence and I'll type something in, and it'll pull those articles for me, and I love that.
Those are very good, well-vetted articles from the New England Journal of Medicine, from PubMed, from JAMA, and I can click on that article and I can pull it up and I literally have the research right in front of me. And I know those articles and those sources, you know, like from the National Institutes of
(08:18):
Health.
These are very well-accredited sources that I know.
I'm giving my patient up-to-date, accurate, very well-vetted resources and information that I feel sound and good about, and so I think that that's the most important thing to keep in mind when you're going through it.
Not that AI isn't wonderful, because it really is, but again,
(08:39):
guys, it is only as good as the information that it is fed.
And so if it is being fed information that is biased by opinion, or by other people's personal experiences only, and there's no facts to come behind it (and by facts I mean good, vetted, large retrospective studies with a good number of people, not 100, not 500, but tens of
(09:02):
thousands of people in these studies), then that information may not be the best.
And so that's kind of the point that I want to hit home here.
Like Nicole was saying, we can't put everyone in a box.
You know, if you guys have been following us (and if you haven't, you need to like and subscribe and follow us), then you know that Nicole did this whole talk where there's
(09:23):
genetic testing available to see what kinds of antidepressants and anxiolytic medications are best for you based on your DNA.
So we don't all fit in that box, and so I really think it's important to keep that stuff in mind and what is best for you.
Guys, we want to be sure that the information that you're being provided is accurate and up to date.
I personally don't mind if you come to me and say, hey, Kelly, I
(09:43):
Googled my symptoms and this is what I think.
I'm happy to have that talk with you.
I'm glad that you're being a proponent of your healthcare, your body, your rules, and I can't help you if you can't help yourself.
So I love that you do that. Just be sure that the information is accurate, that it is informative
(10:04):
as well as it is informational. You're getting your information from sources that are fact-based.
Speaker 1 (10:10):
Yeah, absolutely,
absolutely. And so that's just something that, as we get more advanced and as we get more into AI, and people are utilizing it for things with regard to your health, AI doesn't know everything about you, and so it is important to make sure that you, yes, look into things.
We want you to look up things that it could be, but when you
(10:32):
do see your provider, whether it's us or whoever else, you know, your regular provider, I encourage you to say: when I dug into it myself, here's what I found; what do you think? Versus: this is what I have, this is what I need.
That kind of sets us a little off, because we don't
(10:53):
really know where you got your information, and it may be correct, but it may not be, and so most providers are okay with that.
So, yes, absolutely, I want you to research whatever condition you know you have, whether it's hypertension, diabetes or whatever, but it is important to go into it with, um, a little bit of understanding that it can get it wrong.
It can get it very wrong, um, and so we just want to make sure
(11:15):
we touched on that today for sure. All right, you guys.
Speaker 2 (11:18):
So I hope this
information was helpful.
I hope you found it informative.
Um, and as always, please like, subscribe, follow, share.
Let us know that you love us.
Leave us a message down in the show notes.
We will always reach back out to you, and if you want to come see us, you can find us at hamiltontelehealth.com.
Again, this is my great colleague Nicole Baldwin, and
(11:39):
I'm Kelly White.
We do hope to hear from you soon.
Speaker 1 (11:42):
All right, have a
good day, guys.
Speaker 2 (11:43):
Bye.