Episode Transcript
Dr C (00:07):
And welcome back, Dr. Washington, to the Life Podcast. We're so happy to have you back. Your episode was one of the ones I received so much feedback and so many questions from. And so you're back today. And thank you, because today we're gonna be talking about AI and well-being.
(00:27):
So we talked about misinformation, disinformation, malinformation. So today we get to talk about AI and well-being. So let's jump right into it. Let's start with our first pillar, Learn. The term AI is everywhere, but many of us feel overwhelmed. So, from a wellness perspective, how is AI actively creating and amplifying misinformation through, maybe,
(00:51):
deepfakes or chatbot algorithmic biases? Can you tell us a little bit about that?
Dr Washington (00:57):
Yeah, so I'd like to try to widen the aperture just a little bit before we get that deep. So, you know, what we're calling artificial intelligence is really a large language model. It's machine learning, you know, and things like that. So we've termed it artificial intelligence.
(01:18):
And from my research, I try to understand that definition of what artificial intelligence is. Because when we say that, it implies a meaning onto artificial intelligence that it may not have. So, the first thing is that when you're looking at things like
(01:39):
deepfakes, the chatbots, algorithmic biases, those can all be mitigated by the human. So we have the agency to read and to validate, to look across different platforms to make sure that how we're interacting with the artificial intelligence is meeting our instinct.
(02:00):
I call that epistemic responsibility. So the responsibility always stays with us for what kind of information the artificial intelligence gives us and how we should be interacting with it.
Dr C (02:16):
So that sense of distrust is so corrosive to our well-being, and it makes us feel isolated. So that brings us perfectly into the pillar of Inspire. In the face of this, could you share a community story showing how people turned maybe an AI disinformation situation, right, a crisis, into an opportunity for digital
(02:38):
literacy?
Dr Washington (02:40):
Yeah, absolutely.
There are so many examples out there, Wanda. I mean, if you just watch the news right now, you can see all of those different examples in play, especially if you're practicing some form of media literacy or critical media literacy and trying to understand why certain images and certain videos and things like that are being
(03:03):
perpetuated out there.
A great example, and I'll just give you a specific one, would be wrapped around public speaking and politics. There have been several videos that have been created and pushed out as being real. You have the one where the Pope delivers a message, you've had
(03:26):
one where even Trump and Obama have said things, and they're not correct. But what I like in media right now, and I know we give media a hard time, but what I like in the media is that a lot of the outlets are bringing those things to light and saying: this is incorrect, this is how you know it's incorrect, and this is
(03:47):
what you should do about it. That is important, because media literacy has two sides to it, and digital literacy has two sides to it. You want a protective side, so that you can discern when something is fake, but you also want a creative side, where you can display that skill and make sure that what you're putting out in the world is factual, or you're
(04:11):
debunking.
Dr C (04:13):
Yeah, yeah. So turning a moment of fear into a moment of empowerment by really, right, gaining or enhancing your skills and your knowledge when it comes to artificial intelligence.
Dr Washington (04:27):
Exactly.
Dr C (04:27):
I love that.
I love that.
And it shows that we have agency, right? So let's give our listeners that agency and move to our third pillar, Flourish. What are some actionable tools or daily habits that can build our AI-aware information immune system?
Dr Washington (04:47):
The first piece would be to experiment with artificial intelligence, but that activity has to be very self-reflective. So if you're using artificial intelligence to produce something or to put something out in the world, it's understanding what you're using it for and what you're trying to
(05:07):
do with it, or what you're trying to influence. And then you've got to, of course, ask yourself the question: is what I'm putting out there factual? If not, then why am I putting it out there? And if I'm putting it out there as a joke, then I need to make sure I'm, you know, letting people know it's a joke. Or if I'm using an image... just recently I shared an image. I knew that certain pieces of the image were incorrect.
(05:32):
So I corrected that in the body of my post, just to make sure everyone knew. The biggest piece, though, Wanda, is playing with the technology. You have Perplexity, you have the Claude versions, you know, you have the OpenAI versions, you have the Google
(05:52):
Geminis.
It's playing with the technology, learning the technology, but also understanding how powerful the technology could be and how to use it ethically. I wrote a paper with my brother that talks about artificial intelligence stewardship, and I'll share that with you so you can share it with your readers.
(06:13):
Yes, absolutely.
But we wanted to reframe our understanding of digital technology usage around a stewardship model instead of a disruption-type model.
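[Editor's note: for listeners who want to take "playing with the technology" a step further than the chat interfaces, here is a minimal, hedged sketch of the self-reflective habit Dr. Washington describes, using OpenAI's Python client. The model name and prompts are illustrative assumptions, not anything named in the episode; the same two-step pattern works with Anthropic's, Google's, or Perplexity's APIs.]

```python
# A minimal sketch of experimenting with an LLM self-reflectively:
# generate an answer, then make the verification step explicit.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; "gpt-4o-mini" is an illustrative choice.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: let the model produce something.
answer = ask("In two sentences, why are AI-generated videos of public figures hard to spot?")
print("Model's answer:\n", answer)

# Step 2: the epistemic-responsibility step. The model can list what to
# check, but the actual checking (outlets, fact-checkers, originals) is on us.
checks = ask(f"List three things a reader should verify before trusting this claim:\n{answer}")
print("\nWhat to verify:\n", checks)
```

[The point of the sketch is the habit, not the tooling: whatever you generate, pair it with an explicit list of what a reader would need to verify before trusting it.]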
Dr C (06:26):
Wow.
So that's interesting, when you say play with the models, right? Get to know them, get to know what they can do. Just recently, I believe last night, I received an AI prompt that we've been using through our sorority. And I don't know if you saw the pictures that I put up this morning, and I said, it's so scary: I literally uploaded
(06:47):
a couple pictures of myself, told it I wanted to be in this nice blue flowing dress, and, you know... I knew what was going to happen, because I saw my other sorority sisters do it, and I was like, oh, it's pretty cool. And it's kind of all over, you know, social media right now. But when I saw the picture pop up, I was like, uh oh.
(07:09):
Right.
I felt scared, yes, excited, scared again, right? Because I'm like, wow. I mean, if you zoom in, you can see certain features aren't, you know, right. But you have to zoom in, and you have to take time to really look at the picture, right?
(07:31):
And it made me think. And then everybody on Facebook: oh my god, it's so gorgeous, so gorgeous. I'm like, I told you all in the comments that it's an artificial picture. And everybody's like, oh, you're gorgeous. But it's because it's artificial intelligence, right? So I'm like, I don't know how to take this. You're telling me I look beautiful because AI stepped
(07:52):
in, or am I beautiful without AI? So that's the funny part, but I definitely felt a sense of hesitation. Like, I need to start looking at pictures a little bit more closely, right? Not just, you know, scrolling and thinking that everything I'm seeing is real.
Yeah, and it brought me back to thinking of when I was a kid,
(08:16):
you know, hearing: don't believe everything you hear and only half of what you see, right? Right, right, yeah. So I feel like that lesson is something that I am now continuously telling myself again.
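[Editor's note: one small, programmatic version of "looking at pictures more closely" is checking whether an image still carries camera metadata. This is a hedged sketch using the Pillow library; the filename is a placeholder. Absence of EXIF proves nothing, since platforms routinely strip metadata, and its presence doesn't prove authenticity either; it's just one more signal alongside zooming in.]

```python
# Hedged sketch: print whatever EXIF metadata an image file carries.
# Requires Pillow (pip install Pillow); "suspect.jpg" is a placeholder name.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("suspect.jpg")
exif = img.getexif()

if not exif:
    # Common for AI-generated or re-shared images, but not proof of either.
    print("No EXIF metadata found.")
else:
    for tag_id, value in exif.items():
        # Map numeric EXIF tag IDs to readable names where known.
        print(TAGS.get(tag_id, tag_id), ":", value)
```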
Dr Washington (08:33):
Yeah, you have to do that. You have to continue to learn, you have to continue to wrestle with these ideas. You know, this could probably be a separate podcast, but just to mention: in 2023, I published my book, Simulated Realities, and my goal with that book was to start talking about
(08:56):
what you're talking about right now, which is creating these realities of ourselves that aren't based in anything real.
And what that does, and how that influences our behavior. So there's this symbiotic, reactionary type of thing
(09:18):
happening in our culture.
I pull on Jean Baudrillard's ideas of simulation and simulacra in the book. And I rewrote the book. I haven't republished it.
Of course you did.
The reason why I haven't republished it is because I keep
(09:39):
adding stuff to it, because things are moving at such a fast rate. So they just released Sora 2, which I touch on in my book in 2023, before this kind of video technology was in full swing.
And I couldn't have imagined in 2023 the realness of these
(10:05):
videos.
It's getting gravity right, it's getting water right, it's getting human movements and things like that. And you can see some of the places where the technology doesn't quite understand how reality works, but it's getting closer, and pretty soon it really
(10:26):
will be indistinguishable between what reality is and what's simulated, and that implies some simulacra in there: making an imitation of something that doesn't exist. That's a huge thing for our society to wrestle with.
(10:46):
And it's just funny that Baudrillard was talking about this in the '80s, you know. So, yeah.
Dr C (10:54):
That's insane.
So much, so much. And it's so funny, because I was listening to a speaker, an AI guru, and he mentioned: while I'm here educating you all on the latest in AI, I'm falling behind on the latest in AI. And it took me a minute. Like, wait, what?
(11:17):
But then he explained it: it's moving so fast that literally what you knew yesterday is almost obsolete next to what you know today, right? It's like you're retraining every day, because it's moving so quickly.
So thank you for talking about that, because I think one of the things we don't realize is that we need to continue to tweak our strategies as the technology grows,
(11:41):
right?
There'll be new techniques to figure out what's real and what's not. So we'll have to bring you back every six months so that we can get the latest in AI.
Yes, it is too fast.
Well, we're gonna take a very short break. When we come back, we'll talk about how to stay resilient
(12:03):
and hopeful.
So stay with us.
Welcome back. I'm with Dr. Jerry Washington on the Life Podcast. We've learned about the problem and discussed actionable habits; now, for our final pillar, Evolve. Dr. Washington, with this constant flood of AI falsehoods, how
(12:28):
can we stay resilient and hopeful, as individuals and in our communities?
Dr Washington (12:34):
This is gonna sound cliché, but: believe in humanity. There are some really great authors who have tried to frame what's going on and, if nothing else, to give people a way of thinking
(12:55):
about it, and those thinking tools can help. So we know from ancient times, with the printing press and all of these different new technologies, that when they came about, people said it was gonna be the end of society.
(13:16):
I mean, think about the story of the Luddites, right? The Luddites did not want this new technology that was gonna take the jobs. But we adapt, we continuously adapt. Adaptation comes with time, though. So adaptation plus time, right?
(13:39):
Plus learning. And you and I both know learning is about change, right? Change in some type of behavior. And so we just need to continue to learn, and understand that this technology isn't deterministic. It's not doing it.
(13:59):
We are creating it, we are guiding it. And just like any other social tool, because that's what it is, a social tool, something that we're using to mediate between each other, that means we can shape it socially. That's why I write about it, that's why I continuously try to learn about it: so that I can shape how this tool moves
(14:24):
and how it's becoming whatever it's going to be. And you just have to be involved with that.
Not everyone can do that, right? Because there's typically this either-or framing, and I don't know why we do that. My brother says we do that because we have two arms and two legs, two eyes; we only see things in binaries. But we have to rely on
(14:49):
some people, right?
The experts out there who are pulling this apart and trying to understand it in different ways. Support those folks, go to their workshops, read their books, do all of those things so that we can all stay abreast, because knowledge is a social construction. And as long as we continue building knowledge, we will
(15:12):
continue to thrive as humanity.
Dr C (15:15):
Well, I appreciate that. Yes. I wasn't expecting... I definitely wasn't expecting not to lose hope in humanity, right? But I love that. I love that, because if we're not relying on each other, we will have bigger problems, as we see sometimes, right? When we turn on each other versus working together.
(15:37):
So before we close, I just want to first of all thank you again for coming back. And like I said, in a couple months we're gonna have to have you back so we can get our AI update from the pro. But I just wanted to recap key takeaways for our listeners, right? So under the Learn pillar, we discovered that AI acts as an
(15:57):
amplifier of information, and it could be mis-, dis-, or malinformation, through tools like deepfakes and algorithms, directly impacting our mental well-being.
And then we also got inspired by hearing stories of transforming disinformation into a learning opportunity, and really providing
(16:19):
communities an opportunity to turn a crisis into an empowering opportunity, right? To really think about digital literacy and connection.
And then under Flourish, you gave us some great... oh, I can't talk... some great practical habits that we should continue to do.
(16:41):
And I know even in our first session, you talked about the three-second pause, right? And really checking our sources and diversifying our information diet. And now with artificial intelligence, we need to stay focused on continuing to debunk and not just taking things at face value, right?
So building that muscle. I kind of refer to it like working out, right?
(17:04):
And trying to get buff. Every day you're lifting, every day you're lifting more and more and more. And so you're working that muscle, you're developing that skill. And that's the same way I see this as well.
And then finally, in Evolve, you said: do not lose hope in humanity, right? So, how can we be active agents of clarity? By making sure that we're doing our own work, but also finding
(17:26):
hope and resilience in our community and in our own mindfulness and our own mindful actions.
So, did I get that right?
Dr Washington (17:33):
You got it right.
You said it better than I did.
Yes.
Dr C (17:36):
Is there anything that you
wanted to add that I may have
missed?
Dr Washington (17:40):
You know, again,
I think driving home that this
technology is not deterministic; it's not on its own trajectory. We have agency, just like we see in other social movements. We can shape where this technology goes. If we don't want it to do certain things, then we need to
(18:00):
hold people accountable.
Dr C (18:03):
Safeguards.
Dr Washington (18:04):
Yep.
Dr C (18:05):
Yeah, yeah.
Well, thank you so much, Dr.
Washington.
I appreciate your time, your knowledge, and really your spending time with us to inform and educate us as well. So I wasn't joking: we'll definitely have you back, because I'm sure three months from now we're gonna have something else, more AI things, right?
(18:27):
Thank you.
All right, have a great one, and we look forward to having you back.
Dr Washington (18:32):
Thank you.
Dr C (18:34):
So, to everyone listening,
the digital world does not have to be a source of stress. With intention and awareness, you can navigate it with confidence and peace. Stress-free. Until next time: keep on learning, stay inspired, continue to flourish, and never, ever stop evolving. I'm your host, Dr. C, and this is the Life Podcast.