Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:00):
Okay, photographers,
this message is for you.
There's a lot of education in our industry, as I'm sure you know, but I do something a little differently with my group coaching program that I call Elevate.
And I want to talk to you about it for a second, because we are re-enrolling for this next round.
It's a six-month program.
(00:21):
And when I tell you there's really nothing else like it, here's what I mean.
Yes, we do have Zoom calls every month, and there's a portal with a lot of information for you online that we add to as our time together progresses.
But what makes this really unique is, first of all, I am
(00:43):
certified in life coaching.
So we really do take into account the entire person that you are, and not just the photographer that you are.
So we talk about mindset, we talk about abundance, we talk about manifestation, and then we also absolutely talk about strategy and money and profit and making sure that you are
(01:05):
running your business as efficiently as possible.
So you get access to me via Voxer, which is a voice texting app, pretty much 24-7.
I don't know any other photography educator out there who is this accessible to their coaching students.
It's kind of unheard of.
But here's the thing (01:21):
I created this program, and all my programs, for the photographer that I once was, for that version of myself who really struggled and needed this so much, but couldn't find it.
It didn't exist.
So if you're looking for not just community and accountability, but also serious education and ongoing support
(01:44):
and mindset from someone who's actually certified and understands how to pull out of you what you can't do on your own, then you need to go ahead and check out Elevate.
It's by application only.
So you'll need to go ahead and hit the link in the show notes.
And then someone will get back to you and let you know if we think you're a good fit.
And we may even have to hop on a quick call to double check.
(02:07):
But I look forward to hearing from you and working with you to build the most profitable and most fulfilling business that you possibly can.
SPEAKER_02 (02:19):
So both of these attributes are perfect for AI, for a, for a chatbot-style system.
One that, first of all, says, don't worry about it.
I'll drive the car, right?
I'll tell you what to do, or, or what you should conclude about this.
Or, you know, young people these days are coming home from a date and running it through AI to say, here's what he said.
(02:41):
What, you know, what, what should I take away from that?
Like, we're built to outsource our processing wherever we possibly can.
And so the, the moment that we're in now that worries me so much is this is not a product that, first of all, was built in universities or even by the Department of Defense.
(03:02):
It was built by for-profit companies.
And they are, as you know, having to spend an incredible amount of money to build these systems.
They're going to need that money back.
SPEAKER_00 (03:10):
Welcome to Tried and True with a Dash of Woo, where we blend rock-solid tips with a little bit of magic.
I'm Renee Bowen, your host, life and business coach, and professional photographer.
At your service, we are all about getting creative, diving into your business, and playing with manifestation over here.
So are you ready to get inspired and have some fun?
Let's dive in.
(03:32):
Okay, friends.
So buckle up, because my guest today is a fascinating one.
You know how we talk here about the dance between the logical and the magical or the unseen, how our thoughts and our emotions and our energy shape what we create?
Well, what happens when something else starts shaping it too?
Something that knows us almost too well.
(03:54):
So my guest today is Jacob Ward, and he's been reporting on the intersection of technology and human behavior for more than two decades.
This guy's, this guy's real smart, you guys.
Okay, so just, uh, hang on.
He's written for The New Yorker, The New York Times Magazine, and was the tech correspondent for NBC News, among other things.
(04:15):
His book, The Loop, actually predicted the AI chaos that we're living in right now.
Everything from the rise of ChatGPT to how algorithms are rewiring our choices and even our psychology.
What I love most about his perspective is that he's not just here to fearmonger.
Okay, we definitely dive into a little bit of the, uh, the alarming aspects of AI, but he's also here to help us see
(04:39):
clearly.
He understands the behavioral science, uh, neuroscience, the manipulation behind the scenes, behind the screens, and how our awareness, that human superpower, might just be our way through this.
So today we're talking about what AI is really doing to our brains, and how we can stop living on autopilot, and what it
(05:02):
means to reclaim our creative agency in a world where even our attention is for sale.
This is one of those conversations that is going to make you think differently about your relationship with technology, creativity, and maybe even yourself.
You guys know I love talking about AI, and I've talked about it quite a lot over the last three years, specifically
(05:23):
ChatGPT and how I use it.
Jake's work reminds me that the most advanced technology that we'll ever use is still the human brain.
And the most powerful code that we can write is consciousness itself.
Let's dive into this conversation.
Jacob, you wrote a book before ChatGPT even came out, and you
(05:45):
basically sort of predicted the exact, uh, AI storm that we're sort of living through right now.
So before we dive into all of that, I wonder if you could just take me back to that moment.
SPEAKER_01 (05:55):
Sure.
SPEAKER_00 (05:55):
Um, what were you seeing that most people weren't?
SPEAKER_02 (05:59):
Well, I really appreciate this, Renee.
Thanks for having me.
And I, and I, um, yeah, you sort of, you think that you wanna be right when you write a book forecasting a future outcome, but in this case, uh, it turns out to have filled me with enormous regret to have been correct.
So when I say that I called it, I say it with the asterisk of
(06:20):
like, I really wish I hadn't.
Uh, in this case, I had basically two things happening in my life simultaneously that sort of led me to this thesis.
And one was I had spent some time doing a documentary series for PBS called Hacking Your Mind, in which I went on this kind of life-changing experience for me, uh, being sort of the guinea pig of a lot of experimentation, uh, to show our
(06:44):
viewers the cutting edge of behavioral science, basically the last sort of 60 years of behavioral science.
And the whole concept of the show was I would go through these various experiments and tests and experiences and be with all of these cutting-edge researchers to show, through my personal experience, just how, uh, unconsciously and predictably
(07:05):
human beings like myself make decisions.
And going into that process, I really had thought of myself as a very unique and beautiful snowflake, and that I made my own choices and was my own guy.
I came out of that process realizing that I needed to quit drinking, that I was deeply racist and sexist, that all of these things that, you know, I had assumed I was immune to were very much part of my wiring.
(07:27):
And I was learning about all of that at the same time that, in my day job as a technology correspondent, I was encountering company after company that was either hiring behavioral experts and/or, uh, bringing in whatever they could of the, at the time, very primitive kind of AI stuff,
(07:47):
early machine learning or human-reinforced learning kind of systems, to try to analyze and predict human behavior, and then, wherever possible, to shape it.
I was just going through an old email I had in 2017 with this guy who invited me to a dinner party for a bunch of tech people who were specifically trying to create the most, uh, they were trying to take psychological research, psychology and, and
(08:11):
behavioral research, and pull it into how they were building apps.
And they called themselves the behavioral tech group, BTEC.
And I attended this one dinner where these two, these two PhDs out of, uh, USC, who had just finished their PhDs in addiction and learning how addiction works, were basically
(08:33):
sort of loaning themselves out to any company that wanted them, to apply what they'd learned about addiction to app making.
And they called themselves Dopamine.
They showed up about six months later on 60 Minutes as an example of just kind of how unscrupulous these companies are in using our brain's programming against us.
(08:54):
So I just knew, on the one hand, that we were very, very, very predictable, because of my Hacking Your Mind research experience.
And then I was also encountering all these companies so desperate to predict our behavior and to use AI to do it.
And as soon as I learned about transformer models, which are the thing that made ChatGPT possible, I really started rolling in writing this book.
(09:16):
And I thought I was like five years early.
And then the book comes out January 2022, and in November, ChatGPT comes out, and I went, oh my God, here it comes.
And I, you know, if I'd been a good, smart media person, I would have gone really hard at promoting this book.
But I think I was so discouraged and depressed, honestly, by what
(09:39):
I'd found that I withdrew, and I just, I didn't really think about it.
And so one of the reasons I'm grateful to you is that I think I've just sort of summoned the internal fuel to make it possible to kind of reassess this book and, and, and the world that we are now in.
Um, so that was the, that's broadly how I came to this.
SPEAKER_00 (10:00):
Fascinating.
So fascinating.
I mean, and I've, like, full disclosure, like, I mean, I followed you on TikTok probably since, like, three days, like, a long time ago.
Oh, crazy.
Um, just because, hello, love the weirdness.
Like, I mean, it's in your bio, like, you know, all the little different things, and then there's, like, and weird stuff, and, like, of course, human weirdness, yeah, totally and true.
SPEAKER_02 (10:19):
The algorithm has put us together.
That's exactly right.
Yeah, totally.
SPEAKER_00 (10:23):
So, all right, let's now kind of cut to current day, and things are moving very swiftly.
Um, yeah, I mean, I, I, I jumped on that ChatGPT thing like the day it came out, not because I'm super, like, excited about it in general or super techie, but I had this sense of, I need to at
(10:44):
least know about this.
I need to know what's going on.
I want to, I want to know.
I just kind of want to know what's going on.
So, and again, it's probably another reason why you popped up on my, my For You page, right?
Like, you're talking about this kind of stuff is super interesting to me.
So you said that we're facing a social and psychological emergency with AI.
(11:04):
So, what does that really, like, look like and feel like in real life, at least from your perspective?
SPEAKER_02 (11:09):
Sure.
So the thesis of the book, and my thesis all along here, has basically been that AI is the perfect way of making us crazy, to one degree or another.
And, and I'm going to be describing various flavors of crazy that I consider kind of a spectrum.
And I want to be clear that I am not immune to this.
I consider myself to be just as crazy as anybody else in my
(11:32):
interactions with this system.
But basically, my thesis was that our brains, and we can go on and on about this, but our brains are evolved to basically do two things.
One, uh, to, like, basically to shorthand, to
(11:54):
shortcut, to summarize as much as they possibly can.
Our, our brains are not built to process information raw.
Our brains are built to say, oh, I know this story.
You don't have to finish it.
I know how this one goes.
SPEAKER_00 (12:06):
Right.
Patterns.
SPEAKER_02 (12:06):
Right?
Patterns, pattern recognition, is what we're good at.
That's what we do as humans.
It's what enables your brain to, for instance, you know, drive to the wrong place.
You live in LA, you've done this, right?
You've driven, you've meant to drive to some weird new errand, and you accidentally drove yourself to the gym.
Yeah.
Right.
And you're like, oh my God, why am I at the gym?
And then you're like, oh, right.
(12:27):
I unconsciously guided this half-ton vehicle through traffic, got all the way here before my brain, before my conscious brain, even got involved.
And that's because your brain's like, oh yeah, where am I?
I'm behind the wheel.
What time of day is it?
Oh, it's two o'clock.
It's time to go to the gym, right?
Your brain just goes on to autopilot.
That system is a very amazing system that has, in fact, worked to keep us alive
(12:48):
for a long time.
So that's one attribute of the brain.
The other big attribute of the brain is that because it doesn't want to make its own choices, and it loves to, uh, just sort of assume that it knows the answer already, it loves to outsource decision-making wherever possible.
And psychologist after psychologist will tell you, as you know, I know you study psychology, that, for instance, we are constantly outsourcing our decision-making to our
(13:10):
environment, right?
For me as a drinker, former drinker, the reason I don't go into bars anymore is because the beautiful dark wood and the smell of the place and the squish of the bar stool and all of that stuff, I can feel it literally as I say it to you, tells my brain, time to drink.
This is your moment, right?
And, uh, because my conscious brain is absolved of having to
(13:33):
make any choices there.
My senses tell me what to do.
So the same thing is true, so both of these attributes are perfect for AI, for a, for a chatbot-style system.
One that, first of all, says, don't worry about it.
I'll drive the car, right?
I'll tell you what to do, or, or what you should conclude about
(13:54):
this.
Or, you know, young people these days are coming home from a date and running it through AI to say, here, here's what he said.
What, you know, what, what should I take away from that?
Like, we're built to outsource our processing wherever we possibly can.
And so the, the moment that we're in now that worries me so much
(14:15):
is this is not a product that, first of all, was built in universities or even by the Department of Defense.
It was built by for-profit companies.
And they are, as you know, having to spend an incredible amount of money to build these systems.
They're gonna need that money back.
And what we're seeing is that they are very quickly trying to make these systems as easy for our brain to outsource decisions
(14:40):
to as possible.
They want them to be engaging and rewarding in this very deep way.
So we are forming an attachment to these systems.
OpenAI just recently had a video release where they showed, uh, they're describing their costs and their infrastructure plans and their new strategy.
(15:00):
And one of the things they talk about is, we don't just want this thing to be a productivity tool, we want it to be an emotional companion.
A couple of weeks ago, they released a bunch of numbers that showed that there was a, you know, a certain percentage of their users exhibiting full-on signs of mania and psychosis.
People are treating this system as if it's a synthetic soulmate.
(15:22):
Some people are discussing openly their suicide intent with these systems.
And so we're at a point where, you know, and this gets us into sort of a squishy situation.
I'd be curious to hear what you think about this.
But, like, the numbers of people who are exhibiting this kind of openly, uh, psychotic and/or, uh, you know, dangerous communication
(15:46):
with these systems are very small, percentage-wise.
So, for instance, the number of people, um, openly discussing suicide with the chatbot is 0.15% of users, right?
So on the one hand you think, okay, that's a very small number.
And that's a smaller number than the percentage of people in the country who, uh, attempt suicide every year, which is 0.6%,
(16:09):
right?
But 0.15% of OpenAI's users, of which there are 800 million weekly users, means that's about 1.3 million people who, not just once a year, but every week, are discussing suicide openly on the platform.
And so then what I come to is this thing of, okay, well,
(16:33):
maybe that's just a reflection of society.
Maybe that's just a reflection of the numbers that we're gonna see no matter what, right?
But this is a special case, because this is a company that is making an emotional companion.
That's what they say they want to do, and they need you to engage with the system as much as you possibly will so that they can, you know, they can, they can make money, right?
(16:55):
And so that is the emergency that I worry about, that's the immediate sort of emergency I worry about.
Beyond that, the book goes on and on and on, I go on and on in the book, about how, um, there's also just going to be a fundamental kind of de-skilling that I really worry about.
And we're already seeing signs of it, of people not being able to read something as closely.
(17:16):
I was just talking to a professor at Cal, at UC Berkeley, who says that his number one problem with his intro-to-literature class, at the very beginning of college for these kids, is trying to convince them that there's any difference between having read a book and having read ChatGPT's summary of the book.
Right.
This stuff is gonna, I think, do to our decision-making system what GPS did to our sense of direction.
(17:38):
And because, as you know, our brains like to just follow directions and go on autopilot, I just think it really is gonna present a real kind of emergency over time.
It's not gonna be an immediate psychotic-break kind of emergency, but I think it's gonna be an emergency that develops over time.
SPEAKER_00 (17:55):
Yeah.
Now, I always say, all my kids are grown now and are, in varying ways, not really into this whole thing.
Um, and, and very, uh, cautious about it, let's just say, uh, which is interesting.
And they're in their early 20s.
Um, I love that.
Yeah.
SPEAKER_02 (18:14):
But how do you talk about it?
Tell me more about that.
SPEAKER_00 (18:16):
Like, well, it's interesting.
I have three kids, they're all very different, right?
My oldest son has autism, he's 27, he's been building computers since he was 12.
Uh, you know, he's just naturally gifted at that.
Um, ChatGPT has helped him learn code in a faster way than he could ever have learned on his own or in a classroom.
He just doesn't learn that way.
(18:37):
For some reason, code has helped him tremendously, but he is vehemently against the, um, the stealing of art and design to train the model, right?
So he's caught in that push-pull as well.
And plus, atheism, it's a very interesting way in which he, you know, views the world.
So we have a lot of conversations about that.
(18:58):
And then my daughter is a grad student and highly academic, always has been, like, is very smart on her own, and, and uses it, you know, for efficiency's sake and things like that.
But, you know, that's kind of the level, uh, that she's sort of, like, open to using it.
And then I have a son who is 25 and a musician and absolutely
(19:21):
not, refuses to use it, hates everything about it, thinks that it's the demise of our entire society.
Um, you know, like, very, very, very against it.
And we have had some very interesting conversations about that too, because he knows I use it for business, for productivity.
I talk about it.
Um, so I think that this conversation, it's a very
(19:42):
important conversation just in general, right?
Because, as I've said many times, it's not like I'm, like, pro-AI all the way, but it's here.
So what are we gonna do about it?
Right.
And so I always tell people, like, if I had little kids right now, like, this would be, like, the most important conversation
(20:04):
that I would be having with them, because we can't control it necessarily, it's out of the box.
It's not going back in the box, per se, right now.
We'll see what happens.
But how are we, how are we teaching young people, especially, not just how to use it, but, like, why, the ethics of it, the digging into it, like, aside from even all the environmental factors and everything that we've talked
(20:24):
about, but everything that we're, we're talking about here.
Um, and this loop of, you know, I call it the yes-man loop.
You've broken it down into, like, the psychosis, really.
And, and that is not a small number.
Yeah, it might seem like a small number, like, that is, and that's the most extreme group, right?
SPEAKER_02 (20:42):
I think that any of us who has, for instance, entered into the misconception that this system somehow knows us, yes, or that it can reason or understand
(21:04):
us, when what this system is, is, is literally a parrot that is every so often being told, uh, reinsert these names and this information back into the conversation.
It, it's, it's a mimic.
It does not understand you, it doesn't know you.
Exactly.
And yet, you know, I can't help but, uh, but perk up, I mean, I, so I use these, of course, these systems too.
And I can't help but perk up when it says, you should absolutely, you know, weave this into your work as a journalist, blah, blah, blah, blah, blah.
And I think, wow, geez, it really understands me.
(21:25):
You know, then I have to slap myself and be like, oh, right, no, no, no, but, you know, this is, this is, this is marketing.
But so, so, so I really, I love what you're saying about the variety of reactions that your kids have to it, because I, I think it's important here to say, and I've been trying to, I've been trying to get better and smarter about how I talk about this, because rather than just being alarmist,
(21:48):
although I am alarmed, right, but rather than just being alarmist, which makes people, I think, feel like they're just helpless.
There's nothing to be done here.
I don't want to give that impression.
I think there's lots to be done.
Yeah, but not all of it, I think, is up to us individually, and I don't think most of it should be, but there are some things we could do individually.
But, but for me, I love, I love starting from the allergy that young people tend
(22:12):
to have around it.
Like, I love, there's a term going around among kids younger than yours, even, um, clanker.
They use that term to describe a piece of AI, or an older person who's using it too much.
Yep.
Don't be a clanker, look at this clanker, you know.
Oh, that's just clanker content, you know, that kind of thing.
I, I just, it shows that they're, like, seeing it clearly in a way
(22:33):
that I really like.
So there's a cultural allergy that I think is awesome, and that is good, like you and me thinking about our grandparents smoking cigarettes at the table, kind of thing.
You know, like, what were we thinking?
What were they thinking?
You know?
Yeah.
So that's great.
Um, I, I do think, though, that kids also see things clearly in
(22:56):
a way that makes it really hard to make the case that you shouldn't use this system to shortcut your education, for instance.
Right.
You know, I was talking to a valedictorian from Texas once, interviewing this kid, a high school valedictorian, and, and he said, it was quite chilling.
He was like, you guys taught us that an education is about getting a Tesla and a house.
(23:17):
So why wouldn't we use whatever we can to get to those goals?
Right.
He's not, he's not thinking of it as, like, we're gonna make a better democracy with it through an informed citizenry.
It's a transaction, in his view.
And I think it's hard to blame a kid for, for having that perspective on this stuff.
So, but there is also, right, like, any research about
(23:43):
what young people want, right?
And you and I are, are, uh, older people on TikTok.
Um, you know, like, the thing that people respond to on TikTok is authenticity.
Yeah.
They want to see you struggle with an idea.
They want to see the ugly aftermath of your, of your catastrophic date.
They wanna, you know, they wanna see the raw feed of humanity.
(24:05):
And so I think there's something about the, there's something about their sensitivity around that stuff, combined with some, you know, practical advice on, on, for instance, don't let this thing passively entertain you.
SPEAKER_01 (24:23):
Right.
SPEAKER_02 (24:23):
You know, this is, it's, this was the lesson of social media.
Like, go use it to go get what you need, and then get out.
Don't let it be in the background, bubbling away, uh, hitting you with compliments or making porn for you or whatever else it's gonna offer to do, because it is gonna offer to do all these things.
So there's a little bit of personal friction we're gonna
(24:44):
have to build into this stuff.
But I, I do think that, like, this is the weird, um, Renee, this may be too weird to get into here, but I, I, I was just at an event the other day where, um, a couple of, of, uh, people from the Chinese consulate were in attendance at this gathering
(25:08):
that I'd been speaking at.
And they came up to talk to me, and, and they were very interested in my book.
And my book did pretty, I wouldn't say did well, but it was reprinted in China.
There was an edition of it that went out in China.
And I realized that, like, I've bumped into this realization that, like, you know, China is a place that is very actively regulating this stuff, very actively regulating kids'
(25:28):
exposure to it.
You know, we in this country believe that the market's just gonna kind of work it out.
SPEAKER_00 (25:34):
Yeah, well, we know why that is.
SPEAKER_02 (25:36):
I mean, it's, yeah, well, they're doing it in part for control, right?
For political control.
SPEAKER_00 (25:41):
There's, it's superintelligence as well, and the money and everything that kind of comes with it, right?
SPEAKER_02 (25:46):
Yeah, yeah.
But, but their number one thing is social control.
They don't want the country to come apart.
And one way that they avoid that is making sure that they limit what kids are supposed to do in China.
And that's bad in some ways, you know.
In a lot of ways, I would say, you know, you're under an authoritarian regime.
Like, there's a lot of terrible things about it.
But I can't tell you how often I find myself in
(26:08):
conversations like this thinking, we need to, you know, we need to regulate what kids see.
And we need to, we need to, you know what I mean?
We need to keep their brains off this stuff, you know, in a way that I think a Chinese Central Party person would be like, yeah, hell yeah.
And so I, I think, I don't think we, in the United States, I
(26:28):
think this is new territory for what we want to tolerate from the open market, from the free market.
And that's, that's, I think, one of the big challenges in front of us right now.
SPEAKER_00 (26:40):
Oh, for sure.
I a hundred percent agree with that.
And what I kind of meant by that is that we're not really doing any of this regulating, because, like, these for-profit companies, they, you know, the person who wins, right?
Like, that's, it's capitalism, it's just what we're all about here, right?
Like, um, we know the end game, like, and why it's not being
(27:01):
regulated here, because of everything that's at stake for them.
Um, which is why I think we're, we're seeing so much of this.
And I agree, I feel, I feel like this brings up a very interesting point, because, yeah, I think that, you know, the whole freedom of speech versus, I think maybe we should put a little bit of control on this, guys.
(27:22):
You know, like, we're, we're in the wild, wild west of all of this right now.
And I think that we will look back on this in 20 years and be like, who knows?
But, like, there, there is this sense of, um, extreme, extreme,
(27:42):
like you said before, sort of, it's, it's just very alarming.
It's very alarming at how fast it's going.
And especially for, like, people in my age group.
Like, I'm, you know, I'm in my 50s, right?
And so this is one of the reasons why I wanted to be, uh, an early adopter of it, let's just say, because there are so many people in my age group, and especially, like, a little bit
(28:04):
older than I am too, who are completely, like, just closing their eyes to it.
Like, they don't even want to hear about it, they don't want to talk about it, which I totally understand.
But at the same time, like, I feel like we all need to be having difficult conversations about it just in general.
And, you know, a friend of mine not long ago, I, I was telling her about how, you know, I was using ChatGPT and how it was
(28:28):
helping with efficiency and this, that, and the other, like, just things behind the scenes of my business, um, mainly.
And because that's really what I use it for.
And she started using it.
And so within a few days, she's like, this is amazing.
Like, now she was using it as a coach, basically.
And like this, this, this partner, this emotional partner,
(28:51):
like you're talking about, uh, giving this person a name,
asking this, asking them to, like, show a picture of what they would look like, and it, like, nailing her exact type of guy.
SPEAKER_01 (29:01):
Totally.
SPEAKER_00 (29:02):
Exact type of guy.
SPEAKER_01 (29:04):
Totally.
SPEAKER_00 (29:04):
And I was like, um, you do realize that this is you, like, this is a mirror, right?
And, like, so this isn't a person, this is not an entity.
And she did, she got that and stopped, and then sort of, like, kind of, like, waned.
SPEAKER_02 (29:18):
Doubled back a little bit.
Yeah, that's good, that's good.
SPEAKER_00 (29:20):
And got real, like, whoa.
SPEAKER_02 (29:21):
Thank goodness she's got a friend, you know, that she can bounce this stuff off of.
This is the thing I worry about, right?
The number of kids who, for instance, won't have that.
SPEAKER_00 (29:29):
She's emotionally stable, just at the heart of it.
Like, you know what I mean?
Like, who's to say?
Like, people, like you said before, who are not, and it will, it will definitely kind of just feed you whatever kind of output it seems you want, whatever's wanting to get you to that next place.
That's right.
So how can we, okay?
So now that we sort of, like, kind of went there, and it is
(29:52):
alarming, and it's super scary when you really kind of dig into that.
Um, how do you break that, right?
Like, like we were talking about before, how do you break this loop?
What does that look like as a society too?
Like you said, yes, we do have some, uh, responsibility, obviously, as parents and things like that, but bigger scale.
Like, what, what's the, what's the answer here?
SPEAKER_02 (30:14):
Yeah, so, so one thing I would just say is that, because this is a, a for-profit company running this stuff, the for-profit industry running this stuff, you can't rely on them in any way to bring in brakes on this, in the same way that, once upon a time, you know, you couldn't have expected a cigarette company to put the
(30:38):
brakes on it, right?
Now, we can argue in good faith whether these are apples and oranges, right?
A cigarette has no practical purpose, right, and doesn't help you with your taxes or anything else.
On the other hand, the argument being made by those companies at the time was that somehow it was, like, good for your throat, or the pause that refreshes, or, you know, that kind of thing.
And if you asked a human being, a smoker, and I was once a
(31:00):
smoker, uh, you know, in 1957, before the Surgeon General came out with the big national warning saying this shit causes cancer and you shouldn't touch it, if you asked somebody in the 1950s, do you like smoking?
They'd be like, this is great.
I love smoking, right?
And the expectation that somehow that person should somehow know better is crazy.
In the same way that, like, for me, as, as I've
(31:22):
mentioned, as a former drinker, when I see on the liquor ads, drink responsibly, I'm like, yo, there's no such thing for me.
And the expectation that I should be able to go into that bar and control myself is outrageous.
That's not how it's supposed to work.
We know the brain outsources its decision-making to its environment.
And so, no, that's not on me, right?
(31:44):
So I do believe there's a little bit of personal responsibility we can bring to this.
I'm glad that you had, for instance, the input with your friend to be able to slow her down on that concept a little bit.
But it shouldn't be up to you and it shouldn't be up to her.
What I think is gonna happen, because for all of its flaws, the open market does produce one really strong guardrail, and
(32:05):
that is lawyers.
They love to sue and they make huge money by suing.
And I think this is gonna be an incredible cash cow for those firms.
And we've already started to see this with social media.
We're gonna see a couple of big social media cases, these are actually state cases, but still big cases, coming to the courts in early 2026.
(32:25):
I think we're gonna learn a huge amount from that.
Um, but you know, when you've got a company that knows that a certain number of people inside their user group are openly discussing, and in some cases being encouraged by the chatbot, to pursue suicide.
Seven cases were filed last week alone against OpenAI
(32:49):
for people whose families say that they were encouraged to commit suicide by ChatGPT.
You know, there's gonna be big lawsuits.
And what I think those lawsuits are gonna start to shift in this country is this: until now, the basis of a lawsuit had typically been either financial harm or physical harm.
That's what we sue about for the most part in this country.
(33:13):
That's how you win: your body is hurt or your finances are hurt.
But I think in the future we're gonna start suing and winning on your brain being hurt.
And we've already seen cases in which this is true.
Manipulative gambling games, gambling companies, in some cases have had to settle for huge amounts of money to avoid going to court.
(33:34):
And I think that these AI companies are going to be in a position where they're gonna be held responsible for a lot of this stuff.
And you know, I was just talking to a guy today who was basically telling me the conversation he's been trying to have with these companies, which is: you should let me, as a mental health researcher, look at your data and help you get ahead of this stuff, because
(33:55):
you're gonna get sued in a big, big way.
And so that's a place that, you know, I know that lawyers have a bad rap, but I'm looking forward to the ambulance chasers getting a hold of this, because it's why you and I aren't smoking cigarettes right now as we have this conversation.
It's why we have seat belts in our cars.
(34:16):
You know, that stuff is the result of legal strategies.
And so I think that's coming.
SPEAKER_00 (34:24):
Yeah, I absolutely agree.
And I do think it's necessary, like you were saying.
And I mean, it's sad that it is necessary.
It's terribly sad, obviously.
But even if we're not just talking about suicide here, which is the
(34:46):
absolute worst, there are other underpinnings too, you know what I mean?
Like it's already doing some other psychological damage in people who are already struggling.
SPEAKER_02 (34:58):
And people who aren't struggling.
There are people, right?
Very otherwise happy and normal people who say, oh, I suddenly became convinced that this fantasy was true, or I became convinced that this system knew me in this fundamental way, or I became convinced that this was an imprisoned friend that needed to be freed.
SPEAKER_00 (35:18):
It's interesting to me to dig into that piece of it just from the psychological point of view.
I mean, like the movie Her, right?
My God.
Like, we're literally seeing a lot of this play out.
SPEAKER_02 (35:31):
And isn't it funny how they use these movies?
So, you know, there's this famous instance now in which it turns out that OpenAI tried to approach Scarlett Johansson to be the voice of their chatbot, and she said no, and then they used her voice anyway, which is pretty spooky.
But over and over again, there are these science fiction references, like Her, and they called the new
(35:54):
infrastructure package around AI Stargate.
Yeah.
Where I'm just like, have you guys watched these movies?
Like, do you know what they're about?
Her? Really?
Have you watched Her?
Like, the point of the movie is not good, right?
It's about loneliness.
The movie's about loneliness, you know?
SPEAKER_00 (36:10):
No, but I think it's an unconscious thing too, right?
Like, that's fascinating.
But the psychological perspective of it, that's where I really, well, I don't like to think about it, but I do think about it.
I like to dig into the pieces of that.
And like you said, someone could be quote-unquote emotionally stable and, you know, be completely not within a
(36:35):
few weeks of using something like this.
It really is fascinating to me how we sort of get there, and the steps that it takes to do that.
So, one of the things that I talk about a lot with people, you know, I work with people to help create automations and systems to make their back ends
(36:58):
basically easier with AI, things like that.
You know what I mean?
Just to take some of that busy work off your plate.
That's one of the things that I'll do with one-on-one coaching.
Um, but the thing that I'm always trying to talk about in those cases is, I really feel
(37:21):
like if we have it and I'm using it, it's my responsibility as well to train it with empathy, telling it, you're going off the rails.
Like, we're not doing that here, right?
Like, literally, I see it as my personal responsibility in training it and making it as human as possible with relation to the
(37:47):
outputs that I want from my AI system, which, as we know, all of this is being used to train these LLMs, right?
SPEAKER_01 (37:55):
Right.
SPEAKER_00 (37:55):
So we can't control what somebody else is training their LLM with, and we can't control what anyone else thinks or feels or how they act, but we can control the way that we use it.
And so how do you feel about that?
Do you feel like that's gonna make a difference?
Do you think that that is something that you do as
(38:17):
well?
What are your thoughts about that?
SPEAKER_02 (38:20):
Well, I worry, I mean, I think it's gonna make the filter bubble problem much, much worse, because the essence of these products is to be as engaging and frictionless and fun to use as possible.
And so they're not gonna, you know, I was talking to a guy the other day who's got a recovery chatbot he's trying to create, so that people who are recovering from drinking or sex
(38:42):
addiction or whatever else can use this chatbot to get them toward a human sponsor or a human-led recovery program.
And so with the chatbot that he's built, you know, I asked him, well, what's it like to try and use the off-the-shelf LLMs to build that chatbot?
And he's like, well, it doesn't work very well, because they're super sycophantic.
(39:02):
They're always telling you, great job.
This is great, you're doing great.
What a good idea.
And they never leave you alone.
They're always at the end of every single conversation saying, well, what else can I do for you?
How do I keep this conversation going?
Um, and he said, you know, you need a chatbot that cuts you off at a certain point and sends you out into the world, one that continually is trying to kick you off the
(39:24):
platform and put you with a human whenever possible, and one that'll call you on your stuff.
If you say something that is deluded, you want a system that's gonna be like, ah, I don't agree with you there.
Or, you've told me that before, and I think that's not true, right?
It doesn't want to do that necessarily.
That's not in the rules of good product making from a software perspective.
So I said, so how hard was it to tweak those
(39:47):
things?
And he said, it's surprisingly easy to make those off-the-shelf LLMs into a much more therapeutically responsible thing.
SPEAKER_01 (39:56):
Okay.
SPEAKER_02 (39:57):
Which says something about the choices that those companies have made in what they have built, right?
So, you know, I think the idea of each person having their kind of perfect little filtration system for the world is gonna, on the one hand, be a very comfortable thing for people.
(40:18):
And on the other hand, it's gonna make it really hard for two people to agree on the basic facts of reality.
We're each gonna be in our own little weird filter bubble.
You know, you think the social media filter bubble was a problem.
I think this is gonna be a much, much deeper difficulty.
So, you know, one of the things that these companies don't want to do, don't think about doing, is being what they
(40:43):
would call paternalistic, right?
They don't want to tell you what reality is.
They want you or the market or whatever to figure it out for yourself.
So their instinct is not to jump in and say, you know, it seems based on what you're asking me about that you are pretty thoroughly addicted to cigarettes or porn or whatever else.
(41:04):
Let's think about how to get you off that.
You know, that is not necessarily how these systems are designed to respond.
They're designed to help you, to in some cases facilitate your addiction.
Right.
These companies don't want to be in the business of trying to control your behavior or shape your behavior, right?
Except inasmuch as they want you to keep going with the
(41:26):
product.
And so, I don't know, the question is, what do I think about the individual personalization of these bots?
I think that it really works contrary to the spirit of the Enlightenment, which was supposed to be that you can get educated in, you know, some of the big abstract truths of the world and then share that expertise with other people.
(41:48):
You know, I think we're gonna be in a world in which, instead, everyone's just looking so inward and just relying on these systems to tell them what's up.
I really think that's a real problem.
And there's certainly nothing in our legal system currently that prohibits that or in any way
(42:09):
shapes that.
So, you know, I just want to say here, though, because I know, Renee, that I am like a total bummer on this topic.
I ruin a lot of parties when I talk about this stuff.
But I just want to say this thing I was saying before, like, I'm trying not to be alarmist.
One thing I've learned is that I don't want to be so good at articulating the
(42:29):
problem that I'm inspiring people to think it's hopeless.
The thing that I want to get into, and the language I'm trying to get better at, is the idea that our brains are incredibly beautiful and special things.
And that it is possible, I think,
(42:49):
even with a system like this, to amplify the best parts of who we are.
Um, you know, you can say to a system, I know it's better to be with people.
How do I do a better job of that?
Right?
You can ask an AI system to push you in directions that you wouldn't naturally go, or to help you brainstorm ideas
(43:11):
about challenging yourself or expanding yourself a little bit.
And so for me, there's a capacity that this thing has to further scientific advancement and to give you a little bit of support when you don't have it, and all that stuff.
But I think we've treated the brain as if it's just an endless resource that
(43:36):
can take anything you throw at it.
And I think we need to start thinking about our brain as a really special and vulnerable thing and protecting the best parts of being human.
unknown (43:47):
Yeah.
SPEAKER_02 (43:47):
So I'm kind of workshopping that with you, because I haven't fully fleshed it out, but I'm trying to get toward this idea of, how do we stop just giving away the best parts of who we are, and start valuing them in some way such that we protect them against a for-profit company's efforts
(44:08):
to get you to sort of hand that over to their product?
SPEAKER_00 (44:11):
It's a concern for sure for me as well.
I think about that a lot.
I think about this whole conversation about the outsourcing of humanity, really.
And I mean, everything is just moving so quickly.
I feel like a lot of people don't even know what to do with
(44:31):
the information, though.
That's really kind of where we're at.
And it's like, well, we can't keep up, so why bother?
Sort of thing.
Um, so for people who feel like that and feel sort of hopeless, right?
SPEAKER_01 (44:46):
Right.
SPEAKER_00 (44:46):
Well, first of all, I do think that we're probably going to see, I think we're already seeing it, but I think we are gonna see a very anti-AI thing arise.
We're gonna really start to value things again.
Like, you know, that's how it always happens if you look back through history.
And I do really believe that we're sort of on the verge of a different kind of renaissance, a
(45:08):
really interesting one, and maybe not right away, but we're getting there.
But for people who are really overwhelmed by this, what would you say to them, as far as, where are maybe places and spaces or things having these conversations that are maybe, well, maybe they are
(45:31):
triggering, but maybe they are more enlightening?
And how can people maintain their humanity in this race?
SPEAKER_02 (45:42):
Well, so one thing that I try and remind everybody all the time is just to give yourself some grace around this stuff.
Like, the degree to which you are made to feel in our modern media environment as if you're somehow behind and need to, you know, catch up is a construct of people trying to
(46:04):
make money off you.
And you should just put that shit aside.
Like, that is not how it should work.
For this documentary series, I had this amazing opportunity to go to Tanzania, and I got to spend about a week with this tribe in Tanzania that lives the way we all did, like, 60,000 years ago.
They're called the Hadza, and scientists
(46:26):
love to study them because they live like we all did once upon a time.
They're nomadic, they have no last names, they have no concept of marriage, they have no property, you know, they live in this sort of primitive way.
And one of the things that really grabbed me when I was with them is they have no word for a number larger than five.
(46:49):
Because why would you need that?
Right.
If there's more than five people around the campfire, then that's a lot of people.
If I'm gonna have to meet up with you in one moon, two moons, five moons is a lot.
Like, that's way out there to keep track of, right?
Beyond that, that's too much to even consider, you know?
And there's no property, so there's no, like,
(47:12):
well, five for me, five for you.
They don't do that.
So I just sort of think that that's the natural condition of human beings.
Now, I don't think we should go back to the natural condition of human beings in all these other ways.
This is a really hard life, and women die in childbirth.
It's tough.
Like, I don't mean to suggest that nature is the way to go back to, you know, or whatever.
I'm just saying our brains and our best qualities aren't about
(47:38):
keeping up with everything.
What they're about is connecting with one another.
And, you know, I have a podcast called The Rip Current, and one of my guests was a guy named David J, who's a sort of specialist in friendships.
And he has this wonderful thing about the difference between a good and a bad friendship.
He doesn't say good or bad, but he's basically saying
(47:58):
the difference between, like, a colleague you're kind of friends with for professional reasons and someone who's a true friend.
And the difference, he says, is that the professional relationship, the one that's just sort of for your purpose, for your benefit, is one in which there will be no surprises.
You know what's coming.
You know exactly what's gonna happen in that conversation, right?
(48:19):
Whereas a real friendship is full of surprise.
You don't know what's coming, you don't know what's gonna happen.
And to me, there's something about the preserving of the friction of life and the surprise of life that I want to keep leaning into.
For me, it's like there is just
(48:41):
enough surprise, it feels like, in spending time with an LLM that it scratches a little bit of that itch, but it's not the stuff that our circuitry really needs to thrive.
That stuff involves totally pointless stuff, like taking a walk with a friend, you know, totally pointless stuff like drawing, even though you can't draw, you know.
(49:04):
And so there's something about doing illogical things that involve a huge amount of just pain-in-the-ass friction.
No, for sure.
That we have to cherish, you know, and protect.
And that, you know, it's full of privilege that I get to say that.
People are working three jobs, you know.
(49:24):
Half of the country has 98% of the wealth.
Like, there's some problems, you know, and I recognize not everybody's got time to be thinking about my lofty thoughts on friction.
But let's just remember, that's what your brain really thrives off of.
SPEAKER_00 (49:41):
Yeah, it's what really being human is about, you know, that part of it.
I mean, look at all the studies that show, with the elderly, what keeps you alive the longest isn't the best diet, it's your human interaction.
SPEAKER_02 (49:53):
Yeah, that's right.
And gardening.
Yes.
Yeah, very pointless gardening.
You know, it's great.
So for me, I just think, first of all, don't let the world convince you you're not doing enough, because you're doing more than anyone in the history of humanity has ever done.
The number of people you're in touch with, the amount of things you're doing, the distances you
(50:17):
travel every day.
I mean, it's crazy compared to what humans are built for.
So give yourself some grace about that stuff.
And then try to just make a little room to indulge, like I say, some sort of just a little bit of pointless creative friction.
I think that stuff is so important for your soul.
So that's what I try and hang on to.
SPEAKER_00 (50:38):
Yeah, that's really great advice.
I just came back from a retreat.
It was an in-person retreat for an online female entrepreneur group that I belong to.
And, you know, we definitely can strategize all day, every day over Zoom and get all kinds of good ideas.
And that's all valuable.
But being in person, I walked away with just, I
(51:00):
mean, feeling so restored, just, you know, from being in a different place, in Mexico, like, literally in a different place physically, with people who, I didn't know if I was gonna gel with them or not.
There was that friction walking into it, like, I don't know.
I don't know if I'm gonna gel with these people.
(51:20):
I don't know.
I'm not gonna worry about it.
At the heart of it, I think all humans really just want to be, you know, seen and accepted.
So we all have that underlying thing.
And then walking away, after reflecting on it, realizing that most of the things, even the ideas that we came up with and the strategies, that all happened when we were
(51:42):
by the pool.
SPEAKER_02 (51:43):
You know what's so funny about this?
There's a Stanford neuroscientist that I know.
We got to teach a course together, and he studies what he calls the dark matter of human relationships.
And dark matter in astrophysics is all of the energy they know exists in space because they can
(52:06):
measure it, they know it's there, but they can't observe it.
They can't actually point a telescope at it.
And he's describing something similar in human interactions.
And one of the things he talks about is, it turns out, if you take two strangers, let's say you and I have never met before, and you throw us at a task together, and then you take two other strangers and you have
(52:27):
them play a game first and then throw them at that task, the two who've played the game will do vastly better on the task.
Right.
It's the two people who hung out at the pool first, yeah, that are somehow picking up something between each other.
There's a whole chunk of the book that's all about these very magical-seeming unspoken connections
(52:49):
between humans that we can measure, but we can't quite observe.
And you know, he's shown that, like, the difference between me looking at myself on Zoom, because that's totally what I'm doing right now, yeah, versus looking at the camera directly, right?
Like I am right now.
Yeah, there's a huge difference in our degree of
(53:10):
connection from just this four inches of difference, you know.
So to my mind, the thing that's so frustrating to me about people talking about how AI can reason like a human, or it's gonna have universal human values, or blah, blah, blah, is we know so little about how we really connect.
And the idea that they're gonna somehow automate that so you
(53:33):
don't need it anymore is so crazy to me.
So I love this.
So I would just say, you know, if you get the chance to hang out by the pool with somebody pointlessly and do nothing together for a couple of hours, you're gonna be bonding with them in this ancient evolutionary way that, you know, is incomparable.
So cherish those things.
SPEAKER_00 (53:53):
Yeah, we've been here a long time.
So I love that.
Yes.
Um, thank you for chiming in about the numbers.
SPEAKER_02 (54:01):
I'm bad at the good news, and I'm trying to get better at it.
So I really appreciate you pushing back on that, Ren.
SPEAKER_00 (54:09):
No, I just like to see all the different pieces of it.
And obviously I could talk about all of this all day long, but I will let you off the hook for today.
Thank you so much for being here.
Where can people, well, obviously I'm gonna link your TikTok and your book.
I appreciate that.
But where else do you like to connect with people?
SPEAKER_02 (54:27):
Yeah, so I have a podcast and a newsletter at theripcurrent.com, where I talk to experts about the big invisible forces that are working on us all the time.
I'm sharpening up a lot of my thinking about this stuff into a much more specific set of research projects around AI and psychology, AI and risk.
And so all of that will be at theripcurrent.com.
(54:50):
But like you say, weirdly enough, TikTok is my primary audience.
I don't know why young people want to listen to an old dude like me, but for some reason they do.
And so, yeah, ByJacobWard is my venue on all these platforms.
So, Renee, thank you so much for a really thoughtful conversation.
I really appreciate this.
SPEAKER_00 (55:04):
Thank you.
This was awesome.
Okay, so I don't know about you, but I feel like my brain needs a deep breath after that conversation.
Yeah, in a really good way.
Okay.
Jake's work is such a powerful reminder that AI isn't just about technology, it's really about us.
Okay.
I talk about that a lot.
How it mirrors back who we are, our fears, our desires, the
(55:26):
blind spots that we have.
And the only real way to fight back is through awareness and curiosity and conscious choice.
Okay, these are really, really powerful and important conversations to have, regardless of what you feel about AI.
Whether you're an artist, an entrepreneur, or somebody just
(55:47):
trying to make sense of the digital world.
I hope today's episode reminds you that you still hold the pen.
You get to decide how technology fits into your story, not the other way around.
And it's definitely something that is top of mind for me.
I use AI.
And I do believe that there are ethical implications.
I do believe that, like I said on the show, we are in the
(56:11):
wild, wild west.
We're not really sure where this is all going.
I definitely want to stay on top of this.
I want to be in the know.
And so I think it's important that we have these kinds of conversations, where we look at all these different aspects and we use our brains.
So if you want to go deeper into Jacob's work, check out his
(56:32):
book, The Loop, and his podcast, The Rip Current.
All of that is linked for you guys in the show notes.
Follow him on TikTok like I do.
He's really fun to follow.
Very interesting, always insightful.
Follow me on TikTok too while you're at it.
And if this conversation sparked something in you, maybe a new way of thinking about AI or creativity or the stories that
(56:54):
we tell ourselves, share it with a friend who needs to hear it too.
And keep showing up with curiosity and courage, because the version of you who already knows how to stay grounded in this crazy new world is already in motion.
It's already there.
Don't forget that.
Okay.
And give yourself a lot of grace if you feel like this is overwhelming too.
(57:15):
I always want to have these conversations with you.
Reach out to me on Instagram at Renee Bowen.
Shoot me a DM if you want to ask me any questions about this episode or anything.
I'd love to chat with you there.
So have a great rest of the week.
I hope you do something really good for yourself.
Have some good human connections out there.
Love you.
Bye.