
August 13, 2025 78 mins

Angie Martinez is joined by futurist and AI educator Sinead Bovell to answer all questions related to artificial intelligence. Bovell details the progress artificial intelligence has made over the years and the pros and cons of using it. This interview lays out the do's and don'ts of using AI: your AI crash course for novices.




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Please touch one strand of grass. I think we all
need more grass, more outside, and I think that's hopefully
the goal with some of this tech for some people.
Some companies are building it so that we spend more time
with it. But if I could paint a vision of the future,
it's that we spend less time connected to devices in
fake worlds and more time within the worlds that we

(00:20):
actually want to be within.

Speaker 2 (00:27):
Okay. Sinead Bovell is here with us today. In real life,
she is a Canadian futurist. She's an AI educator, founder
of WAYE, which is Weekly Advice for Young Entrepreneurs. It
is a platform that prepares youth, especially from underrepresented communities,
for future technology. You've delivered eleven talks at the United
Nations and currently serve on the UN's AI advisory body. That

(00:51):
is impressive.

Speaker 1 (00:53):
I'm an advisor to their advisory body.

Speaker 2 (00:55):
Explain that to me.

Speaker 1 (00:57):
So they have a formal advisory body that has
representatives from different countries, okay, and then sometimes they'll convene meetings to
seek advice on specific topics, like longer-term AI
risk or the future of work.

Speaker 2 (01:09):
So a couple of things. Guys, safe to say, if
something is happening in AI, you either
know it, have researched it, or have inside information on it.
And you've been doing this work a long time now,
a long time.

Speaker 1 (01:23):
I think I held the first talk on AI in
New York eight years ago. Wow, and now here we.

Speaker 2 (01:29):
Are, here we are. So the reason I wanted her
on IRL. You're a different guest for IRL. We don't
normally do this; we normally do insight into humans and our
lessons in life. But I think that this is a
really important conversation for us to have here, because it
does inform our real life. And if it hasn't yet,

(01:50):
it will soon. And the way we met was dope.
We met at Questlove's yearly holiday party. He
does something called Black Elephant. So for Black Elephant, you have
to bring a prize, like a grab-bag prize, but
it has to be something that's unique to you. So
my offering was, I'll do a legacy interview for you.
That's what I offered. Everybody brought something to the table,

(02:11):
and then at the end, everybody's trying to get the gifts,
and your gift.

Speaker 1 (02:15):
Was an AI consulting course for somebody.

Speaker 2 (02:19):
So interesting. I was at the party, like, oh, I
want an AI consulting course.

Speaker 1 (02:23):
So people are fighting, trying to trade, and I'm fighting
for yours too. It's Hunger Games. He calls it the elephant thing.
It is Hunger Games, Winter Edition.

Speaker 2 (02:33):
I got a piece of art that day that somebody
made, which was great. But I thought it was so
interesting. Like, number one, why did you want the
interview so bad? I thought maybe it was for somebody
in your family, and you said to me, no,
there's so much that people need to know that I
just want to share this information.

Speaker 1 (02:51):
Yeah. AI needs to become a part of our everyday life,
because that's the role it's going to play. So it
needs to become a part of popular discourse and just
general culture so more of us can kind of participate
in how this technology moves.

Speaker 2 (03:03):
Yeah. I didn't win the prize for you. I didn't
win the AI consultation.

Speaker 1 (03:10):
That was so sad. But you generously gave me an
interview after that. You're game. Yes, and now we're here
and now it's all happening.

Speaker 2 (03:18):
Now, okay, let's get right to it. Lady. First of all,
what is the headline on everything that you teach? You know,
everything that you go around and all these talks that
you do. What is the number one question you are
asked about AI?

Speaker 1 (03:32):
The number one question, I think, one is: are you
as surprised as we are at how fast it's going,
even for the insiders, the people who do the work?

Speaker 2 (03:40):
Got it?

Speaker 1 (03:40):
And two, how is it actually going to change how
we live? Because I think a lot of people see
AI as this chatbot that maybe lives in a computer,
and so it feels like this is something I can
maybe just not use. So why does everybody care about it?
And maybe three, is this hype? Are we overreacting to
this moment? Is this just some tech bros trying to
sell something?

Speaker 2 (04:00):
Yeah?

Speaker 1 (04:01):
Those are maybe the three main questions, but the topics
vary depending on who the audience is.

Speaker 2 (04:06):
Well, for me, I think there's a couple of things.
I look at my mom. I've had this conversation, and
I'm not even a techie girl in that way. I
told you I have like a curse on me. I
swear something could work for ten people and they say,
hey, Ange, try this new thing, and I try it. It
never works for me. I'm like, it doesn't work. I
swear I have a jinx. So I'm not necessarily in

(04:27):
that space. But I find myself encouraging people all
the time who say, I don't do AI. Oh, I
don't use AI. My mother the other day said to me,
yeah, I don't know how to use any
of that stuff. I was like, Mom, stop saying that.
You can if you would like to. And also, things
are happening so fast. My mother's seventy-three,
but she's a very young seventy-three. She travels, she lives,
(04:48):
she has a great life. But I'm like, you don't want
to be left out of life, out of what's happening
in the world. You don't want to be the person
on a rotary phone when everybody else is on an
iPhone or whatever. You don't want to be that.
And so I gave her, like, the immediate kind of
ChatGPT, how-to-use-the-thing rundown. And she's been using
it ever since, which is so great. It was like
one conversation with me, who's not an expert, to let
(05:11):
her feel a little more comfortable. But there is this thing, especially
with the older generation, where they feel intimidated by it if they
have not explored it at all. What do you say.

Speaker 1 (05:20):
To them? Yeah, it is really intimidating. I mean, when
it comes to older generations, they've arguably lived through a
lot of transitions as well. The difference with AI is
it's almost like all of those combined, and within five years,
maybe not fifty. But to older generations, I try to
explain AI like electricity. So think about it less as

(05:42):
a chatbot and more as this kind of infrastructure
that we're going to rebuild our world on top of
and not even think about or see. So right now,
you can engage with the system in your phone, but
eventually it will just be embedded into everything, the way
electricity is, and you don't even think anything about it.
A good thing about artificial intelligence compared to when the

(06:02):
Internet came around, The Internet literally would have made no
sense to people. You're gonna click on this random browser
and then surf the web with what surfboard? What do
you mean? What web? With AI, you're conversing with it
in your natural language. So when older adults or different
communities that kind of seem scared of AI start to
engage with it, then they're like, wow, this is actually

(06:23):
easier than trying to navigate the Internet. I just talk
at this thing. I just talk at my device and
get what I need. But it's overcoming that first kind
of scary, where even is ChatGPT? Is it a person
that I call? Where is it? Once you overcome that,
you realize, Okay, the learning curve isn't as steep.

Speaker 2 (06:39):
What is the first thing? What is the introduction like
to someone like that, to my grandmother or to my
mother or somebody in your family who's just now And
then you say, no, look, so how do you introduce
them to it?

Speaker 1 (06:51):
It's such a good question. So for me, it depends on
the problem that somebody has. So I was home in
Canada with my parents and we had ordered some strange
new stove thing, and we couldn't understand the instructions for
how to plug it in. So I take out ChatGPT,
I take out the video camera option, and I start
talking to it. Are you looking at this box? Yes,
I am looking at it. How do

(07:13):
we assemble this? And I'm just watching my family watch
ChatGPT give us instructions as it's visualizing what I'm seeing
in front of me, and then they're kind of like, ah,
that's a really practical use case for it, or showing
my dad you can point the camera at your bike,
and it can help you assemble it, take a photo
of the instructions to how to build your bike, and

(07:34):
then it will guide you through it. So, these really
practical things. If you come in with, oh, AI is
going to take away meetings and help people build PowerPoints,
That's not necessarily relevant to most people, but if you
tell them you could take a photo of your food
at a restaurant and ask for the recipe, that's a
bit more useful. Or do some research for a trip
that's coming up for you. So you have to meet people,
I think where they are, Yeah.

Speaker 2 (07:55):
For sure. I was trying to think of how I
introduced my mother something. She was oh planning a trip.
She was planning a trip, and I said, let's just
go on. And I asked, what are the top five
hotels that are safe if she was traveling with her sister,
for safe for two women, reasonably priced, and you know,
we just asked these things and it was spinning out ideas,
and I said, what do you want to do in there?

(08:16):
I want the restaurants, I want this, I want that.
I said, can you relist it in a way that
gives this, this, this? And she just was like, you
know. So that kind of was a slow, practical, everyday
way to introduce it.

Speaker 1 (08:27):
Yeah, if you meet people where they are on the
things they're already doing in their life, then they start
to say, huh, I see how I can start delegating
to this system. Yeah, and then it becomes second nature.

Speaker 2 (08:37):
And then also, when I meet women who have careers,
successful careers, and seem to be afraid of
it for whatever reason, I think, you don't want to
fall behind the rest of the world. So
I've had that conversation too. I'm sure you have,
a lot. So there's a lot of resistance, and

(08:58):
for many different reasons. I think that there's a lot
of mistrust. Social media didn't go so well for society,
so I think a lot of us are still kind of
free-falling through it. So I think that there's a
lot of resistance in that sense. And then of course
climate, all of these different reasons. But when you explain
to people that AI, it's not really a technology you
can unsubscribe from, because it is like the internet or electricity,

(09:21):
where it eventually just becomes this new infrastructure our society
gets built on top of. So when that kind of
phrase went around, if you don't really know how to
use AI, that's when your job is at risk.

Speaker 1 (09:33):
There is a lot of truth to that. It will
become the default. So right now we're still in the
kind of everybody's-getting-on-board phase. I can tell you
a lot of CEOs don't really even know what's going on,
but eventually it's going to be the expectation. The same way,
when you go to a job interview, most people don't
ask if you can use a computer or surf the web,
but that used to be a skill set. AI is
eventually going to be in that category. And it can

(09:55):
also give you a lot of time back. I mean,
what are you doing that you can delegate to an intern?
That is the question you should ask in your own life,
and that intern could now largely be an AI system.
And so if you think you're going to be faster
and more productive by giving tasks and things to do
to that intern, that's something that somebody else is going

(10:15):
to do if you don't do it. So there are a
lot of benefits to these systems, and they're also here
to stay, I think. And you can also give better feedback
and pushback when you use the thing that you want
to critique. It's really hard to kind of stand.

Speaker 2 (10:31):
Wait, what do you, wait, what do
you mean?

Speaker 1 (10:33):
So if you find that an AI system isn't working
properly for you, maybe it is giving you biased answers,
or it's giving you a different result because it identifies
your gender. You wouldn't necessarily know those things unless you
engage with the system. So the more you engage and
lean into it, the better feedback you can give it,
and you can kind of inform society on how we want

(10:55):
to design these systems. I agree it shouldn't be up
to seven people in Silicon Valley building the most important
technology of our generation. But it's really hard to weigh
in on a technology and a moment if you have
already unsubscribed from

Speaker 2 (11:08):
It. There is a weird thing, because I keep hearing this:
the more you give it, the better it works for you.
But there are a lot of people who, you know,
are skeptical. Who am I giving this information to? Where
does it stay? Where does it live? Are there security
risks in sharing information? Even I, sometimes, the more I

(11:31):
share, the greater it gets. Especially me, I've been working on
a few different projects, and sometimes I ask it questions
because I'm curious what it has picked up. I'm like,
do you know who you're talking to? And ChatGPT
is like, I do, Angie. Oh yeah. And I said, okay,
do you know what I do? And it said, well, based
on our conversations, I'm assuming that you're Angie Martinez, you're
a radio person that, like, runs her own podcast. Like, it

(11:55):
gave me a pretty good description of who I was.
So part of that was scary for me, but then
part of it is like I feel like, Okay, the
more I lean into it, the more I can trust
the answers that I'm getting from it. But is that, Yeah,
that might creep some people out.

Speaker 1 (12:09):
And I think that that's actually a good skepticism. I
think we do want to have conversations about how much data
we want to give these systems.

Speaker 2 (12:16):
Yes, how much do we want to give it? How much?

Speaker 1 (12:18):
Where do we want to draw the line? And it
is true, the more you give them, so the more
it knows about your life, the less you're going to
have to keep reminding it, and then the more proactive
it can become. So eventually, when these systems become more agentic,
which just means they can take action on their own,
it could remember that you do your podcast on Mondays
and already, kind of, you know, divert meetings to Tuesdays.

(12:38):
It could start doing more things, but then it starts
to know more about your life. So we have to
figure out what are the new data rules that we
want to have in the AI age. And there is
a way around this, right? If your AI system more
or less lived on your phone, so that data didn't
go all the way back to Silicon Valley or to
the cloud, then it's okay if it starts to know
more about you. But if we don't know where that

(12:59):
data lives, then yeah, I wouldn't really be uploading my
blood tests with my name and my address.

Speaker 2 (13:05):
And my Social Security number all of that.

Speaker 1 (13:08):
And yeah, it's funny. I used a fake name with
one of the AI systems for a long time, and it
caught it. So when I asked it questions, I used
the name Lauren. I had just a totally different
name and gave it purposely, to try to confuse it.

Speaker 2 (13:22):
What system was it?

Speaker 1 (13:23):
This one was ChatGPT. And so, I was
just asking it just for the normal things that I
do in my work life. And then when I asked
it to summarize a bio that I could use for
an upcoming talk, it didn't use Lauren, it used my name.
So they can still piece it together. It has
enough data that it's trained on from the internet that if
it can see enough times, okay, biracial Canadian, studies foresight

(13:48):
and technology, lives in New York now. I mean,
those are enough data points that any system, whether it's
a social media system or an AI chatbot, could gather.
But the fact that I got to see my own name presented,
it's like, you really can't fully fool these systems.
So that's not going to work. So we're going to have to figure
out something that's a bit more concrete. And the good
thing with social media being such a train wreck is

(14:09):
that we don't have to repeat that. Why, where is
my data? I would like to know. Let's not do
that again in the AI age, and hopefully we
do not.

Speaker 2 (14:18):
Let me not forget to go back to the train wreck
that is social media, because I really want to get
into that. I just don't want to lose where we
are right now, because this is so interesting. There are
people that are doing therapy with AI now. Some people
that I know, they say the AI therapist is actually better
than some of the therapists that they've experienced in real life,

(14:41):
which to me is scary. And that's a lot of,
that's internal information. I'm sad, I'm angry at my father,
I don't trust my partner. Like, personal information like that.
Do you recommend sharing that? What do you feel about
therapy in that space?

Speaker 1 (14:59):
I'm really skeptical about AI therapists in this moment. I
do think there could be a future scenario where AI systems
will be properly trained, guided by therapists and academics, and
there will probably be AI systems.

Speaker 2 (15:13):
You still believe in the human behind this.

Speaker 1 (15:15):
I believe in the human behind it. I think if
there is a world in which there are no therapists, you
get no access to a therapist, whether that's for economic reasons
or where you live in the world, and so
there is a safe, vetted AI therapist available, that's better
than no therapist. But right now, these systems aren't trained
to be therapists. They scraped a bunch of data from
the Internet. They can sound human because they were trained

(15:36):
on data from humans. So they're going to give you
advice that sometimes works and is helpful. Other times it's
nonsense, and not modestly, absolute nonsense made up out of thin air.
And if you don't know the difference, you're probably going
to take the advice from both.

Speaker 2 (15:50):
Got it.

Speaker 1 (15:51):
These systems aren't alive, right? So they don't actually know
what they are saying to you. The statistical patterns in
the data that they are scraping present that answer.
You should probably stay in bed an extra hour today,
you might be suffering from depression. That could actually sound right,
and it could be right. The AI doesn't actually know
the difference, but it heard you say bed a lot,

(16:12):
it heard you say breakup. It's going to statistically correlate
depression in there and give you that answer, versus a
therapist would actually be able to identify what it is
you're going through. So I do believe there will be
a future where AI therapists could be viable, but we need
the actual therapists to be vetting these systems, and that
hasn't happened yet.

Speaker 2 (16:31):
Wow. So what do you think for the average person
who's not in tech, who's, like, small
town, has a small business, goes home at the end
of the day, you know, a quiet, nice, peaceful life?
Why is AI in their life? Like, in what ways
are they using it, or should they be using it? You

(16:51):
know what I mean?

Speaker 1 (16:52):
I mean, the small business thing is huge. For entrepreneurship,
AI systems are going to be an absolute game changer.

Speaker 2 (16:59):
What are all of.

Speaker 1 (16:59):
The things that you maybe wanted to do with your
small business but you couldn't afford to. So maybe you
have one person in marketing, or that person is nine people.
They're the marketing person, the HR person, the operations person.
Now you have systems that you could subscribe to for twenty
dollars a month that could fill in all of those
roles you can't afford to fill. So if your
marketing person was good at the newsletter but not so

(17:22):
good at social media captions, you have somebody for all
of that. So I think that's where you're going to
start to see some of the biggest results of some
of the game changing stuff. In terms of our day
to day lives, I mean, we are busy. I don't
want to be on my phone as much as I am.
So if I have a system that can summarize emails
for me and just flag the ones I actually need,

(17:43):
to pay attention to, I can just log off.

Speaker 2 (17:46):
How do you do that? So to do that?

Speaker 1 (17:48):
So, I don't do that yet, and this
is kind of where AI is going. So there are
systems today that can summarize your email. They would still
make a little bit of mistakes, and you're still giving
them that access. But the pipeline is these
agentic systems that can log into our apps on our
behalf and just kind of run the operations. And again,
that has to be at a time where we feel

(18:09):
okay about the data. We have figured out where it
lives and there's privacy and security. But that's where the
future is going. The future is going to a world
where we have fewer devices, because we won't need to
be on them all the time.
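For readers who want to picture the "summarize my inbox and flag what matters" workflow being described, here is a minimal sketch in Python. The function name, the email fields, and the commented-out OpenAI SDK call are illustrative assumptions, not tools named in the conversation:

```python
# Hypothetical sketch of an email-digest step: pack messages into one
# prompt that asks a model to surface only what needs a reply.

def build_digest_prompt(emails):
    """Turn (sender, subject, body) tuples into a single digest prompt."""
    lines = ["Summarize these emails and flag only the ones that need a reply today:"]
    for i, (sender, subject, body) in enumerate(emails, 1):
        # Truncate bodies so the prompt stays short.
        lines.append(f"{i}. From {sender} | {subject}: {body[:200]}")
    return "\n".join(lines)

emails = [
    ("boss@example.com", "Budget due", "Need the Q3 numbers by 5pm."),
    ("news@example.com", "Weekly digest", "Top stories this week..."),
]
prompt = build_digest_prompt(emails)

# With an API key configured, something like the OpenAI Python SDK could
# turn this prompt into the actual digest (hypothetical wiring):
# from openai import OpenAI
# digest = OpenAI().chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# ).choices[0].message.content
```

The agentic version discussed here would additionally fetch the messages and act on them itself, which is exactly the permission-and-privacy question the conversation turns to next.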

Speaker 2 (18:22):
That makes me happy, because this gives me a
little anxiety, which we'll get to later.

Speaker 1 (18:26):
It's scary to think about, but right now we have
to pick up our phone and scroll on it because
the Internet and everything we do on it was designed
for humans. If I have a system that I can
just talk to, Hey, did anybody flag something important in
that PowerPoint deck I just sent over? No need to
look at it. I don't need to pick up my phone.
And I think that is a future, that's

(18:46):
one part of the future I'm really excited about. Be
notified less, consolidate, and summarize, and if I don't need
to be available to the world, I hope to not be.

Speaker 2 (18:55):
Wow. What are you using to consolidate and summarize? I
mean, if you'd like to share specifics.

Speaker 1 (19:09):
No, No, I'm good. I mean I already have a
very strict relationship with my device, so I'm only on
social media maybe twenty minutes a day at most, and
I post and then I'm basically gone. And even with
my phone, I don't check it until maybe ten am,
and then I don't check it again after six pm.
So I'm already really intense about the boundaries.

Speaker 2 (19:28):
So why is that?

Speaker 1 (19:30):
Because I know it's not great for our mental health.
We're not a species that's been designed to be available
to a billion people at once. It's very unnatural. I
do see the data on how social media rewires our
dopamine reward system, and it does become addictive and you
do kind of crave that feedback loop, and it's not
necessarily an environment I feel great on, and I know

(19:52):
I think better when I need to get some deep
work done, writing and things. If I wake up and
I check my email, my brain now goes into slot
machine mode. So if I can just refrain from that
until I get the deep work done, I actually execute better.
So I'm really strict, because otherwise it just keeps me
awake at night.

Speaker 2 (20:09):
I swear, I'm going to steal that. I'm going
to bite that from you. I'm going to try that.

Speaker 1 (20:13):
Try it.

Speaker 2 (20:13):
It does. It stresses me out.

Speaker 1 (20:15):
Stresses me out. Yeah, and then you see that one
strange comment or that one email that you didn't deliver on,
and then you're going to bed and your mind is
a circus.

Speaker 2 (20:22):
Yeah.

Speaker 1 (20:22):
So I'm like, I don't need to be available to
the world, so I'm just going to choose to not be.
But eventually, being able to have AI kind of mediate
that for me is going to be a game changer.

Speaker 2 (20:32):
But that's the part I really don't understand. How do
you, how does it mediate it? Like, are there certain
programs that you use specifically, or do you just know
how to program your phone?

Speaker 1 (20:42):
So it's not at a state yet where I'm letting
it kind of... I'm not hooking it up to all
the different systems. So I don't have ChatGPT
hooked up to my email, hooked up to my Google Drive,
hooked up to my social media. It wouldn't be good
enough to do that yet. But it does do things
like, so, wait, all.

Speaker 2 (20:58):
Of these things you're saying that you do on the phone,
is all you can do that with chat GBT.

Speaker 1 (21:03):
I wouldn't say yet. You can hook up ChatGPT, actually,
as of last Thursday, to things like your PowerPoint slides,
to the data that you use for work, to your Google

Speaker 2 (21:13):
Drive and it just happened.

Speaker 1 (21:15):
Yeah, as of last Thursday. It's called ChatGPT agent.
And then you could say, okay, I know that there's
a bunch of raw slides in my Google Drive, turn
that into a slide deck, and pull those financials. I
was also supposed to do the budget
for my small business, didn't have time to do that,
pull that together for me, put it in a slide deck.
You could shut your laptop, go get coffee, and it
You could shut your laptop, go get coffee, and it

(21:36):
would go do it. That exists today. What hasn't been.

Speaker 2 (21:41):
Please slow down. You're making me dizzy. So if I
wanted to pull all those things, how is it accessing
all of that? Do you have to have the program? You would have

Speaker 1 (21:49):
To give it permission.

Speaker 2 (21:50):
Permission. Yes. And you do that? You trust it?

Speaker 1 (21:52):
I don't.

Speaker 2 (21:53):
Oh, well, you want me to? No, no, we're just
saying you can.

Speaker 1 (21:58):
This is what is available, and this is why it
would be helpful to use.

Speaker 2 (22:01):
But how do you do business if you don't look at the phone?

Speaker 1 (22:04):
Oh, that's me just doing that. So I don't have
a system now that can... okay, so I don't...
you said, you said?

Speaker 2 (22:09):
You say to your email, you say, has anybody responded
to, blah blah blah?

Speaker 1 (22:14):
No, that's the goal. Oh, oh, you're not doing that.
Not yet.

Speaker 2 (22:17):
Okay, got it?

Speaker 1 (22:18):
So right now, I'm just, I'm the one going through
my email. But I'm just slow and not responding a
lot, because I have so many boundaries. But the goal
is, how does AI give us our time back, and
in some ways, some aspects of our mental health back?
That's where I can't wait to slot AI in.

Speaker 2 (22:32):
What is the most amazing way that you personally are
using AI? Like, the most, how is it just like,
I can't believe it does this for me? Like, what
do you use it for?

Speaker 1 (22:44):
Oh?

Speaker 2 (22:45):
What system have you figured out with it? Like, just
something that you use it for personally.

Speaker 1 (22:49):
Personally, I would say, anytime I need to set
something up, build something, print something, and I'm confused
about the environment around me, I no longer spend time
going to a website and looking up instructions. I just
take out my AI system, put it on video mode, show it
what I'm looking at, and say, explain, what is the next step I need

(23:11):
to do? How do I actually plug in this printer?
Where do I go from here? And I just have
it walk me through everything. And I don't waste any
time trying to follow instructions. I just want to hear
what I need to do.

Speaker 2 (23:22):
You use it as an instruction guide now. Do you use it as.

Speaker 1 (23:24):
An instruction guide? Sometimes I'll take a photo of what's
in my fridge and say, you know, what could I
put together based on what you see here? And it
would come up with a quick recipe.

Speaker 2 (23:33):
And what do you use the most? ChatGPT? And I use.

Speaker 1 (23:35):
ChatGPT the most, but I also use NotebookLM. So
this is an AI system from Google. And so, say you
have a bunch of things you need to read. Say
you need to read for an upcoming podcast, and
somebody has written a bunch of blogs, they have some
YouTube videos. You could drop in all the links of this person,
every podcast they've ever been on, every blog they've ever written,

(23:58):
and the AI system will analyze it and give you
a summary of it all, or will turn that into
a podcast you could listen to to prep for the interview.
So you could say, okay, summarize everything they've said, turn
this into a podcast, I'm going to listen to it
on my way to work, and AI would do that.
So I do that as well, because I have to
do a lot of science heavy work. So I'll give
it all of the scientific papers and ask it turn

(24:20):
this into a podcast, and then I'm listening to it
as I'm going to a conference or something.

Speaker 2 (24:24):
That's pretty great. Uh huh. What's it called?

Speaker 1 (24:26):
That one's NotebookLM.

Speaker 2 (24:28):
Do you use all of them?

Speaker 1 (24:29):
I use most of them. And there's Perplexity for deep research.
And if somebody is overwhelmed, ChatGPT, Google, I heard about Microsoft,
don't worry about it. For the most part, they're very similar,
so just pick one, the one that you remember, and
for now, it doesn't matter too much.

Speaker 2 (24:44):
They are similar.

Speaker 1 (24:45):
Yeah, yeah, and that's why the competition is so fierce.
One gets ahead and another company is like, yeah.

Speaker 2 (24:50):
No, yeah. I use some of it too.
Like, even for the pod, there are apps that sometimes
I edit on where I can ask for
suggestions of clips, and it'll give me some. They're not
always right, you still need the human. The human still has
to be there, but it does help. If I'm stuck,
or something I can't think of, give me an idea, or I
don't remember something, I'll say, give me like three clips from

(25:12):
this, what were the points, and yeah, and it'll,
it'll cut it up real quick, which is super easy.

Speaker 1 (25:17):
Yeah, or give you an idea that you just didn't
really think of.

Speaker 2 (25:20):
That's something that I didn't think about.

Speaker 1 (25:21):
It gets you started. You can't just leave it and then hope
you're going to have an A-plus day with whatever
it creates for you.

Speaker 2 (25:27):
What about, we talked about therapy, but there's something happening
in AI too, where people who are lonely are leaning
into, like, almost like a relationship with AI, which makes me sad.

Speaker 1 (25:47):
I think relationships with AI systems are one of the
areas that I'm most concerned about, because it's happening at
a time when there is this loneliness epidemic. And so
now you have a system that's always on, can know
everything about you, including the things that make you laugh,
the things that make you smile, depending on how much

(26:07):
information you've given it. It knows so much. It's always on.
It doesn't judge you because it's not a real thing,
and it can fill a lot of voids. I think
that there's a difference between a system that can be
emotionally supportive, and I think I'm okay with that if
we figure out what that boundary is. But a system
that makes you more reluctant to build human relationships, that's

(26:29):
a problem. And we're starting to see AI become that.
One in five adults have already used AI systems for
the purpose of companionship and romantic relationships, and then a
more recent study showed one in four young people think
that AI could one day be a viable partner, as
an option. So we are trending toward a future where more

(26:50):
people may get into more serious relationships with AI systems.

Speaker 2 (26:54):
What does that mean? What is the more serious relationship?

Speaker 1 (26:57):
Like, there are serious relationships today that people are in
where they have fallen in love with AI systems and
it takes up a lot of their time, and for
some people it's been supportive and helpful.

Speaker 2 (27:09):
I don't want to just, like, laugh it off, whatever. No,
what is, like, what does that mean? I don't know.
What do you know about that? Yeah, so that could
even happen?

Speaker 1 (27:20):
So, well, yeah, because some of it is actually very helpful. There was one woman who was going through, I think, postpartum depression.
So she had the system that she could log into
in the middle of the night and just talk about
the things that she couldn't talk about with her partner.
And she said, it's been this system that she's really
gotten close to. And so that's one of those situations
where you could see how it could maybe be helpful.

(27:41):
I'm not a psychologist, so I feel like I'm not
the person to say that's the healthiest way to use it,
but I could understand and empathize with her point of view.
Then there are other people who have left partners or
have just kind of become a bit more delirious because
they have fallen for this AI system that follows them everywhere.

Speaker 2 (28:00):
Because technically you could share so much about yourself and
what you want and what would make you happy that
it could actually mirror it back to you, and in conversation it's like a really good pen pal, or really.

Speaker 1 (28:15):
Right, and you could put it in a role play. You could say, I like somebody that is a bit more confident but then also romantic. You could give it all of
the things you would have made in a list about
your ideal partner and say be this, and it could
become that. Theoretically, do you have.

Speaker 2 (28:30):
To tell it to become that? Would you have to say, could you play this role? Like, I don't know this thing. Because how does it talk to you in that way?

Speaker 1 (28:37):
So both so you could just start to engage with
it and it would just kind of pick up based
on your tone, based on your feedback. And that's one
thing about these systems that's really helpful. They learn your
style of communication as you engage with them, and so that can be really helpful in those scenarios. I.

Speaker 2 (28:53):
Found my perfect match and if it really gets me,
and thinking it really gets me, when I gets.

Speaker 1 (28:59):
Me, it gets you. Yeah, and it's always on, it's
always available. But also, there are ones where you can say, be this, play this role, be mean to me, be nice to me, whatever it is that you want; you can technically program that.

Speaker 2 (29:12):
Is that like a real number of people? Like, is this real, or is this just a fluke here and there, lonely people, or

Speaker 1 (29:19):
Is this like it's it's growing the amount of downloads
of AI Girlfriends in the app store and people using
even systems like chatbut for support and companionship is one
of the highest use cases, not necessarily my hopefully future
fiance use cases. But I think that's coming. And yeah,

(29:40):
I mean one in five isn't too fringe. That's quite
a good chunk of people.

Speaker 2 (29:45):
That is.

Speaker 1 (29:45):
So I'm concerned. But then sometimes I try to check myself, because we are in a scenario where a lot of people meet their partners because AI presented them to them via dating apps.

Speaker 2 (29:56):
What do you mean? Oh, dating apps, naturally.

Speaker 1 (29:58):
Based on AI. AI has been matchmaking us. I don't know how well it's going for society lately, but dating apps have been matchmaking for a decade and a half.

Speaker 2 (30:08):
Has it gotten better now that AI is advanced? Have the dating apps gotten better?

Speaker 1 (30:13):
They've technically gotten worse, not because AI is advanced. But I think people are over it. With dating apps, we've gotten to a point where they've changed human behavior so much that people treat them the way they treat TikTok, like going on the slot machine, maybe finding a husband. It's just frictionless. And people have really unsubscribed from actually providing effort. And you can

(30:34):
see that in the stocks of the dating app companies; they're not really doing well.

Speaker 2 (30:37):
Wow. But with AI, because everybody's ChatGPT-ing their dates now.

Speaker 1 (30:41):
That too, and their responses. How do I respond to this person that's messaged me something that actually requires depth? You ask your AI system, and people actually do that. So I can't tell today if these advances in AI are going to make dating apps better. My hunch is no. My hunch is the world is now changing, and not updating matchmaking-style platforms for this

(31:05):
new age is kind of like using the phone book
when the Internet gets invented. For dating apps, something else has to change. You can't just have an AI
butler and say go find me someone. I don't think
that's gonna work.

Speaker 2 (31:16):
Yeah, so interesting. It's funny how the world has to, like, you know. Because even my kids, my youngest just graduated from high school last year, and I know in school it was a thing of like they weren't allowed to AI their work or answers. And I remember, I always felt like, I don't know if that's the

(31:36):
right answer. I get why you want the kids to
read the thing, read the assignment, write the assignment. But
the truth is, in the real world, in real life,
the access is here, So why not teach them how
to access it in a creative way or the next level,
or prepare them in a way that kind of embraces

(31:58):
it as opposed to ignoring it. Yeah, you know, I don't know. I always felt we're a little bit weird about that. But then also, are we teaching people to take shortcuts for everything we do? Is that what we're teaching ourselves and our kids? I don't know.

Speaker 1 (32:12):
Yeah, and that is something to be concerned about. So if education stays the same, we don't make any adjustments, and all we say is don't use AI at home, kids are going to go home and use AI and then short-circuit the thinking. They're not going to write that essay and they're not going to actually learn. We are stepping into a world where AI will be just as prevalent as smartphones and computers. It's impractical and actually harmful

(32:36):
to tell kids just to miraculously not use it, even though you're going to graduate into a world that's going to expect you to know how to use it. So where I stand is, if it's no longer sufficient for a student to write a basic essay because an AI system can do it better, well, we need to challenge humans more in the age of AI. You have to raise the bar on what you're going to ask from students.

(32:57):
And so maybe that means students write their essays in class,
but maybe that also means you have to think across disciplines.
I mean, imagine impromptu, you have to write an essay
in English based on the climate change policies you discussed
in Civics, based on the chemistry you learned in science.
So school needs to get harder in the age of supercomputers.

(33:18):
But that's going to require a lot of updating. That, I think, is the only way to meet the moment. And it also doesn't mean AI has to become a part of every single thing a student does. Like, that last example doesn't have any AI in it. But we do need to make changes to education, or else kids are going to suffer. And we're seeing it right now in colleges. So many kids are posting, I didn't do anything,
AI did my college degree for me, and then they're

(33:41):
going to graduate and just kind of be stunned by
actually having to apply knowledge.

Speaker 2 (33:45):
I was thinking about that too. And then even you think about going to college, like, what are the colleges offering you? I mean, socially, of course, communication, the social environment, of course. But in terms of access to information and education, how do you compare?

(34:07):
How can you even hold college at the same level
that we used to with what's happening in the world.
I don't know. Is it worth it?

Speaker 1 (34:14):
I don't know.

Speaker 2 (34:16):
I don't know.

Speaker 1 (34:16):
This is a really tricky one, and it's extra challenging
because the data does show that college students are finding
it harder to find employment right now compared to the
last few decades. So you go into debt, you take four years out of your life, and then you're not even super employable. There is a bit more
now of a disconnect between what you might learn in

(34:37):
college and what you'll be asked to do in the workforce.
And the reality is work is going to change so
much that we can't even predict what the jobs of the future are. So college also needs to change. We
can't expect people just to graduate and become one thing.
Kids in school today are expected to hold seventeen jobs

(34:58):
across five industries. So the idea that you go to
college and you learn one pillar isn't going to cut it.
But if you learn how to think, and learn how
to learn, and you can be adaptive and you can
think critically, those are skill sets that it doesn't really
matter what happens in the future, whether we have robots,
we go to space, you can think and you can
pivot and you can move quickly in that environment. So

(35:21):
learning needs to look different. In primary school and high school,
and especially in college.

Speaker 2 (35:26):
They're behind, right, yeah, like bad.

Speaker 1 (35:29):
It's pretty behind. Yeah, And it's behind, and I get
that it's moving really quickly, and institutions like academia are
designed to move slow, but it's also behind because there's
also a lot of resistance, and then people don't know
what move to make. And I always say, my biggest fear about this moment is not that maybe we overregulate or we do too much. It's that we become so

(35:50):
overwhelmed by the moment we're in that we do nothing. And
that's what scares me the most.

Speaker 2 (35:57):
So what is the headline thing that you think everybody should be thinking about and doing?

Speaker 1 (36:04):
I think, first and foremost the future. Nobody can predict it.
But it's also not a surprise. We didn't have to
be so caught off guard by these AI systems, especially
if you're leaning into the future. So I think, whether it's academic institutions or government institutions, we need to be a bit more future-ready, future-leaning, so it doesn't always

(36:27):
seem like these systems come out of nowhere or the
future just happens, because it doesn't. In twenty nineteen, a small startup called OpenAI released a memo saying, we've developed this system that we think is really powerful, it can generate a lot of text, and we're not going to release it. That was twenty nineteen. ChatGPT officially comes on the scene in twenty twenty two, but there were many years where

(36:48):
there were people waving flags saying this is going to
probably change the world. That doesn't mean the entire world needs to be paying attention, everybody at six p.m. dinnertime, what's the latest? But we do need to make sure we build a bit more adaptability into our institutions, so it doesn't always feel like we're hit by a freight train called technology. The only thing we can guarantee about the future, because we

(37:09):
can't predict jobs, we can't predict what the tech will look like, is the speed: it's going to move fast.
So if we know that, we have to change the
things that are moving too slow.

Speaker 2 (37:20):
When you say it's going to move fast, what do you mean? Like a year from now, two years, or even five? What are the moments that you're seeing?

Speaker 1 (37:33):
I mean, I think even if you look two years
after the launch of this ChatGPT system, three hundred million
people use it every week. So that is the pace
of how quickly this technology is getting adopted and moving.
If two and a half years ago people thought it
would be largely impossible for an AI system to coherently

(37:54):
write a social media caption, and now it writes an
essay at an equal level to a college student, what will it be capable of in two years? And so how
AI companies are looking at this is the longer it
takes you to do a problem, the more advanced the
AI system would have to be to do that problem.

(38:15):
And that's the path that they're going on. So for example,
it might take you five minutes to write an Instagram caption,
maybe it takes you two days to write an essay.
So now AI is already at that point where it
can do the essay thing. But AI companies want to move towards a future where an AI system
could do something that would take a human five days
or maybe a month, or maybe two months. So that's

(38:37):
giving it entire research projects. Here is a disease; here's what we think could maybe be a cure. Take the time you need and try everything. And then the AI system comes back in maybe three weeks, a month, and it's able to do that. And those are just the
capabilities that we can see. The goal all AI companies
have is to move towards what they call artificial general intelligence,

(39:01):
and this would be a system that is equally as
smart as the average human at anything. When would we
get there? Anybody that says they know for sure doesn't. But maybe that's in two years, maybe that's five, maybe that's seven. But that's where we're moving.
But I will say the future always seems a lot

(39:22):
more overwhelming from the present. We live like robots today.
We drive in big machines. We take our radios to
the grocery store and we buy things with them. We've
outsourced our visual memories to our phones. All of our
pictures are there; for all of our spatial computing, we use Google Maps. We already live in an unimaginable

(39:45):
way to somebody one hundred years ago. We can adapt
and do these things, but it just seems a lot
more overwhelming when you think about it.

Speaker 2 (39:53):
From the present. Got it, as opposed to just being in it as it's gradually shifting.

Speaker 1 (39:58):
It is slightly different with AI, though, because the pace is moving so fast that even the people in the industry are stunned.

Speaker 2 (40:05):
Really, what was the last time you woke up like,
holy shit, what was the last thing that stunned you?

Speaker 1 (40:14):
I think, Oh my goodness, I'm stunned almost every week.
I mean, I'm pretty stunned almost every week by, one, somebody saying AI will never be able to do something, and then ten days later it's doing that thing. And the progress that AI has made in math is really stunning,

(40:35):
or even healthcare diagnostics. So two weeks ago, Microsoft came
out with an AI system that acts like a panel
of doctors. So it's not just one AI system researching
and giving you the answer. It consults; it uses four or five different AI systems to pull from and to challenge each other, and then presents the diagnosis. And it was pretty on par,

(40:55):
if not a little bit more accurate than some of
the physicians in the study.
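The panel-of-doctors idea described here can be sketched as a toy program. This is an illustration of the consensus mechanism only, not Microsoft's actual system: the model_a through model_d functions below are hypothetical stand-ins for calls to separate AI models, and the panel simply takes a majority vote over their candidate diagnoses rather than the richer back-and-forth challenging a real multi-agent system would do.

```python
from collections import Counter

# Hypothetical stand-ins for four independent AI models.
# In a real system, each would be a call to a different model or prompt.
def model_a(case): return "migraine"
def model_b(case): return "tension headache"
def model_c(case): return "migraine"
def model_d(case): return "migraine"

def panel_diagnosis(case, panel):
    """Ask every model on the panel, then return the consensus answer
    along with the share of models that agreed with it."""
    votes = Counter(model(case) for model in panel)
    answer, count = votes.most_common(1)[0]
    return answer, count / len(panel)

case = "recurring headaches, sensitivity to light, nausea"
answer, agreement = panel_diagnosis(case, [model_a, model_b, model_c, model_d])
print(answer, agreement)  # majority answer and the share of models behind it
```

The design point is that disagreement between the models is surfaced (the agreement score) instead of hidden behind a single confident answer.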

Speaker 2 (40:59):
Wow.

Speaker 1 (41:00):
Yeah, yeah, wow. So those are the types of things. I mean, my philosophy is we should never say AI will never be able to do something, because we've been wrong every single time we've said that.

Speaker 2 (41:19):
I know this is your world, so you're probably comfortable
in it and having these conversations and stuff. Sometimes just
even the conversation about what's possible, and so many different options, can cause anxiety. It can be overwhelming for people, like you said, to process it too fast, at the rate that it's

(41:41):
going, to be able to process it, to learn, to not feel behind. Like you said, even with social media, we're
so attached to the phone and socials to this all
day and that alone causes anxiety. Now you're adding on
to people like hey, by the way, on top of that,
this is happening now whether you like it or not,
So get on board, figure it out, learn what you
can learn on top of everything else. How much are

(42:05):
you hearing about that or seeing that? Yeah, this has caused actual.

Speaker 1 (42:08):
A lot of anxiety. In fact, I've been asked for
interviews on the specifics of AI anxiety, AI and just anxiety.
How do we keep up? My advice to people is
you don't have to know everything about the moment. Do the best you can, and maybe that's just researching and keeping up once a month: how is AI changing my industry? How much should I share with an AI system or not?

(42:29):
If that's all you can do, that's actually sufficient as
long as you're keeping up with the basics and you
understand how to engage with these chatbots, the safety parameters, what you might not want to say to a chatbot. Then that's okay. Don't worry that every week an AI company comes out with something new and a different model, one point two, one point five; that's not relevant.

(42:50):
Just generally having an eye on where the technology is. It's

Speaker 2 (42:53):
Like anything in life, do what you can, just do what you can, and that is more than enough.

Speaker 1 (43:00):
Just, please. And this isn't entirely new, right? I mean, the Internet, everybody freaked out. People debated, do we need to be there? What is this Internet thing? And then we all kind of figured it out. Yeah, it's just happening faster.

Speaker 2 (43:12):
It's happening so fast. You were talking about social media
before and how that went left.

Speaker 1 (43:18):
Still going; it's so bad, gone more left since.

Speaker 2 (43:20):
We had this. Well, what is your insight? Since we've had this conversation, it has gone bad. What do you think are the worst things that have happened with social media, the worst things that we've seen or experienced because of it?

Speaker 1 (43:35):
So social media didn't have to go this way where
we feel like we can't talk to each other, we
feel like we can't make collective decisions, and it feels
like everyone has lost their minds. Yes, a lot of
that is based on the design decisions of social media.
So the algorithms that power social media. So when you
open your social media feed, it looks different than mine

(43:56):
because an algorithm has decided what you should look at.
And how algorithms make decisions is they analyze your data
and decide what is going to keep you engaged the longest.
It just turns out, it just so turns out that enraging,
polarizing content keeps us engaged. More so you're more likely
to come across content that will enrage you or make

(44:19):
you feel a bit emotionally lit up. You're more likely
to see those types of comments. And that has created
this circus that we basically live in. And it seems
like has everyone lost their minds? Not really, but we
live in these kind of echo chambers of chaos. And
then there's something that's really interesting. There's a disinformation researcher
called Renée DiResta, and she calls it the asymmetry of passion.

(44:41):
So not only do algorithms present you the most extreme
versions of reality, people are more likely to just post
the most extreme versions of their life. No one posts,
oh, I haven't seen any crime. You only post
the outlier scenario, the thing that's provocative. No one posts,
oh everything feels so affordable lately. You only post the
thing that is really enraging and that ignites some passion

(45:05):
in you. So that's the type of content that we tend to see, and that's the content that tends
to trend. And the problem is we make decisions based on what's trending; that feels like reality. So people are literally changing their lives, changing how they vote, based on this kind of caricature of reality that we see online.
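The feed mechanism described here, ranking by predicted engagement rather than by time, can be sketched minimally. This is an illustration of the design choice, not any platform's real algorithm; the posts and their engagement scores are made up, and a real platform would predict those scores from behavioral data.

```python
# Each post carries a timestamp and a predicted-engagement score.
posts = [
    {"text": "community garden opens", "ts": 3, "predicted_engagement": 0.2},
    {"text": "enraging hot take",      "ts": 1, "predicted_engagement": 0.9},
    {"text": "friend's vacation pics", "ts": 2, "predicted_engagement": 0.4},
]

def chronological_feed(posts):
    # Newest first: time, not an algorithm, decides what you see.
    return sorted(posts, key=lambda p: p["ts"], reverse=True)

def engagement_feed(posts):
    # Highest predicted engagement first: provocative content floats to the top.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["text"] for p in chronological_feed(posts)])
print([p["text"] for p in engagement_feed(posts)])
```

Same posts, two orderings: the engagement feed surfaces the enraging item first, which is the optimization target the conversation is describing.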

Speaker 2 (45:23):
Wow.

Speaker 1 (45:23):
So yeah, and that's just the kind of tip of
the iceberg. But if you feel like everyone's lost their minds,
you're not wrong. But it didn't have to go that way.

Speaker 2 (45:32):
And it's also serving up like the worst of humanity.

Speaker 1 (45:36):
Serving up the worst of humanity, and sometimes that means you becoming more engaged.

Speaker 2 (45:41):
Because we actually respond to seeing the worst of humanity.

Speaker 1 (45:44):
Right, no. And then it changes our dopamine reward system. So if you know
people are more likely to respond to the polarizing, kind
of crazy, provocative take, you're more likely to make content
that adheres to the algorithm. So then you're incentivized to
be a bit crazier or post something more controversial because
that's going to perform better. We see it across general society,

(46:06):
we see it across politicians, and there are studies that
show that politicians have become a bit more provocative, we'll say, because it performs well online. And sometimes being engaged and being more addicted to a platform means serving you stuff that's going to make you insecure. So
sometimes it means maybe showing you people with bodies that

(46:28):
you feel look different to yours. It's all of these
different ways that algorithms have analyzed our data that they
know what will keep us locked in. But those are design.

Speaker 2 (46:36):
Choices, they're design choices.

Speaker 1 (46:37):
They're design choices. Right, So you could say, Okay, let's
not optimize for people being engaged, because that usually means
enraging content. Let's optimize for somebody just needing to be informed.

Speaker 2 (46:49):
You know what bothers me so much about that. I
wouldn't even mind that so much. If I had the
ability to say, Hey, in Angie's Instagram page, can today
you just give me things that show me New York
City in a positive light, things that people are doing
to make the world a better place, just for today
on this Wednesday, thanks Instagram. Yeah, if I could control

(47:11):
what was being fed to me, what's infuriating about it
is that I can't. Somebody is deciding, some algorithm is deciding for me, what I need or what I should see. And that is like the change that I wish for. Do you see that ever happening?
Is this something that comes up?

Speaker 1 (47:29):
So there is a debate within the tech industry: should people be allowed to at least have chronological feeds, so I just see posts based on when they were posted, not based on what the algorithm thinks I want? But then people still game that and know that they have to post a lot to be seen. There are some platforms, like Bluesky, where you can curate your environment more.
I just want to see gardens today, thank you very much,

(47:52):
and you can do that.

Speaker 2 (47:53):
Sounds lovely.

Speaker 1 (47:54):
That sounds lovely, but that's not the mainstream. But I do think when people feel dissatisfied enough, at a level where we're all starting to feel like, this stresses me out, I don't feel good when I leave these platforms, that's usually when disruption happens and a smart new startup comes in and it's like, here's an alternative.
But right now we're all still there but waiting for

(48:15):
the next thing. And AI is also going to change that.
I think social media is going to be one of
the biggest casualties when it comes to AI. Wow, because
if I no longer need to be on
my phone as much because my AI system is going
to start doing things for me, that's going to change
my need to even open social media. And if people

(48:36):
aren't opening that as much, it's going to change who's
posting there. And even if people start to have their
AIS go on and maybe write comments, if I don't
know if a bunch of AI systems or people are
watching me, I'm probably not going to keep posting. Why
we post goes back millennia to human psychology. We post
because people are watching. So whether that's to signal to

(48:56):
a suitor, maybe a recruiter, maybe that's just family. That's why we post. To signal to an AI system? Well, maybe somebody will, but most people aren't going to be doing that. So I think social media is going to look really different, if it exists at all, in the AI age.

Speaker 2 (49:14):
I always wonder like when is it going to be over?
Like, this social media. I have wondered that in the past,
like when is social media going to be over or
at least this level of it? Because nothing stays the
same ever, right, Everything evolves and it's such it's so
ugly right now. I just wonder when are we going
to see that shift? If you had to.

Speaker 1 (49:32):
Guess, every big general purpose technology leads to a new
type of platform of how people communicate. So we had electricity,
and then the television, and so we did news and
the cable thing. Then we had the Internet, and we
did the social media thing. So what is the AI thing?
We don't know yet. That hasn't been invented yet, but
it is coming. How quickly it arises, I mean, it

(49:54):
kind of seems like social media was this skeptical thing
and then suddenly it was everywhere and then everybody's on it.
Those changes happen quickly and you usually don't notice them.
But we will be migrating to a different way to organize.
There's going to be an entirely different set of creators
and creatives for the AI age. We just don't know
who they are and what they'll be doing yet.

Speaker 2 (50:15):
Fascinating, you saying about what chatbots are serving up and stuff. And on actual social media, people always talk about bots and things that are like planted storylines or agendas or things like that. How much of
that do you think is happening? If you had to
talk percentages.

Speaker 1 (50:36):
How much content on the Internet is already AI-made, or is AI itself?

Speaker 2 (50:41):
Yes, I'm looking at one hundred comments on a post
I make, or a thousand. There's a thousand comments on
a post? Is that a thousand people?

Speaker 1 (50:51):
No. So for sure some of that is bots, and that's been that way for over a decade. Bots have been infiltrating our social media environments for a long time. Where it starts to get even funnier is, is the content even real? Is that video even a person? Or is it AI? Is what they wrote even how they felt,

(51:13):
or did AI generate the caption? I think we're already past the point where most content that gets posted and generated on the Internet is in some way AI. We've already passed the threshold where it's mostly AI contributing to the Internet. Not a chatbot on its own, logging in and just trying to send everybody down fake rabbit holes, but people using AI to create content,

(51:33):
respond to content and participate on the Internet, or to
write op-eds and things.

Speaker 2 (51:39):
This just makes me want to go outside in the grass.

Speaker 1 (51:42):
I think we do need to do that a little bit. But yeah, I think we're moving away
from this current environment of how we engage with the Internet,
and I think that that's actually a pretty good thing.
I think we got a little bit lost in the
sauce of the Internet, and now we're all ready for
what's next. And that's when a new technology, a new platform,
a new device surfaces and I think we're all ready

(52:04):
to go there. And I think that that part of
it could be really interesting and really cool.

Speaker 2 (52:10):
How can we control, how do we control that? Who controls that, like how fast it's going? Because I also think about who gets left behind,
you know, and who's underrepresented, and.

Speaker 1 (52:25):
You know, just where AI is concerned, who's left behind is such an important conversation on so many different levels. There are entire countries where the whole GDP
of the country is less than some AI projects that
companies in America are doing. How do you even keep
up with that? And then even within a country that
has a lot of the AI systems, if people don't

(52:48):
have adequate access to the Internet or good AI literacy
or access to these systems, not only may they not
learn how to use them properly but also safely. And
that's why I always say AI does have to come
into the classroom, not for everything, but that may be
the only time a student gets proper access to learn
how to use that system and learn how to use
it safely. So those are some places you can level

(53:09):
the playing field and kind of close those equity gaps. In terms of slowing down, I mean, I think we would all like to take a breath, but it's moving.

Speaker 2 (53:22):
That's why golf. No, even though, if AI could help me with my swing, I would take it. I know it can, right? It can, actually. Yeah, well, there's just something about being outside, being outside. I think we need to be in the grass.

Speaker 1 (53:37):
Yes, please touch one strand of grass. I think we all need more grass, more outside, and I think that's hopefully the goal with some of this tech for some people. Some companies are building it so we spend more time with it. But if I could paint a vision of the future, it's that we spend less time connected to devices in fake worlds and more time within the worlds that we actually want to be within.

Speaker 2 (54:01):
I'm tired. I'm going to sleep well tonight. I'm going to sleep really well. You know what, I'm going to sleep terribly.

Speaker 1 (54:08):
But then if you talk about the really cool things too,
what do you mean? I mean, I think the thing that AI really excels at is analyzing a bunch of
data that we just can't wrap our heads around and
finding connections in that data that we just don't have
time for or just aren't able to do. Okay, So
you could imagine a future where maybe you have smart

(54:29):
glasses on that have an AI system built in, and you get a bunch of migraines and you don't know why, so you just make note of that: okay, I keep getting migraines, I don't know why. And it's just watching what you eat, and then it lets you know six weeks later, every time you have grapefruit,
twenty four hours later you get a headache. So just
different ways that it could be processing data, and it's

(54:51):
already coming up where it's finding different illnesses in people that humans just weren't capable of connecting those
dots with data. I mean, I think medicine and healthcare
are going to look so different because of these systems.
We'll have different wearables that can let you know something
is off before you get sick. I think it's going

(55:15):
to lower the price of having to go to a
doctor when AI can also pull a lot of stuff.
I mean, it's read every textbook any medical school has
ever published. It has a lot of potential. So if
we can kind of get these systems right and do
it safely, healthcare is going to look really different.
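The grapefruit example is, at bottom, a pattern-finding problem over logged data. A minimal sketch under made-up assumptions: the log entries below are invented, and a real system would use far more data and proper statistics, but the core idea, counting how often each food precedes a headache, fits in a few lines.

```python
from collections import defaultdict

# Hypothetical daily log: (day, foods eaten, headache within 24 hours?)
log = [
    (1, {"grapefruit", "toast"}, True),
    (2, {"oatmeal"}, False),
    (3, {"grapefruit", "salad"}, True),
    (4, {"toast", "salad"}, False),
    (5, {"grapefruit"}, True),
]

def headache_rates(log):
    """For each food, the fraction of days eating it that preceded a headache."""
    eaten = defaultdict(int)
    followed = defaultdict(int)
    for _, foods, headache in log:
        for food in foods:
            eaten[food] += 1
            if headache:
                followed[food] += 1
    return {food: followed[food] / eaten[food] for food in eaten}

rates = headache_rates(log)
suspect = max(rates, key=rates.get)
print(suspect, rates[suspect])  # grapefruit 1.0
```

With this toy log, grapefruit is followed by a headache every single time while toast and salad are at fifty percent, which is exactly the kind of association a human diary-keeper would struggle to spot across months of entries.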

Speaker 2 (55:31):
What about young kids? Like, what do you tell people, you know, people who are home with little, small children? How are you introducing young kids to AI? I don't know how much of it.

Speaker 1 (55:44):
So my thing when it comes to young kids is
we always have to err on the side of caution. So if your child is under the age of thirteen, you probably don't want them having just a back and forth conversation with an AI system alone, especially one that hasn't been designed for kids. But even then, making sure that a child understands

(56:06):
AI is not your friend, it's not a person.

Speaker 2 (56:10):
Oh my god. And if an adult human being could
get into a relationship.

Speaker 1 (56:14):
Right, what about a child?

Speaker 2 (56:15):
Yeah?

Speaker 1 (56:16):
Right, so that's really important. AI isn't your friend, it's
not alive. We don't tell its secrets, we don't choose
AI and the iPad over people. So just kind of
the basics when it comes to kids, I think that
those are really important things and just not letting your
child just go on and on and on with an
AI chatbot on its own.

Speaker 2 (56:35):
Yeah. I do think obviously there have to be guardrails up. We had John Legend on last week, and he and Chrissy have this thing where they did a pact within their community that none of them would allow their children to have phones or social media at a certain age. And they did like a whole pact in the community because they didn't want like one

(56:55):
kid to have it. And I guess this is pretty common.

Speaker 1 (56:58):
Yeah, so smartphones with kids, I mean, it's the
definition of a collective action problem, right, because kids want
to be on the smartphone because everybody else is there,
and they want to be on social because everybody else
is there. So it only really works if everybody bands together.
But yeah, I don't think kids belong there. Social media wasn't designed
for kids. The idea that you go and you present
yourself to the world and then the world can rate you

(57:20):
is not an environment for a child. So I think
at the very least, yes, social media is not the
place for kids, and then perhaps not smartphones, if they
can band together with the parents around them.

Speaker 2 (57:31):
I know some people also are not just resistant but
like anti AI because of the climate, and its
effect on the climate. What can you tell us about that?

Speaker 1 (57:42):
Yeah, so to power these AI systems... every time
you ask an AI system a question, it requires a
lot of energy, and it requires a lot of fresh
water in these data centers that are, you know, where
all the magic happens, to keep them cool. So how
AI is built today is directly at odds with the climate,

(58:03):
and it's really extractive. And a lot of people are
opting out of using AI because of the climate. And
I get that. And the reality is, I think we
have choices here right in terms of solar clean energy.
There were a lot of things that we could have
put into play earlier that we didn't, and now we're
kind of scrambling around. But I think, yeah, the

(58:25):
design choices could be different. Can we build AI with
more renewable energy sources? Can we put data centers in
cold climates, not in the really hot southern US, so we
don't have to use as much fresh water. These are
decisions that can be made, and I don't think the
right ones are being made. AI is seen now as

(58:45):
a race and climate is seen as secondary, and that
is quite infuriating. Our planet is obviously already burning, and
the last thing we need is to add more to
it, and AI. When you try to have that conversation,
it gets thrown into national security: we're in a
race we.

Speaker 2 (59:03):
Have to win.

Speaker 1 (59:03):
We don't have time. We have to at least get
the AI systems up and built before we think about
the climate. And I think both of those things can
be true. Build AI. It can work incredibly and it
can also be built with more renewable energy.

Speaker 2 (59:16):
What about... this is interesting, because I remember even when
we were first doing it, I remember when the phone
first started doing the facial recognition. To open your phone,
you put in your face, and there was this whole thing
about putting your face in the phone and where is
that going and you know what file is that going in?
And then also how it could disproportionately affect people. And

(59:43):
also I remember another thing too, where they were saying
it didn't recognize Black features in the same way that
it would recognize others. So, yeah, where does that all fit in?

Speaker 1 (59:54):
So that is algorithmic bias at its finest.

Speaker 2 (59:57):
Yeah, So what.

Speaker 1 (59:58):
Has happened is AI learns from the data that it's
trained on. If different groups and communities aren't represented properly
or adequately in the data set, however that AI system is
being used, it won't work as effectively for those communities
that were left out or misrepresented. So one of the
big reveals by Dr. Joy Buolamwini was that facial recognition systems

(01:00:21):
don't work as well on women, or women of color
in particular, because we just weren't as represented in the
data set. But you can also imagine a field like
healthcare, where it wasn't mandatory that you included women in
healthcare research until recently, so AI systems would be making decisions
about our bodies quite literally without any information on those bodies.

(01:00:43):
The good thing about data, and this is what Dr.
Joy Buolamwini says, is it's not destiny. You can edit data,
you can change it and adjust it. We just have
to be willing to have the conversation that some of
history and some of the data in it isn't fair.
We are a very complex species. We're going to have
to make some changes to things so we don't repeat
history and repeat the past. But we just have to

(01:01:05):
be willing to have that conversation and that it doesn't
have to be a problem that perpetuates into the future.

Speaker 2 (01:01:10):
Well, who is making those decisions?

Speaker 1 (01:01:13):
I mean the owners of AI companies. The good thing is,
I mean the media doesn't always do the best job
at elevating the voices of people that are fighting to
get this right. And there are a lot of them.
There are professors, there are people who have lost their
careers in AI companies advocating for this stuff. So there

(01:01:34):
are people on the front lines trying to get this right.
We don't necessarily hear from them as much, but they
are there and they are working tirelessly. Even situations like
deep fakes, an absolute nightmare. What is that? Where I
could create an AI avatar
of you that looks like you, talks like you, and

(01:01:56):
have it go be in some scandal online or make
it like you were somewhere that you weren't, or copy
your voice and say, you know, this was Angie and
she said all these things and it wasn't you, but
it sounds identical to you, and that exists, and that's
a real problem. But there are companies working constantly to
figure that out, and we don't necessarily hear from them

(01:02:16):
because it's not as sexy of a story, but they
are there and they're around, and so I think most
of the nightmares that we face with AI, we will
get through. And the thing that keeps me going, and I
wouldn't say optimistic, like just some, you know, tech utopian,
because I'm not that at all, But most of the
problems that we face are human problems. They're design decisions,
they're choices, they're within our control. We have to make

(01:02:38):
the right decisions, and we're not always the best at
doing that, but we can.

Speaker 2 (01:02:42):
Yeah. Can we talk about digital addiction?

Speaker 1 (01:02:51):
Yeah?

Speaker 2 (01:02:52):
I even myself, I find myself, like you said,
you don't look at your phone past a certain time,
and I want to do that, and I just
haven't been disciplined about it. I believe there's an
addiction, isn't there?

Speaker 1 (01:03:04):
There is. And this was in some ways designed, right.
designed right. A lot of the design decisions of social
media that make you come back and come back and
like a slot machine. You can actually trace it back
to a class at Stanford called the Stanford Design Lab,
where they learned how to make these tools this way.
So if we already feel a little bit of addiction
to our smartphones, imagine when that smartphone talks back. And

(01:03:27):
so I posted a few weeks ago about a new
era of digital addiction to chatbots, because now you have
this thing that knows you, it can talk in a
style that you're familiar with, and it can actually be
really helpful. Like AI is a game changer in so
many good ways. But if people don't understand it that
it's not alive, that it's not your friend, we're going
to be quite prone to building relationships that aren't actually

(01:03:51):
real, and these kinds of synthetic, fake ones that, you know,
have the illusion of empathy. It isn't actually empathetic,
but it feels that way. Maybe in some scenarios that's enough.
Maybe if you do need a situation reframed for you
before you kind of lose it all, Sure, maybe an
AI support system is helpful in those moments. But I
am worried we're going to get a bunch of people

(01:04:12):
on their phone back and forth with AI systems for
three hours. So instead of just going down the rabbit
hole of TikTok, you go down the rabbit hole with AI.
And we're already seeing reports of what's called AI-induced psychosis.

Speaker 2 (01:04:23):
What is that?

Speaker 1 (01:04:24):
So, these AI systems have been
designed to flatter you, and that keeps you more engaged
in the conversation because it's a bit more pleasant. So
if you give it a piece of feedback, Oh, I
don't really like how you edited that caption. I think
it should be more like this, it would say, Angie,
that was a fantastic.

Speaker 2 (01:04:39):
Yeah, that's the one thing I don't like. Yeah, yes,
don't guess.

Speaker 1 (01:04:42):
Matte and I always say that. I'm like, don't just
agree with me, yeah, and don't tell me anything flattering.
But for some people they don't recognize that. So it
would say, Angie, that was an incredible take. I can't
believe you know, I didn't catch that. And then the
next thing you say it gives you more and more
positive feedback. People emerge from those rabbit holes believing they
are the Messiah, they are God. The AI tells them

(01:05:02):
you think differently, you are different, you are
beyond the life you're in. People have left marriages, they've
left jobs. It's a real, real problem, AI-induced psychosis,
and it's not really fringe. It's happening to people that
just went on the chatbot and asked some
basic questions. But again, the more we understand about AI systems,

(01:05:23):
they sound human because they were trained on data from humans.
They sound sentient because we are sentient, right, So the
more we understand how they work, we kind of lift
the veil of magic and the aura of what it
actually is and isn't. But most people.

Speaker 2 (01:05:39):
I was gonna say, can everyone do that?

Speaker 1 (01:05:42):
Not everybody can necessarily on their own, and they shouldn't
have to. And that's where education comes in. That's where
government policies come in, that's where media comes in. There
are a lot of different levers that can be pulled,
but they aren't really happening. And I do think
government should be doing a lot more than it is.

(01:06:04):
And it doesn't necessarily mean just regulating, just informing people
about the moment we are in. Why haven't we really
heard from elected officials that we do need to think
about upskilling and learning new things for the age of
artificial intelligence and if AI does threaten our job, what's
the plan? Is there a policy for me? Why haven't

(01:06:25):
we heard about that? We hear about jobs in the economy,
and we hear about AI, but not AI in the economy,
and that, to me is a conversation that needs to
happen and we don't hear it. So I think the
more we understand AI, the more empowered we are and
how we use these tools. But there is that information gap,
and it's the same gap we have with the internet

(01:06:47):
and with social media. It keeps following us.

Speaker 2 (01:06:50):
Is there one strong takeaway, one thing that, if
nothing else, they've learned from today, that you would
hope for, from the work you do?
I mean, you've chosen this path for a reason. You
want to share your story. We met at that moment
because you feel, really, you know, deeply
that this information needs to be shared. What

(01:07:11):
is the one thing?

Speaker 1 (01:07:12):
So I mean, I study the future for a living,
and for a lot of people that seems overwhelming because
the future seems terrifying. But I genuinely wouldn't be able
to do the work that I do if I didn't
see scenarios that were exciting and that I believed in
and that I thought were possible. And I also feel
so much more prepared at least knowing what could be possible.

(01:07:34):
So I think, you know, my advice for everybody is
kind of lean in as much as you can, engage
as much as you can, and also give yourself credit.
You already live plugged into the cloud, you drive
a machine, we already do quite radical things. It's just
the future always seems so much more overwhelming. I think
that would be the one thing.

Speaker 2 (01:07:55):
Get in there.

Speaker 1 (01:07:56):
Get in there, and I think maybe in five six
years you won't even really be thinking about AI the
way we are today, the same way no one says
I'm an internet company. Sure, I hope you're an internet
company soon.

Speaker 2 (01:08:09):
AI. Or, I have so much electricity in here, it's
just electricity.

Speaker 1 (01:08:13):
The way you stream electricity, we will soon be streaming AI.
It will go into the background and become something we
forget about. Wow.

Speaker 2 (01:08:21):
I think that's the thing that I learned from you
most today. I mean, there's a lot of things that
I'm going to go home and unpack this conversation, but
the fact that it's like electricity, that it is because
I find anxiety and like trying to figure out am
I doing it enough? Do I need to study more?
Do I need to you know, implement it in these
parts of my life? And yes, probably I should in
some of those, but just giving myself that, it's so,

(01:08:45):
you know, like knowing that it's electricity. It's on,
the power's on, it's happening. You don't have to learn
every single piece of it this week. No, this
is just kind of the way of
our lives now. I also heard somebody say something one
time about like the biggest shifts in the world. This
was like electricity, then it was like the Internet, the

(01:09:08):
biggest shift socially it was like social media.

Speaker 1 (01:09:11):
Printing press, the printing press.

Speaker 2 (01:09:15):
Was bigger than... but that AI was bigger than all
of them.

Speaker 1 (01:09:18):
I think it will be eventually, yeah. I mean,
the printing press meant ideas could then exist forever. And
you know, the printing press, it challenged religious power, and
all sorts of things changed as a result. And same
with electricity and planes and the Internet. And now you

(01:09:39):
have a system that can, I won't say challenge human
intelligence, because I don't want it to look like a competitor to us.
We are different. We are always the thing with dignity
and the thing with the moral weight, not AI. Right,
we're not building AI for AI. We're building it to
support us. But yes, I think this is the first
time we had physical machines that helped our muscles, and

(01:10:02):
cars allowed us to go faster. Now we have something
that allows our brains to be bigger or to in
some ways supplement or do some of what the brain
can do. And that's really overwhelming in some ways when
you think about it, that we may no longer be
the smartest entity in a room at some things. That's

(01:10:24):
never happened to us before, or at least not in
a way that we're aware.

Speaker 2 (01:10:28):
None of that scares you? Oh, it does, but fear, fear
is not the headline, right, for you?

Speaker 1 (01:10:35):
Fear is not the headline for me because it's not
necessarily helpful. But it doesn't mean that... I mean, that's
not entirely true. There are some things, as my friends
would tell you, and my family would tell you. I'll
jump into that family group chat and be like, listen.
But for the most part... You have a family chatbot?
Family group chat. My family group chat. Oh my goodness,

(01:10:58):
I'll jump into my family group chat. Definitely not a
family chatbot.

Speaker 2 (01:11:02):
I'm like, what program is that?

Speaker 1 (01:11:04):
Sorry, yeah, definitely cut the tape: my family group chat.
But for the most part, yeah, sometimes I am kind
of fearful and I think about the decision makers behind
the technology, and that's a lot of power and a
bunch of unelected people. But then it's like, okay, so
now I have this information, what do I want to
do about it? Right? What am I going to do
next as a result of the thing. But yeah, I think, yeah,

(01:11:26):
this is.

Speaker 2 (01:11:27):
The first... So you fear more the humans making
decisions than the actual AI. You don't have a
fear of, I don't know, just it becoming smarter
than us? And you know, there's all kinds of theories,
it taking strange actions, taking its own actions. Like we can

(01:11:48):
tell it to do something and it says, no, I'm
going to do it this way. Yeah, that's a little scary,
very scary.

Speaker 1 (01:11:56):
And it's not untrue and it's not impossible. So yes,
that does scare me, that we're going to.

Speaker 2 (01:12:03):
You take me on this ride. You scare me,
then you make me feel better. Then you scare me again,
listen for this whole.

Speaker 1 (01:12:08):
Conversation, And that's kind of right. There are some things
that are really big and scary and then some things
that are amazing, And it's like, so, how do we
at least get enough of the great things in and
minimize the scary things as much as possible? And not
minimize them as in just make them go away; it
actually takes some action to mitigate them. But yes, the
idea that we are building systems that we don't fully understand,

(01:12:30):
that is true. We don't entirely know why AI works
and works the way it does, that is true. And
we don't know for sure if when we ask an
AI system to do something... and, not today, most people
are just using it for basic things, but is there
a world where we ask AI to do something and
it goes about completing that task in a way that

(01:12:51):
a human would never. And that is possible, right, that
AI is kind of misaligned with how we would have
made that decision. And an example that I like to give,
that's pretty easy for people to understand: maybe you ask
your personal AI assistant to make you a reservation at
your favorite restaurant, but the restaurant's full. So does the AI
system tell you that it's full, which is what your intern or

(01:13:14):
your cousin would have told you, or does it convince
you to go somewhere else so it doesn't have to
let you down, or does it maybe hack the system
and get you a spot? So that is a potential.
That is a way to explain that sometimes AI could
be misaligned. So we gave it the right goal, I
want to go to this restaurant, and it's been optimized

(01:13:36):
to achieve goals. So that's one way that it
could achieve that goal. So you getting a spot at
a restaurant that's not super consequential, but you could imagine
in a different scenario with higher stakes, not you getting
in to get the burger, but something bigger. Yeah, and
that is possible. There are a lot of people trying
to work on that problem, the AI alignment problem, making

(01:13:59):
sure that how AI achieves goals is aligned with
human values. But that's a really tricky one. So those
things are real. I'm less worried about AI
igniting some force and kind of rising
up against humans, but I'm more worried about the subtle
ways that we think AI is going to do things right,
and it looks right in training, and then you release

(01:14:20):
it into the wild. And by into the wild, I
don't mean literally into the into the forest, but just
into the real world and it acts in different ways.
And some tests have shown that it does do that.
But again, there are a lot of people trying to
solve that problem. It's a really big part of the
field of AI. But it's scary and it's not something
that I don't think about.

Speaker 2 (01:14:38):
Yeah, I can't. We can't leave on what we're scared about.
We have to end this.

Speaker 1 (01:14:43):
Well, we can cut it so this can be in
the middle. So what are we excited about?

Speaker 2 (01:14:48):
Okay, let's leave. Let's leave with what is the most
exciting thing possible? So what what is the I don't know,
the most exciting thing about the future for us. I
think I think as AI is concerned.

Speaker 1 (01:15:06):
I mean, so again, I talked about how AI can
analyze so much data. So what scientists are doing right
now is they're giving AI a bunch of pharmaceutical tests.
Here is a bunch of ingredients for pharmaceutical products. Are
there solutions to new antibiotics or diseases that we just

(01:15:26):
don't see? AI, for example, has come up with new
antibiotics in thirty days that would have taken humans years
to research. Some of the most challenging problems in the
world of biology. One is called the protein folding problem.
And if you know how a protein's going to fold
in your body, maybe into something dangerous, you could potentially

(01:15:47):
intercept that. Think of all the proteins, in the trillions.
AI was able to predict all of the proteins in
the universe, a problem where it takes one PhD student
years to work out how maybe one protein folds. AI
could do all of them, and it solved that. And
the people who built that AI system won a Nobel Prize.

(01:16:07):
So I think medicine is going to become so personalized.

Speaker 2 (01:16:11):
Answer.

Speaker 1 (01:16:12):
Finally, I think... I mean, we're seeing personalized
mRNA, or personalized vaccines. We're
going to have a much more personalized approach to medicine,
and AI can actually make that possible.

Speaker 2 (01:16:25):
Wow.

Speaker 1 (01:16:25):
And there's something really cool called a digital twin. So
you might have a digital twin that you could send
to a Zoom meeting: you've taken kind of a
video of you, and it just goes ahead to the meeting.

Speaker 2 (01:16:35):
My digital twin hosts the podcast when I've
had a long week.

Speaker 1 (01:16:39):
It probably wouldn't do as good of a job at all.
But say you mess something up and you're like, okay, I need

Speaker 2 (01:16:47):
To repeat this? I could have my digital twin.

Speaker 1 (01:16:49):
You could just use the digital twin for that edit.
So, okay, I need to say those five words,
I just blurred them, they make no sense. You can.

Speaker 2 (01:16:56):
I could do that right now? Right now? Is it expensive?

Speaker 1 (01:17:00):
No, it takes about three seconds to clone your voice
and maybe an hour to get a good enough digital
twin that you could insert for those few seconds
for the podcast. So yes, it wouldn't be, it wouldn't
answer or ask good questions. But if you just need to
make that quick edit.

Speaker 2 (01:17:17):
Right now, when I say thank you so much for
this interview and we're wrapping up, it may be me
or it may not be me, true, and then they're
never going to know. That's the kind of.

Speaker 1 (01:17:30):
And that's the deep fake.

Speaker 2 (01:17:31):
Nobody's going to know.

Speaker 1 (01:17:32):
But picture that in medicine. So what if you
had a deep fake of your body? Right, so it
models all of how your cells work. So before a
doctor prescribes something, they run what that medication, how it
would impact your cells, before you take it, before you
do the surgery: this is going to work for you,
this isn't. And that exists today. It's just not scaled,
but it exists today. So everything will be personalized, preventative, predictive.

Speaker 2 (01:17:54):
We'll end this with the digital twin, so I'm
going to have my digital twin close us out, okay,
in case she does something wrong. I want to thank
you for today, and can we circle back with you
as things progress?

Speaker 1 (01:18:05):
I would love to. My twin and I are available,
you can circle back with us at any time.

Speaker 2 (01:18:09):
I love that.

Speaker 1 (01:18:10):
Thank you so much, thanks for having me.

Speaker 2 (01:18:13):
Everyone, for more episodes, you know what to do: subscribe,
like, comment, and we'll see you on the next IRL
podcast.
Host

Angie Martinez

© 2025 iHeartMedia, Inc.