Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to the Actionable Futurist podcast, a show all about the near-term future, with practical and actionable advice from a range of global experts to help you stay ahead of the curve. Every episode answers the question "what's the future of?", with voices and opinions that need to be heard.
(00:23):
Your host is international keynote speaker and Actionable Futurist, Andrew Grill.
Speaker 2 (00:30):
For this episode of the Actionable Futurist podcast, we're outdoors in Regent's Park in London with my guest, Stephanie Antonian. We're recording this at the top of spring in 2023, so I thought it would be a good idea for a pod walk, or a walk cast, and to discover these beautiful gardens while we talk about something that's been in the news almost constantly since November 2022. And that's generative AI and, in particular, ChatGPT.
(00:52):
Stephanie, welcome, and thanks for agreeing to come on this pod walk with me.
Thank you so much for having me.
We chatted after the event about your views on AI, which were quite challenging, and I thought, well, let's get some of these on tape. Your website says we are a private creativity house designing a world not yet imagined, and you have a fascinating background. Perhaps you could tell our listeners about how you got
(01:14):
started in this space in the first place.
Speaker 3 (01:16):
I started my career, like most people, trying to make sense of the world and what was happening in it, and why there was so much bad stuff happening, and how to not be involved in the bad and to be on the side of good. And it sounds naive saying it out loud, but that was what was going through my mind, and so I started thinking that I would go
(01:38):
into the church and that the church would know the answers. And the more I studied it, the more I realized it wasn't so clear cut. And then I did the traditional move into management consulting.
Speaker 2 (01:52):
It's a novel pivot from religious studies to Accenture.
Speaker 3 (01:56):
Yeah, I mean, the Bible is the best-selling book of all time, so there's a lot to learn about corporate strategy from it. Then I went into financial services strategy consulting and again they didn't have the answers. I mean, it's incredible how little banks know. And then I went into tech thinking they would have the answers, and it was just a series of exploring different
(02:17):
industries, looking for somebody to follow, like a person or an organization, but somebody who could tell me how to lead a good life. And sadly I caught a lot of the shiny Pokémon cards and had to accept that nobody had the answers and that I might have to make some decisions for myself. And so then there's a big career shift in moving from
(02:40):
looking at external things to looking internally, to work out what I was actually meant to do, what I actually thought for myself and how I actually wanted to interact with the world, and so much of the gray that is in it.
Speaker 2 (02:54):
So do you have a tech background, or is this something where technology was an avenue? Or because you're at a management consulting company that does a lot of tech, was tech the sort of focus, or did you just sort of fall into that?
Speaker 3 (03:04):
I fell into it, in that when I was at Accenture, I realized data privacy was going to be a big thing when I was drawing some data flows and working on a few projects there, and then when I went to the European Parliament on this youth initiative and realized, wow, no, this is really going to be a big thing, because we're having totally different conversations. And at the time there were very few people
(03:25):
working on data privacy.
What sort of year was this? How long ago was this?
This was maybe 2011, 2012.
Speaker 2 (03:31):
So this was before
Cambridge Analytica.
Speaker 3 (03:33):
Oh yeah.
Speaker 2 (03:34):
Really? That blew the whole thing up, and everyone went, Facebook did what with my data?
Speaker 3 (03:37):
Exactly.
This was well before it, when it was really niche and incredibly nerdy and not cool at all, and nobody really believed you when you were saying that it was going to be a really big problem. Then I just started focusing on that and learning more about that, and because it was such a new problem, by the time it started to hit the public agenda, then you actually were an expert.
Speaker 2 (03:58):
So what was your view when the whole Cambridge Analytica thing happened? I think it was 2018 when the Guardian exposed it. You probably saw this coming, but what was your reaction when you read those sorts of headlines?
Speaker 3 (04:08):
My reaction is, bless them, what a tough job to do, to be the one exposing it and facing all that backlash. But then, whenever the stories really break, they're just sad. They're just incredibly sad, because there's always so much more detail in the stories and you realise that it was even worse than you thought, and you see it in more detail, with lots
(04:30):
of people bringing it together, and it's just sad.
Speaker 2 (04:33):
They're sad stories really. Just as an aside, it's not about AI, but about data privacy, since we're talking about that. Do you think consumers, even after the whole Cambridge Analytica, Facebook exposé, really care about data privacy? Or was it just a knee-jerk reaction? I thought that would change a lot of things, but we seem to have sort of gone back to the way things were.
Speaker 3 (04:50):
I just think there's only so much the human brain can process, and the level of complexity in the data privacy issue now is so high that you just can't put that on someone, and we shouldn't. When I go to the doctors and I'm prescribed a medicine, I don't look at the medicine to see if the details of it are going to kill me, because I can trust that we've got
(05:12):
institutions that will do that for me. When I go into a building, I don't look to see if it's going to crumble on my head. No, there's laws, there's regulations that mean I can just live my life, and we need that with data privacy, because even the data privacy expert cannot understand the full detail of what is happening. So why would a parent of three children have time to get a
(05:35):
higher-than-PhD qualification just to understand what to do, how to search something on the internet? The idea of putting the burden on individuals is just giving up, really.
Speaker 2 (05:46):
So I want to talk a bit about your career, because you've had a... just to prove we're outside, we've got a crying baby across there as we're walking along.
Speaker 3 (05:53):
She's upset because of what she's got to do about data privacy.
Speaker 2 (05:57):
So your CV reads like a who's who of tech giants: Accenture, Google, DeepMind, Google X, and now your own company, Astora. Perhaps you could tell us a bit more about your experience with AI at these leading companies and what you've learned along the way.
Speaker 3 (06:10):
I think my key takeaway that I've learned from working in these places is that it's never really about the tech and it's always about the people. You can have companies that do very similar things but hire totally different people, and the output and the impact are dramatically different. Before, in my younger years, I used to be very focused on the
(06:31):
tech, on what it means, on the implications, on the capabilities, but now I'm just a lot more relaxed. That's not how you make successful innovation, just by focusing on the capability.
Speaker 2 (06:43):
And what I find also is, I do a lot of work talking to sixth formers about careers, and someone will say, what if I'm not really interested in tech? And I say, well, in every industry there are so many different requirements. So data privacy, AI, cyber security, thinking like a criminal, thinking like AI thinks: you need those sorts of deep thinkers and diversity of thought. So what would be your message,
(07:04):
because you're not a tech expert, or you haven't studied technology as far as I know? What would your advice be to people who are saying, I want to get into tech, but I'm not a techie?
Speaker 3 (07:13):
I mean, I think the key question is, what is tech? What are you defining it as? Right now, it impacts everything and it touches on everything. I mean, the advice that I give to anybody younger in this day and age, where the system is crumbling and things are changing constantly, is to move from thinking externally, of I'll do this job because then I'll get that opportunity, and to
(07:36):
start going internally. So what do you actually want to do? What's actually interesting to you? And it might seem nonsensical, it might seem very random. Maybe you want to do puppeteering, I don't know, but it's just not impossible that that's the path that gets you the breakthrough in this day and age, because maybe that's the missing link for the
(07:56):
innovation or the tech company, in that they need to turn it into a puppet, and that's your path. So right now, because the whole industry is so shaky, it's a really cool time to be starting a career, because it's almost like we've taken all the guardrails away from you, where you can't lean on external things and you actually have to
(08:18):
do the work to find out what's true to you and what's interesting, and there are no guarantees about what's going to happen with that in the future.
Speaker 2 (08:25):
And so what are the sorts of things you focused on at those companies I mentioned? Your expertise and your interests are in ethics and integrity and privacy and all those sorts of things. Give us an example of the things you worked on at those different companies that you can talk about.
Speaker 3 (08:38):
That's a really interesting question, because the type of work I've done has been very different in each of the companies. So, for example, at Accenture I was a strategy consultant, mostly working on financial services M&A deals. Then at Google I was an analytical consultant, so I was mostly looking at business strategy there too and analysing
(09:00):
data. At DeepMind I was on the ethics team, and at X I had a whole bunch of roles, but it's always been different, and I wouldn't describe myself as somebody who's interested in ethics or in that field. I guess the traditional topics now, if I can say that even though
(09:22):
it's an industry that's barely been around, are the issues of algorithmic bias and fairness and economic justice and privacy. Pretty much all of the biggest issues in humanity, repackaged into some new form to make it look like it's new. Yeah, there's a lot of work to do, because it's everything that's ever impacted humanity.
Speaker 2 (09:42):
Susie Alegre, who's been on the podcast before, she's a human rights lawyer. She was actually going to be at the Castro event and appeared on video. I think you two would really enjoy meeting each other. She's grappling with these problems too, and she knows the law. And yeah, with AI now coming in, I'm going to talk about some of these deeper AI things in a minute, but it really does start to push the boundaries of what's reasonable and what data
(10:05):
is out there and what you can do with it. What's the biggest issue in ethics and AI at the moment, would you say?
Speaker 3 (10:10):
I think the biggest issue now is the necessity of the tech we're creating, and that can be an umbrella term for so many other issues. But the question we're not asking is, do we as a human race need this tech, yes or no?
(10:30):
And why? That's the biggest issue, but we're not incentivised to tackle it. So instead, the industry is incentivised to talk about problems that have always existed. Racism, sexism, inequality have always existed, and society has tried to find ways to move forward on them. But that's not a good enough answer to be involved in that
(10:53):
debate. So it's like, let's package it as algorithmic bias and let's make it an issue that only tech can deal with, where we'll ignore all the laws that have already been created to try to minimise it. We'll blow past them with loads of opaqueness so no one can really see what's happening, and then we'll make it really overwhelming and panicky and get everybody to focus on that
(11:16):
instead of just following the laws of the land.
Speaker 2 (11:19):
Well, the laws of the land, and this is something that Susie talks about, they're not able to keep up, because AI and all this new tech seems to find new ways to bend these laws.
Speaker 3 (11:28):
I don't 100% agree, in that if you take algorithmic bias, and we're in the UK now, there's really great anti-discrimination law that already exists, because it's not a new issue. What's happening is that loads of the tech companies are totally breaching UK laws, but because they've repackaged it as something new, no one's going after them for it, but it's
(11:50):
always been there. You can't discriminate on life-impacting decisions like jobs based on race or anything, but it's basically just that the industry is very good at bamboozling people into thinking we've got this new issue and it's totally new and we've all got to run to it, and it's just, this is the issue of our time. And it's not.
(12:10):
It's always been there. It's always been a problem. And there are, you know, they're not good solutions, but there are better solutions. And you have to first abide by the laws and then do something extra.
You're saying the laws are there?
Yeah, the anti-discrimination law in the UK is really good and it took like 10 years to get into place.
Speaker 2 (12:32):
So how are the regulators not seeing that some of the AI is breaching laws that already exist? Are they just not able to marry up that it is actually a discrimination issue?
Speaker 3 (12:43):
I think it's that the industry and some very big tech enthusiasts have just done a really good job of trying to make it seem like a different problem.
Speaker 2 (12:53):
They're doing a great job of that. Who unbundles that? Is it people like you and podcasts like this that basically call it out and say, hey, you should be looking at this and not getting AI-washed?
Speaker 3 (13:02):
Probably. I mean, I'm not actually too worried about AI anymore. The more people get involved in this debate, the more they realise there's nothing that new about it.
Speaker 2 (13:12):
You've worked for some amazing companies: Accenture, Google, DeepMind, Google X. Tell me some of the fond memories you've had about working at those amazing companies.
Speaker 3 (13:21):
Yeah, so I've definitely caught a lot of the shiny Pokémon cards. Working at Accenture was really fun, especially on the grad scheme, and it was where I learned to suppress everything about me and build the right skill set and learn how to make slides and build models in a way that really is robust. And then, when I went to Google, it was my first time of
(13:44):
letting go of some of that suppressed identity. And when I joined Google UK, the lead was Eileen Naughton, who was just magical. She was just one of the most incredible leaders I've ever met in my life, and just from who she was, it was the first time that I'd been somewhere where it was like, it's OK to be you, and that was really transformative.
(14:07):
And then I went to grad school, and then we spent some time at DeepMind. And then, when I ended up at X after taking a career break, it was just working with the most incredible, loving, kind people who really took it upon themselves to help build my confidence, and were the ones who encouraged me to build Astora. And, like, my heart fills with gratitude about that time at X.
(14:31):
It was some really cool experiences and I'm really grateful to have had them.
Speaker 2 (14:35):
So for someone who's listening to this now, who's a grad working in tech, what would be the one piece of advice you'd want them to have?
Speaker 3 (14:42):
You learn everything at the right time for you. So sometimes when you're in a grad scheme you're a bit beaten down, because it's quite hard and you're always learning and you're not really, you know, maybe you're not writing essays about the meaning of life, but there's so much value to learning that skill set. So I think one of the things that I'm learning a lot more in
(15:05):
my life is the power of timing and knowing when to sit and when to leave. And it's OK to go through times in your life where you're just building up the skills, because you actually know nothing. And that's what a grad scheme is, and that's what's so good about the Accenture grad scheme, because you're expected to know nothing but you will know by the end, and that was really
(15:28):
valuable. And so now, even in setting up Astora, it's the combination of all of those skills and all of those different experiences that's helped me to be able to come up with these wacky ideas, build the client base, execute deliverables that are at a really high standard and bring people along. So I guess I'd just say chill out, it's all going to be cool.
Speaker 2 (15:50):
So let's now move on to some of the AI platforms that have been in the news lately: ChatGPT and OpenAI and all these are the flavour of the month. What do you think we're getting right now, and how can AI be used for good?
Speaker 3 (16:01):
When it comes to these platforms, I guess the way that I look at it is outside of right and wrong, because a lot of the work is well-intentioned, and it's hard to know in the moment what's right and what's wrong, because we don't know how it will play out, to give such big value judgments. But what I think is predictable, that's happening, is the rate
(16:22):
of self-implosion of these platforms, and I think it's totally predictable when you look at human nature and human behavior and systemic trends of perverse incentives in the economy and things like that. And so it's entirely predictable what's happening now.
(16:42):
Speaker 2 (16:43):
So you mentioned that they're going to implode. What's going to happen, and what's going to be the trigger?
Speaker 3 (16:47):
I think there is an existential risk with generative AI, but I think the existential risk is to the AI industry itself. I think that the hype of this has gone so far and the utility of it really isn't there to justify it, and so it will self-implode with people losing faith, and we will go into an AI
(17:08):
winter on it. But from that, the reason why I don't mind so much is that from that, then we'll readjust to using AI for applications that are actually really important, so maybe we'll be using components from these platforms. You know, exactly how that will look, I'm not sure.
Speaker 2 (17:25):
I think at the moment it's really good for journalists to write all these questions and all this copy about what AI can and can't do. There was something on John Oliver, who has this TV show in America called Last Week Tonight, and he did a montage of all the newscasts going, oh, that sentence was written by ChatGPT. Now, I've worked out how to explain how ChatGPT works in
(17:50):
a sentence to my mum: it basically predicts the next word in a sentence. So it's not as smart as people maybe think it is; it's able to do things at scale. So guess what, when I typed last night "who is Andrew Grill" into ChatGPT, it told me I'd written books that I hadn't and won awards that I haven't either. So there's misinformation in there. So I think you're right, we're going to have a bit of a pendulum swing where people go,
(18:13):
oh, this is horrible, but also, why don't we cure cancer? Why don't we cure climate change through using AI?
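As a small aside on the "predicts the next word in a sentence" explanation above, here is a minimal Python sketch of that idea using nothing more than word-pair counts over a tiny made-up corpus. Systems like ChatGPT use large neural networks trained on vast amounts of text, so this only illustrates the principle, and the corpus and names here are invented for the example.

from collections import Counter, defaultdict

corpus = "the future of ai is uncertain and the future of work is changing".split()

# Count how often each word follows each other word (a simple bigram table).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    # Return the most frequently observed next word, or None if the word is unseen.
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Greedy "generation": keep appending the most likely next word.
sentence = ["the"]
for _ in range(4):
    following = predict_next(sentence[-1])
    if following is None:
        break
    sentence.append(following)
print(" ".join(sentence))  # prints: the future of ai is

The point of the toy example is that nothing in this loop checks whether the output is true; it only ever asks what word is statistically likely to come next.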
Speaker 3 (18:20):
And also, why don't we improve our filters for this information? So my biggest issue with ChatGPT is that humanity's biggest problem is the inability to differentiate between fact and fiction. That's what has led us to war, that's what's caused famine, genocide, the worst parts of human nature and human history,
(18:41):
and so when you build a tool that makes it harder to differentiate, you're obviously on track for destruction. I mean, that doesn't take a genius to work out, and if you look at where we are now, with the internet, with all the digital trends that have happened, we have so much
(19:02):
information, but we don't really know how to curate it. So one thing that is valuable from ChatGPT and all these discussions is that it's really bringing to light that we're actually overwhelmed with information and we don't have good filters at all. Is ChatGPT the solution? No, no, it's not. But it is showing us how important getting this right is.
Speaker 2 (19:23):
Back to your point about data privacy, where you can't expect someone with three screaming kids to understand GDPR law, how can we possibly teach the average person on the street what to look for? People are getting phished and email spam, and now banks are saying, hey look, this is the way you can tell if it's a scam. How do we educate people about AI, because it seems so perfect?
(19:44):
Well, who says it seems perfect?
Speaker 3 (19:46):
I mean, my response to that was going to be, why do we need to train people and not train the AI? The problem with ChatGPT and generative AI now is that it doesn't want to tell you the truth of when it doesn't know. And we've seen this message repeated throughout history, whether it's the story of Adam and Eve,
(20:07):
whether it's research in psychology or quantum mechanics: you have to embrace uncertainty. And so I was telling you this story at the event, but in the story of Adam and Eve, the serpent comes to Eve and says, if you eat this, you'll know everything, and her thinking that she should know everything is what pushes
(20:30):
humanity outside of the Garden of Eden. All we've done with ChatGPT is innovated the serpent. We're lying to ourselves and saying we're going to invent technology that's going to tell us everything, but that can't be done. So we can fix generative AI and we can fix the platforms if we create a backbone to the systems where we say, this is what we
(20:50):
know, this is what we don't yet know and this is what we can't know, and we create systems that can say, hey, I don't know. Right now, the way the industry is set up, because it's selling the serpent, you cannot throw your hands up and say it doesn't know. So that's where we've got the challenge. And so I often get
(21:10):
asked, who's going to win, Google or Microsoft? What's going to happen with search? And it's like, I don't know why you can't see that they're both going to lose, because the problems in search right now are data privacy, phishing, exploitation, fake news. They're real problems that are ruining people's experiences.
(21:32):
That means they can't trust what they're finding. That's taking the technology back, and these generative AI tools don't solve that at all. So no, all that's going to happen is they're going to argue amongst themselves. Meanwhile, with the advertising model and the weird content from interacting with people, the answers are going to get worse and worse and worse. People are going to trust them less and less and less, and, lo
(21:54):
and behold, a third party who's going to actually solve the problems of the day using AI is going to rise, and everyone's going to go to that.
Speaker 2 (22:01):
So this was the crux of why I wanted to have you on the podcast, because we had this sort of very intellectual discussion at the Castro event, which was like, wow, someone who challenges my thinking. So just back to the issue where the AI doesn't know that it's wrong, or it's hard to tag that something is wrong, or we know it's wrong. Who decides that it's wrong? And if it's a human, how do we trust that human?
(22:21):
Are we in a bit of a vicious circle?
Speaker 3 (22:23):
Well, that's where I think we've got to collaborate a little bit more, because it shouldn't be the tech companies; they don't have to do everything. And so what I suggest in the essay on generative AI that I wrote is that we should have two open-source lists, one that's scientific truth and one that's
(22:44):
law and social truth. And they are owned by different groups in society and they're open source for people to see, because the thing about truth is that it evolves. So, you know, we once thought the earth was flat. We don't anymore, but we don't know that it couldn't be a new shape. I mean, who knows? That's the truth about science: it's always evolving.
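To make the two-list idea a little more concrete, here is a minimal Python sketch of how an answer pipeline might consult openly maintained lists of the kind Stephanie describes and label a claim as established, not yet known, or simply outside what the lists cover. The list contents, names and structure are my illustrative assumptions, not anything taken from her essay.

# Illustrative sketch only: two openly maintained "truth lists" and a lookup
# that labels a claim before an AI system presents it as an answer.

SCIENTIFIC_TRUTH = {
    "the earth is roughly spherical": "established",
    "what dark matter is made of": "not yet known",
}

LAW_AND_SOCIAL_TRUTH = {
    "uk law prohibits discrimination in hiring on the basis of race": "established",
}

def label_claim(claim: str) -> str:
    # Return 'established', 'not yet known', or 'unknown to these lists'.
    key = claim.lower().strip()
    for source in (SCIENTIFIC_TRUTH, LAW_AND_SOCIAL_TRUTH):
        if key in source:
            return source[key]
    return "unknown to these lists"  # the honest "hey, I don't know" answer

print(label_claim("The Earth is roughly spherical"))       # established
print(label_claim("What dark matter is made of"))          # not yet known
print(label_claim("Will this product succeed next year?")) # unknown to these lists

The design choice being illustrated is the separation of concerns: the lists would be owned and updated by scientific and civic institutions, while the AI system only looks them up and is allowed to say it doesn't know.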
Speaker 2 (23:05):
I guarantee that someone listening to the podcast will vehemently disagree with you and say the earth is flat. But I suppose, being open source, with the will of the crowd, if everyone says no, it's round, then probably it's round.
Speaker 3 (23:12):
It's not the will of the crowd, in that it's the will of the established institutions.
So we do have established scientific institutions and there are things that we accept as fact, and the only way we progress in society is to say, this is what we accept as fact, this is what we don't yet know, this is what we can't know. Let's focus on what we don't yet know and try to move that
(23:36):
forward. It might go back and change the fact, and we'll do that, but there has to be some type of path, some type of stepping stone to progress. Okay, the problem is that now we're debating whether the earth is flat or round. That's the debate of our time. And so what type of scientific progress does ChatGPT enable? Because it looks like it's actually taking us backwards,
(23:59):
because we're here debating about things that we actually know to be true, yeah. It's remarkable to me that right now, investing in progress in technology means denying science and going backwards in science, and so we should probably take a look at that chasm that we're creating and ask ourselves if we're really on the right
(24:23):
track, and that's the fundamental point, that people now are trying to trick ChatGPT into saying things that are wrong.
Speaker 2 (24:29):
But I haven't really heard Sam Altman and others really say, well, this is how we're going to fix it. I think what they've done is sort of opened it up, opened the Pandora's box, and sort of said, well, how are people going to use it and where do we need to put the guardrails in? Since they launched in November, I think they've done a lot of work to remove the ability to, for example, explain how to make a bomb. I think if you asked, can you make a bomb, back in November it told you, and now it says making a bomb is
(24:50):
probably not a good idea.
Speaker 3 (24:52):
That is good and it's a start, but you won't solve the problems until you ask other people for help. So you ask scientific institutions, you ask policymakers, lawmakers, and you accept your role as a company operating in countries that have their own social contracts. It's actually not for them to decide what the truth is. There's not that much that we would say we know as fact.
(25:14):
We're still talking about the low-hanging fruit. We're not talking about the actual issues of the day where we don't know. Like, that's the truth. Let's hold our hands up and say we actually don't know how to move forward in the way that is best for everybody in society, and we're trying to work it out. If you had an algorithm that could show you some range of the
(25:34):
debate and help you understand that this is all to play for and changing and the defining issue of our time, so stop focusing on the earth being flat and let's focus on how we improve people's rights, then we'd actually be able to evolve. But that's not for the tech companies to come up with. That is for civil society to do, and the problem is so many of
(25:55):
the tech companies are scared to really embrace allowing other people to make decisions, and they don't see that that is the saving grace for their platform.
Speaker 2 (26:03):
There was a letter a few weeks ago, written by a number of prominent people, saying they want to pause the development of ChatGPT or other generative AI platforms beyond GPT-4. So what's your view on that?
Speaker 3 (26:16):
I mean, I love the letter, because I think the value of the letter is to just put in writing the "I told you so". And for that, I think, yeah, there is good value in that: you want your name down to say, hey, I just want this on record, I told you. But other than that, I'm not sure about it, because the people who are signing it have spent years talking about how this is
(26:38):
going to be terrible. So I don't know what six more months is going to do for somebody to listen to them, but they have been saying this consistently now for a while.
Speaker 2 (26:49):
But how do you just turn it off? I mean, who could regulate that? Because the servers are in America or in Europe. Could someone say, you need to turn this off because it's going the wrong way?
Speaker 3 (26:59):
I think that we consistently underestimate what people want and what people's personal power is, and so when I say turn it off, it's just that people don't need to use it. So I don't need Microsoft to turn it off, I just need to not use it.
Speaker 2 (27:17):
It's a boycott, basically. You heard it here first on the podcast: Stephanie Antonian is calling for a boycott of ChatGPT.
Speaker 3 (27:24):
I wouldn't say I'm boycotting it. I'd say it doesn't help me in my life, so I don't see a use for it. But I'm not angry about it enough to be like, boycott, boycott, boycott. What I'm saying to you is, as just a normal person, given the needs in my actual life, this product doesn't help me at all.
Speaker 2 (27:43):
Do you think there's a time it would ever help you?
Speaker 3 (27:43):
Maybe, if they solve some of the bigger issues, if they could tell me what is true, at least by this person's standard, what isn't true, what isn't known. Maybe.
Speaker 2 (27:55):
And what has to be done to get there? Do they have to pause themselves and go, okay, let's not develop ChatGPT 5. Let's actually work on the guardrails. Let's work on a platform to have these open-source lists come in there. Why aren't they thinking about it? Or are they, and they're just not telling people because they're doing the hard work, so that they become, or keep being, good corporate citizens?
Speaker 3 (28:14):
Firstly, it takes time to do things. There's a reason why there's the red tape of bureaucracy, because that's how you keep democracy safe. It takes a lot of time, it's very complicated. But the other thing, which I mentioned in another essay, is about paying attention to what the economic incentives are. So right now, there is a lot of money made on ads that say
(28:36):
things like the earth is flat. They do still make money from it, whether it's intentional or unintentional, which, to be fair, it is largely unintentional. They are trying to get that off. But fake news and misinformation and bad actors do comprise quite a big amount of revenue, and so there just
(28:56):
isn't the economic incentive to fix this.
Speaker 2 (28:59):
It's the argument, I suppose, that government rakes in so much money from alcohol and cigarettes and other things that may not be great for you, that they say, well, to stop it would just be financial suicide because we get a lot of money in tax from it.
Speaker 3 (29:11):
So, similar argument, what's got to give? I would say it's similar to the Twitter argument. So when Musk was saying there was such a high percentage of fake profiles, which is great because he's made it private, so he actually can do something about it. But if that's happening on other platforms, there is a shareholder responsibility to not solve that problem, and that's where things get really
(29:31):
dangerous. I mean, how do we change it? The markets will correct themselves, won't they? I mean, again, I'm not too worried about it, because there's only so much you can exploit people before they rise up against it.
Speaker 2 (29:44):
Now, you're one voice on this, and having you on the podcast, and the essays we'll talk about in a minute, is one person doing that? Are there multiple Stephanies around the world that are calling for change? And will you be able to tip the scales and have people listen to you?
Speaker 3 (29:58):
There are lots of people. It's interesting, because sometimes you have to take the risk alone, and then, once you do that, you find that there's loads of people on the other side. But I don't know if it's my intention. It's not my intention to tip the scales. It's just my intention to be a bit more truthful to myself
(30:19):
about what I'm actually seeing and what I actually believe, and understanding, you know, personally where I want to design AI products, or what area of the industry I would want to build my own career on in a time that's so unstable.
Speaker 2 (30:35):
One of the reasons we're talking today is you mentioned before the essays. You've written four essays on the website, but you were reticent about even publishing the first one. So talk me through how they came about, what's been the reaction, and what's coming next.
Speaker 3 (30:49):
Basically, it was just me taking time to make sense of what I was seeing in the present. So it's interesting, because a lot of people say you can't predict the future, but actually prophecy is always written in the present moment. So it's about how well do you understand what's happening today? And the better you understand what's happening today, the better your implications and so your predictions are.
(31:10):
So George Orwell is talking about what's happening today and then giving the implications from it, and what I realised was I actually was so disconnected from even accepting what I was processing as happening today. Because, even though, you know, when I was working in AI ethics, I felt like there was something wrong,
(31:31):
I actually didn't have the space or time to process what that was. Like, why am I not happy here? Or why do I think these issues are bad? And it's very difficult in this industry, because there's such an over-intellectualisation of it, and everything becomes so academic and so detailed that
(31:55):
the confusion is just so sophisticated that it's really hard to know how to come down. And it was, you know, my biggest issue with the AI ethics industry was it was just a bit mean. It didn't come from a place of love, at least in my experience with it, because obviously it's a very big industry now and it's very different. But it was an egotistical thing about being the ones who set
(32:15):
the rules and debating all day long about what people's rights should be, without focusing on the actual actions and how you impact people's lives, and it just didn't seem very loving.
Speaker 2 (32:26):
Is that across the industry? Or are people drawn to that? Because it could be like maybe why people want to go into law enforcement: they want to have a level of authority.
Speaker 3 (32:34):
I mean, no, it's not true for everyone. There are also really amazing people working on it. It's just like any industry, there's a whole mix, but it just happened to be, you know, where I was placed and what felt a bit weird. But it was also that, you know, love is not a topic right now in AI, which I really
(32:55):
do think it should be. But it's not an intellectual topic to say, well, you know, if you want to build a company that wants to have a positive impact in the world, well, how positive is the impact you have on your employees? Are they happy, first?
Speaker 2 (33:09):
All the AI experts I've spoken to have said that the one thing that AI will never do is feel empathy or love, and that's probably a good thing, because we need the humans to be doing some things and checking the AI that's in there. But where does empathy fit with AI and ethics? You're saying we need more of it.
Speaker 3 (33:23):
Well, I think it's really good that AI will never be able to love or feel empathy, because it's a machine, it's not a human. But I also think that that is one of the best bits about being human, our capacity to love and support others, and so we should be building AI that helps humans along the way to feel more love and to feel more empathy.
(33:44):
But we're not going to get to that stage until we have teams of people that value their own capacity to love.
Speaker 2 (33:52):
So in one of the essays you talked about the link between AI and humanity and people's own self-worth. And that's again one of the reasons we're talking, because I haven't had people talk about this in a very non-technical way. And self-worth and self-importance and self-awareness are also very important. Where's the link there?
Speaker 3 (34:08):
The link is that the more you're open to love, to loving yourself and to loving others, the higher your self-esteem. And it's not ego or vanity, it's where you see yourself genuinely as part of the collective. So you've been able to let go of needing to think that you have to be exceptional to be worthy, and to do all these big displays of dominance and power for people to like you, and
(34:31):
accept that, hey, actually I am worthy just like everybody else, and we're all connected and we're all together in this. And so what you see is, you know, a lot of studies done now show that the higher somebody's self-esteem, the more open they are to loving themselves and others in life, and the more connected they are to everybody else.
(34:52):
And what I found in my own career and in my own journey, and I'm really not speaking for other people on it, but what was driving me to have such an exceptional CV was insecurity. Really, what was driving me behind always being exceptionally overachieving and always wanting to do more was a lack of self-worth. Things in my life imploded and then I stopped working between
(35:15):
DeepMind and X, stopped working and went into palliative care because my father was sick. It was the first time of just being able to make a decision about what I actually wanted to do in my life, not for my CV. At the time I thought I was just blowing my whole CV up, but there was now something more important. And that was a big shift, moving into who I just wanted to be and raising
(35:40):
my level of self-esteem. The more I work on that, the way that I view the AI industry is totally different, because I also used to think, oh, I have to work really long hours and I have to do this because the future of humanity hangs on me. And then I read this book and it was like, that's an inverted ego: who do you think you are that you're going to save
(36:01):
humanity? And I was like, wow, oh my gosh, that is just a very overachieving, exceptionalist ego that's pretending to be so humble. Once I let go of a lot of those things, then I saw that there was this huge hysteria in the industry that's really nonsensical and that totally bets against humanity and
(36:23):
everything that we're grateful for. And then I sort of went on my own personal journey of just letting go of a lot of limiting beliefs and feeling like I had to be tied to certain things. What's interesting is that in doing that, I then met lots of other people who've been doing that and are on that path, and that's how I also ended up at X, which was the most amazing time ever, with really incredible people who have such a strong
(36:46):
capacity for love, and so it's probably no surprise that they've created so many big innovations that have really shaped society. In thinking through all of that, the first essay that I wrote was really making sense of my own journey and what I had been through in this experience in the AI world and what I wanted to focus on and work on.
Speaker 2 (37:07):
As I alluded to before, you were a little bit hesitant in even publishing that. You thought there might be some backlash.
Speaker 3 (37:12):
Talking about that, oh, I was petrified. Because I think when people tell you about their stories of personal growth and stuff, you only get the high level, where it just sounds like people skipping through fields. But actually it's sometimes absolutely horrific. And when I was writing it, it was so painful to write the essay, but then I
(37:33):
had huge amounts of fear of putting it out, because I used to always write essays internally to be like, oh, this is going to happen or there's this issue, and I never had the courage to say something externally, because so much of my identity was in these big brands. So I was like, I'm a representative for you. And you know, like Google, I love Google, I love them so much.
(37:54):
It's taken me a lot of time to reconcile that the way I can honor the Google founders is to take their key lessons and then apply them to something else. But there was a real, like a real fear in going against the industry. And also because when you start talking about things like love, people do sometimes look at you like you're really
(38:15):
silly. You know, in the midst of an over-intellectualized debate, when you start saying that it's just a bit mean, people look at you like you just don't know what you're talking about.
Speaker 2 (38:26):
So where does AI fall down? If it's perpetuating this anti-love or low self-esteem, how's it doing that? Give me some examples that we can latch on to.
Speaker 3 (38:35):
Okay. So I would say that right now, we're building AI on the wrong paradigm, and so that's why we have huge rises in productivity but also huge rises in depression, amazing investment in health tech but also huge rises in demand for euthanasia, lots of rises in connectivity, and then also lots of rises in
(38:56):
loneliness. So what's happening doesn't really look like it makes sense in terms of progress, when you start looking at all the numbers and the trends. And if we start looking at why, then my theory is that we build on the level of action. All of our innovation is looking at action, but underneath actions are thoughts, and underneath thoughts are
(39:18):
emotions. And so, you know, the most influential scientists, like Einstein, Tesla, Lovelace, they are all talking about consciousness and something happening much bigger, and that that's where you look for the big spark. We've just taken that away and made it really, really basic, and so we don't understand why, when we intervene on action, it doesn't work.
(39:39):
It's because we have an option to intervene on emotions. When you look at some of our most popular apps, like Instagram and things like that, what the algorithms are doing unintentionally is realizing that if it impacts your emotions, it will hit its optimization faster. So if you think, okay, we've got fear to love on a scale, if I make you feel guilt and shame, you're going to
(40:02):
click more and you're going to spend more time on it. And so, because we're not paying attention to it, what we've built are all these systems that push us to negative emotions, and the problem is, once one of those negative emotions becomes our base emotion without intervention, that's where we are. What it means is that there's a really big opportunity for the future of AI in that, if we recognize this is all happening,
(40:24):
then we can just ask a paradigm-shifting question, which is: how do we build AI applications that move humans from fear to love? What do applications look like that move people the other way? And how do we build systems that help people realize their own capacity for empathy, their own capacity for love, where
(40:47):
they can then slowly show up as more creative, more innovative and more socially focused in a way that's actually authentic and real? So I think the current AI paradigm is breaking itself, because it's not true to human nature, it's not true to humanity. But that's good, because let it break, and meanwhile
(41:13):
other people are working on this new wave, which will be something much more real and valuable. And now, because of the letter, we've got the list of people to go to fast.
Speaker 2 (41:24):
So the problem you mentioned before is that at the moment, misinformation pays, just like crime pays, because people will click on misinformation. How do you make love pay with AI?
Speaker 3 (41:34):
Firstly, I think there's nothing more powerful than love, and actually people are incredibly willing to pay to make the pain stop. When you're creating new industries, you're not going to find the existing models. It's funny, because I often get a lot of pushback, like, well, you know what they say, you should invest in the seven deadly sins because that's how you generate the highest return. And I'm like, okay, cool, that's the devil.
(41:54):
I'm not saying you're wrong, I'm just saying, is it just a given that we agree with that? And you can have the devil, but you also need the angel, or whatever language you want to put around it. There's a duality to it that's important, but we've just blasted through the opposite and just been like, double down on those sins. And it's like, oh, do we really agree?
(42:17):
Like, do we want to talk about this? Maybe there's an alternative? And I think, why wouldn't we just give people an alternative and see what they pick?
Speaker 2 (42:26):
So I want to tie this back to some of the other guests we've had on the podcast, my good friend Lynn Gribble. Dr Lynn Gribble, in Australia, when ChatGPT came out, we did a very quick podcast about plagiarism, and I thought she'd be all over it, saying this is horrible. She said, no, this has been around for years. People like cheating and getting away with things. What we need to do is teach our students and our employees about ethics and integrity.
(42:46):
Is that a discussion that we need to have more of?
Speaker 3 (42:48):
Yes. It's like, humans are amazing and we actually want to be good people and we want to do good things, and reshifting it to what the best bits of humans are, and trusting humans, is going to be where we make the biggest shifts. For example, somebody has invented an anti-bullying tool,
(43:08):
a really young, bright spark. She's a Rhodes Scholar who was at Harvard College. What it does is, when you send a message, if it could come off as bullying, it says, hey, this might be bullying. Are you sure you want to do that? And it's like, those shifts in going from scolding humans to actually just helping nudge humans to be the best they can be is really where we're going to see huge leaps in innovation and
(43:31):
progress.
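As a rough illustration of the kind of nudge Stephanie describes, here is a minimal Python sketch of a pre-send check that flags a message which might read as bullying and asks the sender to confirm before it goes out. The phrase list, threshold and function names are invented for illustration and have nothing to do with the actual tool she mentions.

# Illustrative sketch of a "this might be bullying, are you sure?" pre-send nudge.
# The flagged-phrase list and the check itself are invented placeholders.

FLAGGED_PHRASES = ["shut up", "you're useless", "nobody likes you", "idiot"]

def might_be_bullying(message: str) -> bool:
    # Very crude check: does the message contain any flagged phrase?
    text = message.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

def send_with_nudge(message: str, send, confirm) -> None:
    # Nudge the sender instead of scolding them, then respect their choice.
    if might_be_bullying(message) and not confirm("This might come across as bullying. Send anyway?"):
        print("Message not sent.")
        return
    send(message)

# Example usage with stand-in callbacks: the nudge fires and the sender declines.
send_with_nudge(
    "You're useless at this",
    send=lambda m: print("Sent:", m),
    confirm=lambda prompt: False,
)

The point is the framing: the check does not block or punish, it just surfaces the question and leaves the decision with the human.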
Speaker 2 (43:36):
And is it small companies, like this lady who developed the app, that are having a eureka moment? I mean, most of the big tech companies we think about all started in a garage somewhere with a crazy idea. Where's the next phase of good AI innovation going to come from?
Speaker 3 (43:43):
I think it's going to come from really small, unknown groups that are outside of the incentives. The other thing that I think is really cool is that in the first century, if you look at the economics of the first century, what helps Christianity thrive is that there's a huge amount of patronage money that enters the system, and people are disappointed and disillusioned with the system.
(44:06):
They're angry, they've waited a long time, like they're fed up, and patronage money comes into the system and then there's this viable person to catalyse everybody. And it's not too dissimilar to what's happening now, where people are not happy with the system. There are high levels of despair, high levels of disappointment,
(44:28):
and also huge amounts of patronage money coming in, and they're looking for the smaller players and they're looking for the socially focused players and they're looking for people who can catalyse this angst and anger into something more positive and create a lot more change and focus it on love. And so I think it's the combination of everything that's
(44:49):
really interesting now, but especially the patronage money, because money is now flooding the market and it's going to much smaller companies and smaller people that are much more socially focused than ever before.
Speaker 2 (45:03):
You say that what AI
is actually doing is writing a
love letter to humanity.
Speaker 3 (45:07):
Yes, because every time we build these systems, what it's saying is, your capacity to love is the answer, and it's the most amazing bit about you. And so we're constantly getting the same answer back. In fact, we're always getting the same answer back, whether it's any of the Abrahamic religions or any major religion
(45:28):
really, whether it's sociology and Durkheim, or economics and Adam Smith, we'll always get the answer back that the answer is love, and that what makes humans special and what is the secret to solving all the world's problems is humans' capacity to love. And now AI is saying it too, which is why we're like, quantum, quantum, quantum. Okay, forget AI, quantum.
(45:50):
We're like, oh no, ChatGPT is telling us that love is the answer. Change it, change it, change it. And now we're getting that same message from AI, which is, the secret to everything is humans' ability to love, and let's just accept that.
Speaker 2 (46:09):
Is it AI's job to fix the problems of the world? And if it's programmed by humans, is this even possible?
Speaker 3 (46:15):
It's not AI's job to fix the problems in the world. It's our job to fix the problems, and AI is a tool that we can use to do that. But if we look at the problems in the world, what is really the problem with hunger? We have enough food in the world to feed everybody in the world. You know, with money, we have enough money to end poverty.
(46:37):
With climate change, we have enough trees and enough land to fix the climate, but we don't, because we hate ourselves. Like, the core, central issue is coming from low self-esteem. So even if we wanted to build AI that could solve the world's problems, all it's going to work out is that the way you solve the problems is you get humans to love themselves, and then
(46:59):
they'll fix everything, because everything will flow much, much more easily in the way that it's meant to.
Speaker 2 (47:04):
I always try and make the podcast actionable so people can go away and do their own research. What are some things that people should be doing now to get their head around the problems that you've brought out today?
Speaker 3 (47:15):
That's a really good question. I think that anybody who's listening, who's overwhelmed by what's happening in AI, should go for a walk, like we're doing, and just look at a tree and remember that, even with AI as advanced as it is now, we honestly have no idea how this tree works. We couldn't even tell you what's going on with this tree,
(47:39):
let alone even dream about re-creating it. That is how minuscule AI's capacity is in terms of the actual wisdom that's all around us, so you can just take a deep breath in and remember it's really not that spectacular.
Speaker 2 (47:56):
You say you don't use AI, but have you been playing with the tools, and has anything that you've seen surprised you?
Speaker 3 (48:02):
I mean, I use AI in that I use Instagram, but is that using AI? Well, I mean, it's in there.
Speaker 2 (48:07):
It's in there, but sometimes we don't realise that. You mentioned before about indirectly being programmed to make us feel bad about ourselves. I just wonder whether that was its intention, because I keep reading stories about people in Google and Facebook and other companies who don't let their kids use it because they know how bad it is. All the dopamine and oxytocin comes out.
Speaker 3 (48:24):
Yeah, I mean, I still wouldn't say it's consciously intentional. I think what's happening is it's working out on the layers underneath that we're not setting it to, because we've also created just an era where we don't want to talk about emotions, we don't want to talk about the softer skills, and we want to convince ourselves that we can just pull it all down into ones and zeros.
(48:46):
But I don't think it's happening intentionally.
Speaker 2 (48:49):
Have you seen any really interesting uses that maybe our listeners haven't heard about yet?
Speaker 3 (48:54):
I need to get a headshot. Mine's like eight years old, and it's just such high pressure in the morning to have to look good enough for a headshot that could also last a decade. So I did look into doing those, but they were just a bit not quite right. But I think there will be some really cool use cases for reducing research, reducing a lot of the menial, boring tasks.
(49:17):
I think that that capability really is there and it will grow as people work it out.
Speaker 2 (49:23):
I read a lot of people saying the whole adage that AI will steal your job. I think people are now saying that AI won't take your job; it's someone who knows how to use it better than you do who will take your job. What's your view on that?
Speaker 3 (49:33):
Have you ever read
Bullshit Jobs?
Speaker 2 (49:34):
No, it sounds like a
great book.
Speaker 3 (49:36):
It's really great, and it starts with an article, and it's basically saying that we should have been working a three-day week by now with the tech advances that we have, but instead we created all these bullshit industries like corporate finance and corporate law and management consulting. And it's really interesting, because it basically says, deep
(49:56):
down, we know that they're not really needed, and so we make everybody work really long hours and pay them loads, and then we pay nurses and teachers and doctors less because they have a job that benefits society. They at least have the morality of that, so they should be punished. But it's basically that I'm not sure what will happen to jobs, because whatever will happen will be deeply emotional, so we
(50:19):
could create totally new hybrid industries, or jobs that are super unnecessary, or we could go back to jobs like being a hairdresser, which is a very robust job now, more than being an AI engineer.
It doesn't pay as well, though.
Well, it depends how long we think people are going to get those AI engineering salaries, and then also it's just about
(50:41):
what enough really is. So I'm not 100% sure what will happen in the job market. I mean, one thing that I'm really interested in now, and this is actually a bit of a tangent, is the dark ages. One of the questions that you had was about where have we seen tech like this before that's been really successful. And that question really struck me, because I was like, this isn't really new tech.
(51:02):
The way it's being presented to us as consumers is not really new tech, because it's going into search, which we've always had. So it's just up-levelling something that we have always had. It's not like it's a new service or tool that could fundamentally change it. And then, with what I was saying, what is the necessity of that? Where does it actually solve problems that we were
(51:25):
struggling with? It's like, oh, I don't really know. It might add some small incremental things, but given the wealth we're talking about and the money that's consolidated into this, is that right? And then one thing that's always struck me was, I went to the London Museum and there's this mock-up of London under Roman rule and then the dark ages, and it goes from, like, sewers and stuff to mud huts. And I'm just like, how did that
(51:47):
happen? It's so weird. It's like we forget that humans can just be like, no. And what happens with the dark ages is that people get fed up of the technical progress, because it still depends on slave labour, and so they go for less immediate tech, but tech that doesn't exploit people in the same way.
(52:11):
And so the tech that comes out of the dark ages is really good, like windmills and things like that, horse saddles, all these things that actually become really pivotal to growth, but they're doing it on a totally different ideology, which is that slavery is bad. You can't just have manual labour being something that
(52:34):
everybody else does and then you exploit on it. And there are some parallels to what's happening now with generative AI, where it's like the consolidation of wealth, and the amount that people actually have to work underneath to push that wealth up is massive, for gains that aren't that big, and
(52:54):
so why would we not be expecting people to start looking for something totally different?
Are you worried about AI?
No, I'm not worried about AI, because I think human nature is consistent and the way we'll react to it is consistent, and if it falls down, it's because it wasn't valuable to us, and if there are lessons that we need to learn the hard way, then we'll
(53:15):
learn them. I think that a lot of our fear about AI does come from our fear of death and not letting things go. So I loved working for Google. I think it's an amazing company, I think it's incredible, but also it is a fallible thing that follows a life cycle like everything else, and that's it.
(53:37):
It's the same with all these big businesses: sometimes we're just not willing to let things go, to see what they should evolve to be. Even ChatGPT, when I'm saying I think it's going to implode or it's not going to be successful, it doesn't mean that that wasn't what it needed to do in the course of true innovation.
(53:59):
It doesn't mean that it doesn't still have value. So I'm relaxed about it, because I'm just like, there's a flow and a process to life where humans always learn and move closer towards being loving, and that's just playing out.
Speaker 2 (54:15):
So, my favourite part of the show, the Quick Fire Round, where we learn a bit more about our guest. iPhone or Android? iPhone. Window or aisle? Window. In the room or in the metaverse? In the room. Native content or AI-generated? Native. Your biggest hope for this year and next?
Speaker 3 (54:29):
That more people
start talking about love.
Speaker 2 (54:32):
I wish that AI could do all of my...
Speaker 3 (54:34):
I wish that AI could reduce my self-doubt.
Speaker 2 (54:38):
The app you use most
on your phone.
Speaker 3 (54:40):
Notes and Instagram.
Speaker 2 (54:41):
The best piece of
advice you've ever received.
Speaker 3 (54:43):
Only accept no from
the decision maker.
Speaker 2 (54:46):
What are you reading
at the moment?
Speaker 3 (54:47):
I'm reading books about the science of time travel, for the next essay that's coming out.
Speaker 2 (54:52):
Who should I invite next onto the podcast?
Speaker 3 (54:53):
Sarah Hunter.
Speaker 2 (54:55):
And how do you want
to be remembered?
Speaker 3 (54:56):
As someone who loved
well.
Speaker 2 (54:58):
So, as this is the Actionable Futurist podcast, what three actionable things should our audience do today when it comes to better understanding the opportunities and threats from AI systems?
Speaker 3 (55:08):
I think the key question to ask yourself is, what do you already know that you need to do in your life to make it better, but that you don't want to do, and why? And then start looking at what is out there to help you on that journey, and what AI applications there could be that could
(55:29):
help you on that journey. And then number two is, take a deep breath in and know that you're already amazing. And then, number three, go back to human nature and read the books that are not about tech but are just about humans and how great humans are, and you can use your own experience to apply them. But in life, we're never really creating anything new, because
(55:52):
the truths are the truths. We just apply the wisdom to new contexts. So even all the essays that I've written, they're not really saying anything new. They're using traditional wisdom and applying it to new topics.
Speaker 2 (56:08):
How can people find
out more about you and your work
and also the essays?
Speaker 3 (56:11):
You can find my essays on the Astora website, and if they're interesting to you, reach out. There's a contact on the website and I would be thrilled to talk to anybody interested in them.
Speaker 2 (56:22):
Stephanie, a fresh way of thinking about AI and ethics. Thank you so much for your time today.
Thank you.
Speaker 1 (56:26):
Thank you for listening to the Actionable Futurist podcast. You can find all of our previous shows at actionablefuturist.com, and if you like what you've heard on the show, please consider subscribing via your favorite podcast app so you never miss an episode. You can find out more about Andrew and how he helps
(56:47):
corporates navigate a disrupted digital world with keynote speeches and C-suite workshops delivered in person or virtually at actionablefuturist.com. Until next time, this has been the Actionable Futurist podcast.