Episode Transcript
Speaker 1 (00:00):
Welcome to Digitally Curious, a podcast to help you navigate the future of AI and beyond. Your host is world-renowned futurist and author of Digitally Curious, Andrew Grill.
Speaker 2 (00:15):
Welcome to Season 7 of the Digitally Curious podcast. This show is a perfect compendium if you've bought the book of the same name, and if you haven't grabbed yourself a copy yet, may I suggest that's something you should consider investing in. Just search for Digitally Curious online, or ask for it by name wherever great books are sold. In today's episode, we're going back in time to an episode
(00:38):
an interview I conducted with David Shrier in late 2021, well before ChatGPT hit the headlines. We discuss his book Augmenting Your Career: How to Win at Work in the Age of AI, and the book and our discussion are as relevant as ever three years on. What I found interesting while editing the episode on New Year's Day 2025 is that it's an excellent discussion about the
(01:02):
fundamentals of AI, without the hype of ChatGPT and generative AI muddying the waters. I really hope you enjoy this episode.
My guest today is David Shrier, who is a globally recognized thought leader in technology and educational innovation, serial entrepreneur and author. The portfolio of digital classes he created for MIT and
(01:22):
the University of Oxford has engaged more than 15,000 innovators in over 140 countries and revolutionized the model for university short-course offerings. He is a professor of practice in AI and innovation at Imperial College Business School, where he directs the Translational AI Lab. We're here to talk about Augmenting Your Career: How to Win at Work in the Age of AI.
(01:44):
Welcome, David.
Speaker 3 (01:46):
Andrew, thank you for having me. I can't wait to dive in.
Speaker 2 (01:48):
It's a very relevant topic and a book for our time right now, because I was actually on stage yesterday with a bunch of accountants who are very worried that AI may take over their jobs.
Speaker 3 (01:59):
Well, I will say that the accountants are right to be fearful. There's a massive effort underway in the services industries, like consulting, accounting and law, to replace people with machines.
Speaker 2 (02:11):
So let's dive into some of the topics covered in the book. When we hear AI, everyone thinks it's artificial intelligence, but after reading your book, perhaps we should rename it to augmented intelligence. You talk about humans and machines. Is that a fair way of using the A and the I?
Speaker 3 (02:32):
Well, I would argue no. So think of AI as the superset; that's everything. And one particular kind of AI, what I think is the most exciting and profoundly impactful, is where we put people and machines together, and that is augmented intelligence. But that's just one kind of AI.
Speaker 2 (02:44):
Great book. You've written, as I said, six over the last six years. What was the inspiration behind this book?
Speaker 3 (02:49):
Well, I've been messing about with AI since 1991. And I'm also a big fan of science fiction, so I read a lot of fabulists who speculate about what AI could look like 50 years or 500 years or 5,000 years from now. But what I've been seeing, particularly over the last five
(03:10):
to 10 years, is that AI is now really getting useful and having more and more profound effects on society, and I think people are woefully underprepared for this change, and I think there is a massive lack of AI literacy. So I wanted to write a book that was really accessible, that's easy for people to read, easy to get into, but still
(03:33):
substantive. A book that still gives you the essential knowledge that you need in order to understand this technology, what it means for your job, for your future, for your children's futures, and what it means for society at large.
Speaker 2 (03:45):
The AI literacy is something I want to cover because I think it's a huge issue. Should it be taught earlier in the education system? Primary school children are taught about how money works, because you need to know that. Your book basically talks about the fact that people are afraid because they don't know what they don't know. If they understand the opportunities of AI, and we go back to our accountancy example, if they know what the issue is
(04:07):
and why they might be replaced, they can start reskilling now. So should we start earlier with AI and digital literacy?
Speaker 3 (04:14):
Absolutely. I think that we need to be AI literate the same way that we're numerate, you know, and so I think it's incredibly important: not just sort of the direct understanding of AI, but also kind of some of its indirect but high-impact offshoots. So, for example, Denmark teaches 10-year-olds how to
(04:39):
understand whether they're looking at fake news or real news by providing critical thinking skills. And so, when you think of AI literacy, you really need to think big, not small. You need to think of adjacencies, not just the facts of what's an expert system.
Speaker 2 (04:55):
So you mentioned that Denmark are doing a good job there, but who else is getting AI literacy right? And for our listeners, where could they go today, apart from, of course, buying and reading your book? Where could they go for a sort of mini course on AI to get ready?
Speaker 3 (05:09):
China is putting a lot into it, and everyone needs to pay attention to that. Singapore has an incredible array of educational programs, from very young ages all the way up through working adults, and you see interesting programs in select other countries, like Canada and the United States, and in a few other
(05:31):
areas, in terms of improving AI literacy. So the book, you know, the nice thing about a book is, you know, it's 20 quid, and there's an audiobook version done by the wonderful actor Roger Davies, so if you don't want to read it, you can listen to it on the treadmill or while you're
(05:53):
walking. If you want to go deeper, I've created a program at Imperial College together with Chris Tucci and Yves-Alexandre de Montjoye, two other professors, and it's a joint program between the business school and the engineering school that lets you understand not just what AI is but what to do about it and how to use it. It's a six-week, entirely online program, and you walk out of that with a plan, like a business plan or a strategy or
(06:16):
something that you could apply at work tomorrow.
Speaker 2 (06:18):
I just want to touch back on China, because every guest I've had on who talks about AI talks about the threat from China, and I think part of it, as you talk about in the book, is that the government has a national program around that. So what's the real threat here, and what are the other superpowers doing to respond?
Speaker 3 (06:34):
There are several threats. One of them is that China is very good at doing a coordinated response, so they've aligned government funding, and a very large amount of government funding, with private sector support and enablement and a supranational strategy, meaning they've been actively using what's called the Belt and Road Initiative to push Chinese technology beyond the borders of
(06:57):
China into numerous other nations around the world. They've also been active investors in obtaining university innovation, so they fund research all over the world at the world's leading universities and then take those inventions and innovations back to China and then build businesses around them. So they've got a very, very smart strategy. Something I will note is that the UK generates a tremendous
(07:23):
amount of AI research, but then does not do anywhere near as good a job of commercializing it as China does. So, according to Tortoise research, for example, and this is something I cite in the book, the UK has two and a half times as many articles published by top-rated AI experts as China
(07:45):
does, and three and a half times as many as France, and almost 30% more than Germany. And so the UK has global leadership in its research around AI, in creating inventions, but where it lags is in innovation, in translating those inventions into commercial practice and scale, and so
(08:06):
that's an opportunity that I think the new national strategy that the government has announced is seeking to address.
Speaker 2 (08:12):
Now in the book you allude to something you're working on with the AI Institute. Can you tell us more about that?
Speaker 3 (08:16):
Well, this is part of my small contribution to the effort, right? So I've banded together with some other folks, including a notable tech entrepreneur, Ben Yablon, and the opportunity is to build a better commercialization pathway for AI and, at the same time, to improve the governance of AI, to make sure that it's trusted, that it's ethical, that it does
(08:40):
what we want it to do and doesn't violate laws or go off and do whatever the AI itself decides to do. So that's an exciting new project which we hope to get off the ground.
Speaker 2 (08:51):
Ethics and AI is a huge topic, and I want to focus on that now because, again, talking to our accountants yesterday, I said one thing you need to be aware of is: who's programmed this? Where is the conscious bias? Because a lot of people don't realize. They think that AI just happens by itself. You've got to educate and train these systems, and that's done by humans, and humans have a conscious bias, as we all know. So ethics is a huge issue.
(09:12):
In your book you sort of talk about how Elon Musk is trying to scare us that the machines will take over and rule the world. I would hope that you're a little less science fiction about that. But where do you stand on ethics, and what should we be doing more of to ensure that the right ethics frameworks are built into every AI system?
Speaker 3 (09:31):
First of all, I actually think that, if anything, Elon Musk is understating the problem. So, you know, I mean, he does have a horse in the race, so to speak. Right, he's a major investor in a big AI company and uses AI in Tesla and his other ventures. But nonetheless, AI does have
(09:53):
the potential for significant harm to society and to humanity at large. I do not think it is predetermined. I do think that we can control our own destiny, and so that's where I may have a somewhat different framing of the problem, which is: let's all get smarter about this and then let's do something about it. Let's all work together to make AI work for the benefit of
(10:15):
society.
So, you know, the answer to your question about how we implement it. You're absolutely correct that one important thing is that we need to educate both the people who are designing the AI technologies, the technologists. We need to do a better job of having them understand AI ethics. It can't just be a module or a class that shows up once in
(10:38):
their training; it needs to be embedded into kind of everyday practice and everyday thought. We also need to train the business people and the consumers and government officials; everyone around the technologists needs to better understand what the unconscious biases are and what the implications are, because
(10:58):
one of the things that's important to know is AI doesn't just sort of spring forward from the brow of a computer programmer. Okay, it is trained by data. So the people design the training, and the people choose the data, and then the data trains the AI, and so if you pick the wrong data, you can end up training your AI
(11:19):
in the wrong direction.
So kind of one of the famous early examples was, I believe, Google's image recognition system, the image analysis system. So it's a bunch of, you know, young white male programmers aged 28 to 32, of Western European descent, in Silicon Valley primarily, who built an image recognition platform and
(11:43):
then trained it on a data set, and the data set they used was a bunch of images that had been generated of people who looked like them. And they didn't see anything wrong with that, because they didn't even notice, because they're like, oh yeah, that looks like a person. To their mind, a person equals a 30-year-old, you know, whatever, English male. And so what happened was, when the AI was then deployed, it was
(12:08):
horrible at recognizing anyone whose skin tone was not milk white, or women, or someone who's maybe of an Asian ethnicity, anyone who was not a 30-year-old white male. And one of the worst examples, one of the ones that got the
(12:29):
biggest headlines: it would routinely confuse people of African descent with gorillas. This was a big black eye for Google. It was a terrible instance of unconscious bias. They have fixed the problem, but it's an example of what happens when you don't train the people who are designing the
(12:49):
AIs, and then those people pick the wrong data set to, in turn, train the AI.
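The failure mode described above, a model quietly inheriting the demographics of its training set, can be caught before any training happens. A minimal sketch in Python, with hypothetical field names, that audits a labeled image manifest and flags over-represented groups:

```python
from collections import Counter

# Hypothetical training manifest: one record per labeled image.
# In a real pipeline these annotations would come from dataset metadata.
manifest = [
    {"file": "img_0001.jpg", "skin_tone": "light", "gender": "male"},
    {"file": "img_0002.jpg", "skin_tone": "light", "gender": "male"},
    {"file": "img_0003.jpg", "skin_tone": "dark", "gender": "female"},
    # ... thousands more records in practice
]

def audit(records, attribute, max_share=0.5):
    """Print each group's share of the data and flag any group that dominates."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- over-represented" if share > max_share else ""
        print(f"{attribute}={group}: {n} ({share:.0%}){flag}")

audit(manifest, "skin_tone")
audit(manifest, "gender")
```

Run against a real manifest, anything flagged is a cue to rebalance or augment the data before it trains the model.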
Speaker 2 (12:55):
You mentioned in passing that government officials should be across this, but I think they're probably one of the key parties to this, because I think regulators are a little bit behind: they're so busy keeping the bad guys out that they can't keep up with the latest things. And so a good example in another industry is the open banking model, PSD2, that the European and UK governments put in a few years ago, and the Financial Conduct Authority, the
(13:17):
FCA, here, I think they were one that had a sandbox. They basically said: we don't know what you're going to do with this new regulation or this new platform. We want you to try and break it. We want you to try and do everything you want, and you can do it in a sandbox where you won't get sued, you won't go to jail, because we want to see, as regulators, what the art of the possible is. So back to AI literacy. Maybe we should be running our regulators through training so
(13:40):
they understand this. Another example: the Queen's message here in the UK every year is on the BBC, and the alternative is on Channel 4, a commercial broadcaster, and last year they published a deepfake. They had an actress playing the Queen, and I use this example because it means that a public platform, like a commercial TV broadcaster, raised the issue that everything you see on the
(14:00):
screen may not be real. So where do the regulators come in, and who trains and educates the regulators so they can actually make the right decisions on how to police this?
Speaker 3 (14:10):
I actually think the FCA and the Bank of England are among the most sophisticated government regulators about technology that I've encountered, and I've worked across 150 governments, so I think I have a representative sample to say, you know, they're on the more progressive end of the spectrum in terms of their knowledge. They may be cautious to take action, and there are reasons to be cautious, but, you know, sandboxes are a good thing.
(14:34):
That is one of the, I think, better interventions that a government can take in order to understand the effects of a technology. And so, you know, that said, I mean, look, I'm going to be briefing some government officials later today who reached out to me. The UK government, I have found, is very proactive about reaching out to experts and getting input and building
(14:57):
capacity. That said, my biggest fears, and I'm going to speak globally, not specific to the UK, my biggest fears are, on the one hand, a lack of sufficient action. The US is a great example here: there isn't regulation that both protects and enables, and it's
(15:21):
important that it does both, because, you know, it's too complex a regulatory environment and there's no political will to act. Or, the other great fear I have, too much regulation, so that innovation is stifled.
So an example of this in a parallel industry was New York City.
(15:41):
New York was one of the world's top three financial capitals, and so had a lot of technology and had the potential to become the fintech capital of the world. And the regulators stepped in with the blockchain and Bitcoin revolution and created something called the BitLicense, which had the effect of quashing most blockchain-related
(16:03):
innovation in New York, which enabled London to take the lead, and Singapore and a few other places. So I'm afraid of both too much and too little government action, because, if you think about the principles of good regulation, you want it to, on the one hand, provide
(16:24):
consumer protection, right, protect society from harms. You want it to manage risk, including particularly systemic risk, so you want to avoid things like flash crashes on the stock market or what have you. You want it to promote stability, right; you don't want society to be in constant upheaval. And, and this
(16:46):
is an important one, you want it to promote innovation. And so good regulation is able to do all four of those things, and in order for a regulator, government official or policymaker to know how to do that, they need better AI literacy. And so in a lot of the online classes I do, because I've now taught 20,000 people in 150
(17:07):
countries, about 10% of these folks are regulators and government officials, and so they're reaching out ad hoc. On a more structured basis, and I commend them for this, the Commonwealth Secretariat funded 100 government regulators from different countries, 50 to take fintech classes at Oxford
(17:28):
and 50 to take fintech classes at Cambridge, particularly from developing countries. I think that that is a commendable approach. I think we need to see more of that, more efforts to provide more skills, tools and knowledge to government officials.
Speaker 2 (17:44):
I'm buoyed by that, because it's good to hear that my government is on the front foot and reaching out to experts like you, because they want to know more, because they realize this is the future.
Speaker 3 (17:52):
One of many reasons I moved to the UK from the US, and you may detect my accent is not exactly from, you know, Whitehall, is because this is a hotbed of innovation for two things that I spend a lot of time on: fintech and blockchain, on the one hand, and AI, on the other. And so I think that the
(18:15):
UK has the potential to really be an AI superpower.
Speaker 2 (18:19):
Back to the book Augmenting Your Career. What new jobs will we see thanks to AI, and what jobs do you think will go?
Speaker 3 (18:26):
Certainly a lot of AI designers, AI ethicists. We may see AI psychiatrists; it might not be called that, but effectively, once you have a machine that thinks like a human being, it may develop a personality that needs to then be nurtured. AI could get depressed. What would that look like if the transport AI got depressed?
(18:47):
Probably not very good. So we've got to sort of think about that. Interestingly enough, a good friend of mine, Tommy Meredith, who was the original CFO at Dell that helped them become a multi-billion-dollar company, now invests in a lot of strong AI companies, and he said to me, the best AI programmers they can hire out of UT, the University of Texas at Austin, are philosophy
(19:10):
majors. Okay, these are people who understand formal logic and can hold multiple ideas in their head at the same time while, you know, they're waiting for a thing to resolve or a situation to become obvious, which kind of helps you understand Bayesian math. And, in turn, a lot of machine learning systems and deep learning are built around this probabilistic math, or maths, I
(19:34):
guess, as I should say. So I do think that there are going to be some new jobs emerging. As Erik Brynjolfsson, now of Stanford, said, for every robot there will be a robot repair person, and so this is, I
(19:55):
think, a notable kind of comment. Jobs that will go away are almost everything, right? I mean, AIs are getting increasingly good at doing almost anything a human is doing. The jobs that will be most resistant are service jobs that interface between people, so restaurant
(20:15):
servers, healthcare workers that change bedpans and give sponge baths, possibly even doctors, although some kinds of doctors are getting replaced. So, for example, specialists like radiologists, who do image analysis, are getting replaced with AI systems, but primary care physicians, I think, are going to be a lot more resistant to being replaced by AI. Every job
(20:41):
category is eventually at risk. So it used to be, you would say, low-skill, repetitive tasks are things that AIs are going to replace. Now we're seeing slightly more complex things like customer service agents getting replaced and, increasingly, management consultants, financial modelers at investment banks, accountants. These other categories are starting to get replaced by AI
(21:04):
because the AIs are getting better at what they do.
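The "Bayesian math" mentioned above is just the arithmetic of updating a belief as evidence arrives. A toy worked example of Bayes' rule in Python, with invented numbers:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Invented numbers: how much should one piece of evidence shift your belief?

p_h = 0.10              # prior: 10% of applicants are strong candidates
p_e_given_h = 0.60      # strong candidates show the signal 60% of the time
p_e_given_not_h = 0.05  # weak candidates show it 5% of the time

# Total probability of seeing the evidence at all.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior: belief after observing the evidence.
p_h_given_e = p_e_given_h * p_h / p_e
print(f"prior {p_h:.0%} -> posterior {p_h_given_e:.0%}")  # 10% -> 57%
```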
Speaker 2 (21:07):
Now in the book you coin a phrase, adaptable industry. So jobs might go away, but they may morph into something else. Which industries are most able to adapt to the AI future, do you think?
Speaker 3 (21:18):
I mean, certainly. So the health services industries are going to be less impacted by AI, with probably about half the job disruption of financial services or transport. I think that the creative industries are going to be pretty resilient. People still like to see a singer perform live or an actor
(21:42):
perform on stage.
Speaker 2 (21:43):
Or a keynote speaker on stage. We don't want to get replaced just yet.
Speaker 3 (21:46):
Well, you know, but we could deepfake a video. I mean, in the COVID era, how do you know that you're talking to me and not a Dave-bot? There are certain live interaction settings where people like having a live person. But there are a lot of jobs that are threatened, and so I think people want to try and create those new jobs that will
(22:06):
be resilient to AI disruption, and one of them, I think, will be some kind of hybrid between a person and a machine.
Speaker 2 (22:11):
The book, I think, is also a great primer because it explains some of the concepts in machine learning and those sorts of things: deep learning, what qualifies as AI. You actually say that chatbots barely qualify as AI. So what isn't AI?
Speaker 3 (22:25):
Maybe I'm being unfair. Chatbots are a certain kind of AI. They definitely are AI; they're just a very primitive kind of AI. But AI is getting more and more sophisticated, and so deep learning systems are AIs that think like people, structurally. You have layer upon layer of computation in a network that
(22:45):
looks kind of like a human neural network, and in that instance those are among the more powerful AIs that we interact with. We're still not yet at kind of general artificial intelligence, or artificial general intelligence, depending on which construct you like, but people are working assiduously towards
(23:06):
that.
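What "layer upon layer of computation" looks like in code: a tiny untrained feedforward network, sketched with numpy. This is a toy illustration of the layered structure, not any production framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 8 inputs -> two hidden layers of 16 -> 4 outputs.
sizes = [8, 16, 16, 4]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Pass an input through the stack, layer upon layer."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ w + b)   # hidden layer + ReLU nonlinearity
    return x @ weights[-1] + biases[-1]  # linear output layer

print(forward(rng.normal(size=8)))  # 4 scores from an untrained network
```

Training would then adjust the weights from data, which is exactly why the choice of training data matters so much in the earlier bias discussion.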
Speaker 2 (23:06):
So I wanted to ask about that, because everyone's saying that's the next frontier, and that's when AI will think and look and act as much like a human as possible. And again, the experts I've challenged on this have said AI will never be able to love or feel empathy. But how close will it get to a human, and how far away are we from that, do you think?
Speaker 3 (23:23):
Well, so first of all, I would dispute that statement somewhat, insofar as, if it appears to be loving and appears to be empathetic, can you tell the difference? How can you tell the difference between me being actually empathetic and me pretending to be empathetic? You can't. So from that perspective, I don't know.
(23:45):
I mean, they might not love or experience love in the same way that we do, and they might actually never really. I mean, this is a philosophical question: would an AI really think, you know, as opposed to appear to think? But it's not unlike the human being who, you know, the psychopath who emulates emotion convincingly. If you can't tell the difference, they can still pass
(24:09):
through society and people won't know. So I do think we are a few years away from AGI or GAI. There's kind of a joke, which is, when you talk to people and they say, well, how far out is this general intelligence? Everyone says five to seven years, and it's been the same
(24:30):
answer for 20 years: oh, we're five to seven years away. So, you know, we're not there yet and there is more work to be done. I do wonder, if we get quantum, which is also five to seven years away from commercial applicability, whether it might create an AI powerful enough, fast enough, to be capable of
(24:50):
this general intelligence problem.
Speaker 2 (24:52):
Well, it leads me to another problem and the whole net zero debate. You mentioned cryptocurrency and blockchain and Bitcoin before, and one of the criticisms is that it just uses so much energy because it's relatively inefficient. What about renewable AI? Do we also need to think about that? If you're going to have really smart, really fast AI, it's going to draw a lot of power. How does that contribute, or not, to the net zero debate?
Speaker 3 (25:12):
First of all, it's important to remember that Bitcoin was designed deliberately to be inefficient. That's part of what drives its scarcity value. Second of all, some other cryptocurrencies, like Ethereum, are moving to different computation protocols that are a lot more energy efficient. And finally, you know, there's a huge movement to
(25:33):
create sustainable Bitcoin, meaning Bitcoin that's mined from renewable or carbon-free energy sources, like hydro or solar. So, you know, I think that I'm careful about being overly critical of something that I know for a fact is changing to reflect the current focus on the environment.
(25:54):
With AI, it's the heat: not just the energy consumption and fossil fuel burning that's used to power AI, but the waste heat that's generated from data centers that are running AI. Eventually, it could all be in space, right?
(26:15):
The most powerful AIs could just be harmlessly radiating that heat in space, and the cost of lift, of bringing things up into orbit, is getting cheaper and cheaper, thanks to Elon Musk; there he is again. And so, you know, I do think that 20 years from now we might be looking at a planet that has kind of got a bunch of AIs orbiting
(26:38):
us, where the really profound computation occurs.
Speaker 2 (26:42):
Interesting thought, AI in space. You relayed a comment that Julie Sweet, the Accenture CEO, gave you: that she, or her company, is reinvesting 60% of what they're saving due to AI into reskilling their workforce. Sounds like a really good exchange. Should more companies be doing this, and have you seen examples of that beyond Accenture?
Speaker 3 (27:01):
Absolutely, more companies should be doing this, because, you know, if you think about what makes a company go, it's not just skills but culture. And, you know, this is what Accenture noted. They said, look, we spend all this time finding people who are Accenture people, who have a certain personality, approach and problem-solving mindset and intellectual curiosity, whatever
(27:22):
goes into defining an Accenture person, and they're all bright, right. And so we get all these bright people, and then why would we then throw them away when we figure out how to automate a job? Why don't we reskill them into some other job?
And so that's what they've invested in doing, because they recruit tens of thousands of people a year, and this saves
(27:43):
them significantly on that recruiting expense, because the cost of a mishire, even at a junior level, is one and a half times base salary, and so that's a risk mitigation as much as it is a cost savings.
I do think more corporations should do this. I don't think enough corporations do it, by far.
(28:04):
I think that there's too much short-termism, which is driven by quarterly earnings pressures, and so corporations say, hey, I can look smart by cutting all these jobs because I automated through AI, and it's someone else's problem to figure out what to do with it. Because the tenure of a CEO of a Fortune 500 company has been getting shorter and shorter and shorter over the last 30 years, and so
(28:27):
if your average tenure is three and a half to four years, you just need to notch up a few wins on the quarterly results to get a huge bonus in stock, and then you move on, because, you know, someone else comes in and becomes CEO.
(28:48):
This is where Michael Dell took Dell private, because he said, I want to do some massive digital transformation and I can't do it with quarterly earnings pressure. And you see how Elon Musk is running a private space company that's worth over $100 billion, and he hasn't taken it public because he's like, why should I?
(29:10):
This is a long-term play. I own a huge chunk of it, he says, and I don't want to have the quarterly conversations with investor analysts that are going to keep us from succeeding in a deep tech mission.
Speaker 2 (29:22):
I want to talk about one example I've looked at in terms of AI replacing a job. So a few years ago, Google came out with Google Duplex. It was a great demonstration of a woman basically talking to a Google Assistant, wanting to book a restaurant. The AI said, here are a couple of options. The AI then rang the restaurant, said a couple of mm-hmms, negotiated with the person at the restaurant and booked the time slot.
(29:43):
And my argument is: this phone, this piece of plastic that I own, knows everything about me. So are we not far away from an AI assistant, a virtual assistant, a digital agent? But going one step further, they then do digital negotiation, digital deals with other companies. So, for example, my health provider, my telco provider: my digital agent talks to their digital agent.
(30:03):
And one example I gave is, my health insurance is due next month, but my digital agent knows about that. It goes out and does digital deals. One such deal is, if I give a one-time hash of my fitness information, I get a deeper discount, because now I'm healthy. I then put up a slide that says we'll be having to write ads for robots. Now, I've done this on stage in front of a bunch of marketers, and they start throwing the stress balls at me, saying this will never happen and our jobs aren't at risk.
(30:25):
But is that real? The data is already there, the AI is probably there as well, and Google Duplex shows that last mile. Is that a role that could actually be replaced? A digital agent to run my life, to run your life?
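The "one-time hash of my fitness information" in this example can be sketched as a hash commitment: share a digest now, and reveal the underlying data plus a single-use nonce only to a verifier later. A hypothetical illustration using Python's standard library, not a production privacy protocol:

```python
import hashlib
import json
import secrets

# Hypothetical fitness summary the agent wants to attest to.
fitness = {"avg_daily_steps": 11250, "resting_hr": 52, "month": "2021-11"}

# Fresh single-use nonce: the same data hashed next month looks different,
# so the digest can't be reused or matched against a precomputed table.
nonce = secrets.token_hex(16)

def commit(data, nonce):
    payload = json.dumps(data, sort_keys=True) + nonce
    return hashlib.sha256(payload.encode()).hexdigest()

digest = commit(fitness, nonce)
print(digest)  # shared up front in the negotiation

# Later, the data and nonce are revealed to a verifier, who recomputes:
assert commit(fitness, nonce) == digest
```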
Speaker 3 (30:40):
I think that we're heading in that direction. Inevitably, people aren't quite ready to just hand everything over, but we're getting closer. So I know a lot of people, increasingly, who have digital scheduling assistants that act like an EA, and so you can actually email and have a conversation with this bot about getting onto someone's diary,
(31:04):
and so that's already happening. But I think where we are is in a transition phase. So, for example, Google and Microsoft have started with these type-ahead recommendations. So that's an AI: while you're composing, it finishes the sentence for you, but it's in gray, and if you like what it suggests, you hit tab and it speeds up your composition. It's based on billions of sentences that have been loaded
(31:29):
into their AI to train it, and I think it's getting pretty good. It's actually not bad. Certainly we see digital transcribers. I composed about a third of my book by talking into my phone and then cleaning it up later, and, you know, the AI was pretty good. I mean, it wasn't perfect, but it's getting better and better, and certainly a lot better than when we first had the
(31:50):
speech-to-text systems come out in the eighties and nineties. So I think that we are going to have a world... I mean, have you ever wished that there could be two of you, because there's just so much to do? And you're like, oh gosh, I wish. Well, you know, we could eventually have a digital twin that would act the way that we
(32:11):
want it to, that sort of represents us in the world and frees us up from drudgery. I do see that as not an inevitability but a likelihood within the next 10 years, maybe even five.
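The type-ahead feature described above comes down to predicting the next word from what has been typed so far, learned from a corpus. A toy bigram version in plain Python; the real systems use neural language models trained on billions of sentences:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real systems train on billions of sentences.
corpus = [
    "thank you for your email",
    "thank you for your time",
    "thank you for the update",
    "looking forward to your reply",
]

# Count which word follows which.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def suggest(text):
    """Return the most likely next word, grey-text autocomplete style."""
    candidates = bigrams.get(text.split()[-1])
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("thank you for your"))  # -> 'email'
```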
Speaker 2 (32:25):
Some people would say it'd be scary to have two Andrew Grills, but that's for a whole other discussion. Now, before we go, because we're running out of time: there's a chapter, chapter five, Reskilling and Developing Cognitive Flexibility, and there's some great ideas in there. You talk about basically remaking your brain, and there are five ways you do that. Can you explain what that means and how you go about keeping yourself up to date and learning new things?
Speaker 3 (32:44):
It goes to the kind of fundamentals of how we prepare for the AI future, right? So in order for us to be ready to reskill, to upskill, to stay ahead of technology change and disruption, and frankly, it's not just disruption from AI, it's disruption from everything, we
(33:04):
need to train our brains. Your brain is plastic, right? It is malleable. You can re-skill your brain to absorb knowledge, right? And so there are certain techniques that you can employ that will let you acquire new knowledge faster and use it more effectively.
(33:25):
And so we embed some of these kinds of practices, for example at Esme Learning, which is a cognitive AI learning platform, in how we work with Oxford and Cambridge and MIT and Imperial to create online experiences. So things like practice, right. You're much more effective at learning something if you
(33:49):
attempt to apply the lesson immediately, if you try things out. This is one of my biggest problems with Masterclass as a learning platform. I think Masterclass is fantastic for intellectual curiosity, it's a great entertainment platform, but it's not learning. Okay, you will forget 50% of that masterclass video you
(34:09):
watched within one hour. This is called the Ebbinghaus forgetting curve, right; it will just be gone out of your head. But if you actually tried using some of that stuff right away, then it would cement in your memory better.
Reflection is another one. So it's not enough to take knowledge in; you have to
(34:33):
actually cogitate on it. You have to think about it. So it's metacognition: you have to think about thought. What is this thing that I've learned, how does it fit in with my mental model of the universe, and what does it mean? That active reflection, again, helps you remember things better. Gradual change is another one. Right, this is not something you can just cram for.
(34:53):
Okay, people like doing intensive short courses over a weekend because they think it's efficient: oh, I'm busy, I just want to do it in a 15-hour sprint over two days and then I'll learn something. That's the worst way to learn. Okay, if you do interval training, it's like lifting weights at the gym. You can't build muscle with one eight-hour intensive session.
(35:15):
But if you do one hour a week over eight weeks, or one hour a day over eight days, you'll make more progress than if you just try and do it all at once. Peer learning is another one. Okay, we're social animals. People are social animals. We learn better from each other than if we, you know, just have like a sage on a stage kind of
(35:39):
blathering at us, like I'm doing now. It's much better if we sort of talk about things and discuss. The Oxbridge tutorial is actually one of the most effective ways to learn, and so one of the problems that Esme Learning solves is how do we put that online?
And finally, creative exploration. And so, you know, the human brain gets joy from exploring
(36:02):
and creating. We get little bursts of dopamine and serotonin when we experience something new and discover something new. We do that a lot as children. Children are among the most effective, creative people on the planet. And then the education system proceeds to spend a decade and a half training that creativity out of us: sit in rows,
(36:25):
don't speak up, raise your hand before you talk, whatever. It does a lot of things that regiment our minds to fit in with society, and that is actually at the expense of creativity. So, you know, there's a famous experiment, the great marshmallow experiment, and basically you give people a
(36:46):
couple of marshmallows, a few sticks of uncooked spaghetti, and I think there's some string involved; basically it's a standardized little kit, and you have 18 minutes to build the tallest tower that you can. And the least effective people at the marshmallow experiment are MBA students, because they spend almost all
(37:10):
the time negotiating status with each other and not doing anything. Among the most effective are five-year-olds, because they just jump in and they start playing, and they grab things from each other and they start putting things together. That creative exploration produces among the highest towers in that experiment and, more broadly,
(37:31):
that creativity is how human ingenuity can survive AI disruption.
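The Ebbinghaus forgetting curve mentioned earlier is commonly modelled as exponential decay; taking the 50%-retention-after-one-hour figure from the interview as an assumed decay rate:

```latex
R(t) = e^{-t/s}, \qquad R(1\,\mathrm{h}) = e^{-1/s} = 0.5
\;\Longrightarrow\; s = \frac{1}{\ln 2} \approx 1.44 \text{ hours}
```

On this model, the spaced "interval training" Shrier recommends works by revisiting material and resetting the curve before retention decays too far.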
Speaker 2 (37:35):
Yeah, I've been in a number of corporates and we've done those exercises, even the ones at IBM, and you just sit back and watch the behavior. It's actually more interesting than the end result. We're almost out of time, so I run all of my guests through a quickfire round where we learn a lot more about you in a couple of minutes. So let's do that now. iPhone or Android? iPhone. PC or Mac? Mac. The app you use most on your phone?
Speaker 3 (37:56):
Let's say Signal.
Speaker 2 (37:58):
What are you reading at the moment?
Speaker 3 (37:58):
I'm trying to understand the human brain, so I am reading a book on kind of how we think about thought.
Speaker 2:
And the final quickfire question: how do you want to be remembered?
Speaker 3:
He made the world a better place for billions of people.
Speaker 2 (38:12):
What three actionable things should an audience do today when it comes to augmenting their careers?
Speaker 3 (38:18):
Get smarter about AI. Explore, play and create: try and experiment with AI, and there are no-code systems, so you don't have to know how to program. And enlist a friend in the journey.
Speaker 2 (38:29):
Great advice. So how can people find out more about you and your work?
Speaker 3 (38:36):
Well, if you go to davidshrier.com, that has a lot of information on my books and thought leadership. Also, Imperial College, the Centre for Digital Transformation: we're going to be doing a lot of cool things out of that, and it obviously has a website off of imperial.ac.uk. And finally, we have an amazing set of classes from some of the world's greatest thought leaders at esmelearning.com.
Speaker 2 (38:58):
I'm going to check them out as the very next thing to do. David, a great discussion today. Thank you so much for your time.
Speaker 3 (39:03):
Thanks, Andrew. This has been fun.
Speaker 1 (39:07):
Thank you for listening to Digitally Curious. You can find all of our previous shows at digitallycurious.ai. Andrew's new book, Digitally Curious, is available at digitallycurious.ai. You can find out more about Andrew and how he helps corporates become more digitally curious with keynote speeches and C-suite workshops at digitallycurious.ai.
(39:30):
Until next time, we invite you to stay digitally curious.