Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_02 (00:00):
Welcome to Digitally Curious, a podcast to help you navigate the future of AI and beyond.
Your host is world-renowned futurist and author of Digitally Curious, Andrew Grill.
SPEAKER_00 (00:15):
Welcome to a very special episode of Digitally Curious.
Today we're joined by the team from the Somewhere on Earth podcast for a pod swap.
I'll let host Gareth Mitchell explain more.
SPEAKER_01 (00:26):
Hello folks, it's Gareth.
Welcome along to Somewhere on Earth.
It is Tuesday, the 14th of October, 2025, and we're in London.
And guess what?
We have a special co-production for you.
SPEAKER_00 (00:37):
I'm Andrew Grill, and I'm from Digitally Curious.
Stay tuned.
SPEAKER_01 (00:47):
There you go.
So you're going to hear more from Andrew as we go along, folks.
Also with us is Ghislaine Boddington.
You all know who Ghislaine is by now.
So Ghislaine, um, usually I'd say hello, how are you?
But how about a much more searching question?
Are you digitally curious?
SPEAKER_03 (01:02):
Oh yes, I'm
digitally curious and I have
been for many decades.
And my fascination, of course, is um this balance, the constant questions in my head every day about the balance between what the digital can do to complement and enhance us humans and our lives.
So there's my curiosity, all the time going on.
SPEAKER_01 (01:25):
So coming up today.
Welcome to the Somewhere on Earth Podcast.
And welcome to Digitally Curious.
Yes, folks, two fabulous tech-centred podcasts have joined forces for this special edition, in what we in the trade call a pod swap.
At Somewhere on Earth, our take on tech is to dig out, you know,
(01:49):
those unloved tech stories from around the world, and we talk about how they affect people's lives.
How about you, Andrew?
SPEAKER_00 (01:55):
Well, on Digitally Curious, we provide anyone who is curious about technology with some actionable advice for the near-term future.
SPEAKER_01 (02:02):
So in this edition,
AI really it's not the future
anymore, is it?
It's here.
So how is it changing the way that we all think about the technology, you know, in a really practical, down-to-earth way?
You know, big picture, but also down-to-earth.
So whether you're a big business or an everyday curious amateur like me, what does it mean?
SPEAKER_00 (02:22):
We're also going to
examine the impact of AI on
society and how we can live with it rather than have it take over
our lives.
SPEAKER_01 (02:28):
Yeah, so that's all right here on the Somewhere on Earth and Digitally Curious podcasts.
Right then, so let's get to know each other a little better.
Now at the top, we've already given a kind of one-sentence outline of our podcast.
So, Andrew, um, our Somewhere on Earth listeners may not have
(02:51):
heard Digitally Curious so far.
I'm sure some of them have, because if they download digital podcasts, they've probably found yours.
But for those who haven't, what's your origin story and why do you think the world needs your podcast?
SPEAKER_00 (03:02):
Really good question, Gareth, and thanks for having me on the pod swap.
I've never been on a pod swap, so this is the first for me.
But Digitally Curious started because I noticed a massive gap in boardrooms and businesses between the excitement around these emerging technologies and actually knowing what to do with them.
I spent decades working with Fortune 500 companies, and there's a pattern where everyone knows about AI, quantum.
(03:24):
They know it's important, but they're paralyzed by not knowing where to start or, importantly, how to make it practical.
So my podcast bridges that gap.
It's actionable futurism, not just fascinating speculation, but real-world guidance on how to prepare for and implement what's coming next.
So, Gareth, for my listeners' benefit, how about Somewhere on Earth?
How did you get started?
SPEAKER_01 (03:45):
Yeah, sure.
So hello, Digitally Curious listeners.
Thanks for having us, by the way.
So, yeah, our kind of origin story is that uh a while back we were a technology program on the BBC World Service called Digital Planet, and we ran for many years, and then that show ended.
And then because we loved doing it and we loved our listeners, we took it outside the BBC, and here we are, uh, Somewhere on
(04:06):
Earth.
And uh I suppose the clue in many ways is in the name.
You know, we are interested in those sort of global technology stories like anybody else.
Of course, we're going to look at what's going on with the usual suspects in Silicon Valley and um, you know, all the biggest of tech headlines.
But we also like to do those stories that perhaps uh many other tech podcasts might um might gloss over, so they could
(04:28):
be stories to do with, oh, I like anything.
But you know, digital literacy, for instance, or any kind of literacy, or technology helping in vaccine rollouts in the global south.
Um so yeah, we like to think about technology and think about it in a global context.
And I guess actually, one thing we definitely have in common on our podcasts, Andrew, is, you know, you talk there
(04:50):
about yes, you do some bigger picture stuff, but you really want to get into how this stuff affects people's lives.
And that's very much our tagline as well.
We can talk about it, but how is this stuff affecting our lives?
So that's uh who we are as well.
And um, Ghislaine's been with us for years and years.
Have I said it all, Ghislaine, or did I miss anything?
Is there anything you want to tell, tell the podosphere about
(05:12):
who we are, what we do, and why we do it?
SPEAKER_03 (05:15):
No, I think that's
great.
I think what has been really good coming into Somewhere on Earth from the BBC podcast is of course we've worked together as a team a lot.
We really enjoy working together, and we've brought a whole load of network contacts with us who are continuing to move forward with us, and listeners too.
So it's a very positive community.
SPEAKER_01 (05:34):
Yeah, absolutely.
And it's quite good to, I suppose, think about what differentiates our various approaches to talking about technology.
And I've listened to a load of your editions now, Andrew, on Digitally Curious, and you have quite a business focus, don't you?
SPEAKER_00 (05:48):
Yeah, absolutely.
Well, I've been in business for a long time, but uh I'm also a public speaker.
The wonderful serendipity of having us all on the show today: Ghislaine and I keep bumping into each other at these AI and technology events, and uh it was just fantastic to have that sort of synergy there.
But I've been asking the question of digital curiosity for some time.
I'm a public speaker, not just a podcaster.
(06:09):
And what I do, I bound onto stage, and rather than saying it's great to be here in wherever, I do a cold open. I say, Are you digitally curious?
And the audience looks at me and says, What's going on here?
And then I play a game.
So I'm very keen to understand how digitally curious my audience is.
But when it comes to the podcast and my approach, I think what sets my approach apart is that I come from the technology side, uh,
(06:31):
with that business implementation rather than just the innovation side.
I'm not just talking about what's possible, but what's practical.
How do we actually make this work?
I've been in those corporate meetings where someone says we need to, quote, do AI.
Everyone nods, but no one actually knows what it means for their business and where to start.
So that's where I start, and I focus on translating future possibilities
(06:52):
into action plans for this week and next.
How about you, Ghislaine, then?
SPEAKER_03 (06:57):
Yeah, I think I'm working more on the multiple little niche innovations and how, across the last decades, they've kind of clustered and become um collective in their impact over time.
So my background is actually arts and humanities, and particularly dance and performing arts.
And um, I spent a couple of decades as a creative studio
(07:18):
director doing participatory public projects involving telepresence and sensors and early digital interactions of all types.
It's all now known as immersive experiences in today's world.
So I was in the early pioneer waves of the immersive experience sector.
And I guess my knowledge base, coming from dance as well, is
(07:38):
really much about how humans behave with the technological tools.
So today I am mainly working as a researcher, thought leader, do a lot of speaking and consulting at conferences and gatherings around the world.
What I'm really doing is putting out there and exploring, creating debates in a wide variety of sectors about the future of us as humans and how we will work positively, and what
(08:03):
is possibly negative for us as humans about today's technologies.
SPEAKER_01 (08:08):
Nicely put, folks.
Um, so I suppose just widen this out a little bit then, you know, the global perspectives that uh that we both have uh on our podcasts and that we bring to our audiences.
And um, you know, Andrew, if we could come back to you, often speaking to enterprise audiences, so how do you feel that all these different insights, you know, some global
(08:29):
insights, how do they complement each other and tell a story for the digitally curious?
SPEAKER_00 (08:33):
What I've found is that enterprise adoption often follows the global trends, and AI is certainly top of that at the moment, but there's a lag and a lot more due diligence.
When I'm working with multinationals, I'm constantly drawing on examples from different markets: what's working in Singapore with, say, smart city initiatives, how Nordic countries are handling AI ethics and those sorts of things, or how Japanese companies are integrating AI into
(08:54):
manufacturing.
The global perspective prevents you from getting trapped in your own market's assumptions, whilst enterprise insights show what actually scales, what doesn't, and who's willing to pay for it.
SPEAKER_01 (09:04):
Yeah, and I can
imagine there's some fascinating sort of cultural aspects there as well when you think about different business practices between, for instance, Singapore or Japan or Europe and what have you.
Um, Ghislaine, how about you?
Where does the global perspective come in?
SPEAKER_03 (09:17):
Yes, I think for me, the fascinating side of that, um, of the global side, is actually seeing the effect that um different cultures and religions, different politics in different countries, have on how our digital technologies are coming into use today.
I mean, I'm very interested also in the difference between cities and rural environments and what the access points are.
(09:38):
And of course, a lot of that ends up being about the ethics, for example, how our biometrics are being used in digital identity around the world, um, which is a really topical focus, because I'm working with the body, that's a big part of my work.
Um, how people use technologies to support their lives, or how technologies are being used to negate or misuse people, yes.
(09:59):
So I think we need to constantly remind ourselves that so many people still don't even have digital access at all.
I think it's pretty much three billion people, it's not that far under half the world, who don't actually have the infrastructure or data access or skills there.
Um, consequently, I'm fascinated, with Somewhere on Earth and
(10:19):
the other work I do, by being able to delve into those different global perspectives from different countries where we are at different points in time.
SPEAKER_01 (10:27):
All right, so I
think that sets things up
beautifully.
But just in a final few words then, give us the elevator pitch here, Andrew.
Um, why should our audience now, um, not switch to but complement their Somewhere on Earth listening with Digitally Curious?
Uh, why would they do that?
SPEAKER_00 (10:45):
I think they actually fit well together.
If you're listening to Somewhere on Earth, you should be listening to Digitally Curious, and vice versa.
Uh, but I suppose if I was to uh give you the pitch: in fact, Digitally Curious isn't just a podcast.
Uh, it spawned a book.
So if you like what you've heard, you can buy the book.
Um, but if you're curious about technology and frustrated by either overly technical jargon, who isn't, or pie-in-the-sky futurism, Digitally Curious is for you.
(11:07):
If you're running a business, working in one, or just trying to understand how these changes affect your life, I and my guests break down complex tech trends into practical insights.
And my guests range from AI researchers to retail executives to academics.
They're all focused on one question: how can we turn digital possibilities into real-world benefits?
SPEAKER_01 (11:26):
And I think similar um benefits uh for your listeners, if they'd like to, Andrew, to coming to Somewhere on Earth.
You know, we like to be grounded, uh, we like you to go away with something a little bit tangible that um, you know, you can reflect on.
And um we don't have a book, but we like to think we have a splendid listener community, and we'd like to include the voice
(11:47):
of our audience whenever we can as well.
So you'll be with friends um if you join us here on Somewhere on Earth.
Okie dokie, now um, I'm very interested in, like, the tone of
(12:08):
AI discussions.
And it's partly because I have a background in science communication, so I think a lot about the way that messages are framed, about how discussions are framed and uh given a context.
So we talk so much about AI, don't we?
But I'm gonna put it to you, Andrew, and Ghislaine will have a view as well, that the tone of the discussion around AI, I
(12:31):
think it has shifted even in the last year.
And I wonder if that's your experience as well, Andrew.
SPEAKER_00 (12:38):
The thing is, AI's not new.
Uh, only two weeks ago was the 75th anniversary of when Alan Turing wrote his paper, Computing Machinery and Intelligence.
And the first line of that research paper asked the question, can machines think?
So we've been asking that question for 75 years.
Only nearly three years ago, when ChatGPT bounded onto the stage,
(13:01):
was everyone then able to remove the friction and start playing with an AI tool.
So it's fascinating.
I'm on LinkedIn a lot, and you see people that were social media experts, then they were cryptocurrency experts, then metaverse experts, and now they're AI experts because that's the flavor of the month.
And they all tell us that we're all going to be out of jobs and everything's gonna change and the world will change.
(13:22):
The world will change, but not as fast as some of these air-quotes experts expect.
I'm in the trenches every day.
I've done, probably since ChatGPT came out, 200 talks around the world.
I'm speaking to real-world companies that are saying, we can't change overnight.
We have policies and processes and shareholders, and it's gonna be very hard for us to change very quickly.
(13:42):
And then it's amplified around the world.
So what I'm finding is you're reading about AI changing the world, but actually it's gonna change quite slowly.
A year ago, every conversation started with, AI will change everything.
Now it's, AI is changing everything, and that's specifically how we've moved from theory to implementation.
But again, if you read a LinkedIn feed or read the news, a lot of surveys are saying AI projects aren't working.
(14:04):
And here's the secret, a dirty little secret.
Why they're not working is basically people have rushed into them, they haven't thought about reimagining the business processes that are currently broken, and they're trying to fit this very smart technology on top of a broken human process.
And so that's what's not happening.
And everyone wants to know ROI, and the other thing I talk about when you talk about return on investment is that the way we measure
(14:27):
return on investment today with technology investments is actually very different to how we'll measure it in the future.
So it's very confusing.
And I think what your podcast and my podcast are trying to do is peel back those onion layers and make sense of it and say, what do I really need to understand and what's important now?
SPEAKER_01 (14:43):
Yeah, and of course,
you know, there's the monetary capital, the monetary return, but then the kind of social value as well, you know.
So um, Ghislaine, because you're interested in education amongst many things, aren't you?
SPEAKER_03 (14:56):
Yes, I am.
Um, education access. I work in universities a lot.
I'm at the University of Greenwich as a senior researcher and uh do a lot of my work out from that.
It's part-time, so I can take it out into public engagement.
Um, but I've really been interested in the shifts and the debates within the university sector, and in education as a whole, about the shifts we're going
(15:18):
through in how we teach and how we learn.
Yeah.
And I think it's um it's going to be a massive shift.
I mean, really incredible shift.
Um, and of course, some educationalists are battling with that, but there's a lot of very good debate going on about it as well through the university systems, and different
(15:39):
um academics finding different ways to use it, students adding into that and complementing it.
I really agree with Andrew, though, it does take much longer than we all think.
Yeah.
Um, I was very early working with telepresence, um, with remote stage connectivity, from the early to mid-90s, linking up dance stages, actually, with full-body Zoom, really.
Um, and of course, you know, the three or four hundred of
(16:01):
us that were working on that in the 90s across the world, we all thought that it was going to be coming really soon.
Everybody would be doing this kind of communication, and definitely by the year 2000.
But in fact, as we know, it took um a lockdown and COVID and people having to use it that actually, you know, pushed that
(16:25):
out into mass usage.
And we can see the same with um many other things, like avatar creation.
Andrew's right, until it actually hits the shelves and it's there for mass use, yeah, it's very hard to get anything moving that fast.
We see technology as this very fast-moving thing, but I think if you're in it, like, every day like we are, it's not.
It's actually quite slow to get itself going.
Um, even if every day there is some kind of mythical or
(16:49):
spectacular new headline around another shift in AI or whatever.
So, very right about what comes out on LinkedIn.
I think people are grasping around for new stuff actually.
And um, finally we're seeing a lot more about trust and transparency and um the words that link into governance, but which are coming from a more human base.
(17:11):
I was surprised that didn't get there earlier, but we're getting there now on some of those debates.
SPEAKER_01 (17:16):
So, Andrew, what
surprised you about how people
are actually implementing AI day to day?
SPEAKER_00 (17:22):
Well, they've all played with it.
Four years ago, if I was doing an AI talk, it'd be very hard for me to demonstrate how it actually works.
But now that we've got ChatGPT and other generative AI tools, if you can send a text message, you can engage an AI platform.
So everyone's at least played with it.
But often people haven't found it very useful.
They haven't had what I call the aha moment.
I'll give you a really simple example.
(17:42):
Uh, a few months ago, I was up in uh Newcastle talking to a family-run business that provides industrial tools and uh things like that.
Um, and I showed them something quite simple.
Before I met with them, I said, Can you send me some more information about your company, what your challenges are?
They said, We've just done a SWOT analysis on the whole company.
17 departments.
We'll send you an Excel worksheet.
(18:03):
It had 17,000 cells and about 6,000 rows.
It was an incredibly large document.
I put it into an AI tool and I said, What are the opportunities for AI for this company, given this SWOT analysis?
And in two minutes, it basically spat out, department by department, what they can do.
In the meeting, I said, Can I ask who was responsible for doing the SWOT analysis?
And Chris put his hand up.
I said, How long did it take you to analyze this?
(18:24):
He said, It took me 10 days.
So I shocked him.
I said, Well, while I was ironing my shirt this morning to come here, I put this into the AI tool, and it took two minutes.
And then to test it, we spent the next two hours going through, department by department, what the AI had suggested we could do, and they went, Oh, hadn't thought about that, hadn't thought about that, hadn't thought about that.
So it actually was telling them the right things.
Now, it even surprised me that from a very small prompt to an Excel
(18:47):
spreadsheet, that was an aha moment.
And the CEO, who I was told before I got there was a complete skeptic, I had him at the coffee break, he had nine or ten pages of notes, and he said, We've got to do this.
You need to have that aha moment where you go, I didn't know it could do that.
And so what I spend my life doing, on the podcast and in front of corporates, is bringing them to becoming digitally
(19:09):
curious, to trying that at that senior level, and then the sparks fly.
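A minimal sketch of how a similar spreadsheet-to-insights step could be scripted, assuming the OpenAI Python client, pandas, and a hypothetical swot.xlsx file. The episode does not say which tool, model, or exact prompt Andrew used, so treat this as an illustration of the idea rather than his actual workflow.

```python
# Illustrative sketch only: load a SWOT spreadsheet and ask a chat model for
# AI opportunities per department, mirroring the prompt described in the episode.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical file: one row per SWOT item, e.g. columns Department / Category / Item.
swot = pd.read_excel("swot.xlsx")
swot_text = swot.to_csv(index=False)  # flatten the sheet to plain text for the prompt
# Note: a sheet with thousands of rows may exceed the model's context window and need chunking.

prompt = (
    "Here is a company-wide SWOT analysis, one row per item:\n\n"
    f"{swot_text}\n\n"
    "What are the opportunities for AI for this company, department by department, "
    "given this SWOT analysis? Give a one-line rationale for each suggestion."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model; the episode does not name one
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```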
SPEAKER_01 (19:15):
The first time um, yeah, ChatGPT did a spreadsheet for me, I think that was an aha moment for me.
Like, wow, it does spreadsheets as well, my goodness.
Um, yeah, Ghislaine.
SPEAKER_03 (19:25):
Yes, no, and I think that for me links into um the debate about kind of a de-robotization of humans, which we've been caught in for the last 20, 30 years, where, you know, a lot of people out there ended up, just gradually, but ended up with jobs where they were literally just robots putting data into cells, like you're saying, and
(19:46):
spending 10 days analysing data, yeah.
And that's not the good use of the human brain at all.
It's not good use of someone's life, yeah.
And um, so I think it's great the day-to-day data um is kind of being pushed out of the way for us and dealt with by this special tool that can analyse and do the analytics so fast for
(20:08):
us.
I think also it's interesting when we take it to an individual basis, getting help and advice and guidance.
What we are seeing coming out from the facts is that ChatGPT and various other chat tools in AI are being used massively by individuals for mental health, for relationship health, for
(20:29):
just advice before they go into a difficult scenario, whether it's a home one or a work one, yeah.
And um, I read something about two days ago that said that suddenly they've realized in the travel industry that most young people are doing all their travel searches on these chat uh interfaces, because they can do special personalised itinerary
(20:51):
creations much faster than they're going to get through a travel agency.
So we're seeing a very different kind of uses being taken up at a very fast rate, where people go, Oh, I know, I'll just go there and get that sorted.
I just did a travel packing list because I'm just off to Brazil next week for a set of talks and conference things.
And because I haven't been very well, I just did my travel this way and
(21:13):
I'm like, oh, I'm doing this again.
This is so much easier.
I put in the itinerary, the weather, where I was going, what do I need?
You know, and it's just come out with the whole thing.
SPEAKER_01 (21:22):
Yeah, but buyer beware, folks: the technology doesn't actually pack your suitcase for you.
SPEAKER_03 (21:26):
No, I wish it did.
SPEAKER_01 (21:28):
Well, give it time,
give it time.
Um, but I guess, you know, in all seriousness, what you're talking about there, Ghislaine, is a form of digital literacy that is emerging.
And Andrew's just given that example about going into businesses and saying, hey, you know, you could just crank out a spreadsheet analysis in two minutes, and that's going to really help your business.
So I think all of us, and businesses, are at different sort
(21:50):
of stages, I suppose, of digital, or I should say AI, literacy.
But Andrew, that surely places some very big responsibilities, as this AI literacy emerges, um, big responsibilities on communicators and educators, people like you, I guess.
SPEAKER_00 (22:07):
Absolutely.
And so what I'm seeing is that the leaders in an organization are actually saying, we're going to bring everyone up to the same speed, the same level of AI literacy, because all these things get bandied around.
So the leading companies I'm seeing, they're putting everyone through a basic level of training, and they're putting the executives through day-long, sometimes multi-day, training to get them to that level.
It's not just about prompt engineering, it's about bringing
(22:30):
them to those wow moments when they go, I didn't know we could do that.
This is how we're going to actually automate a really boring part of the process we're doing.
Um, and that comes back to education as well.
What alarms me a bit, I'm hearing time after time that school and university students are seeing AI as cheating, because their teachers are saying, don't use it or you're
(22:50):
cheating.
I think they're missing out, because then they come into the workforce and they say, Oh, we haven't used AI much because we've been told it's cheating.
Um, I use a spell checker, that's not cheating.
I draft um second drafts of my emails and those sorts of things with AI, that's not cheating.
I'm using all the resources that are available to me.
So what worries me is some of us are uh itching for a level of AI
(23:12):
literacy and reading all the things that we can.
Others are holding back because they think they're not ready for it yet, and that concerns me.
SPEAKER_01 (23:19):
What about you,
Ghislaine?
I'm thinking about the generational side of things.
The usual trope is in force this time: that it's the young people who get it and the older people who are struggling.
Or no, not necessarily, says Ghislaine.
SPEAKER_03 (23:32):
Not necessarily, because, you know, however much we say it, it is slower than we want it to be, or feel it actually is, changing day to day.
And um, if you uh follow the whole area of prompt engineering, for example, we know that what we're doing today will look actually quite naive and quite inefficient.
Um, and that will change rapidly in the next few years, with
(23:54):
different tools coming out that need different ways of prompting, to actually be more conversational with you, alongside you, that companion kind of side.
And I think definitely in my university in Greenwich, and many now, not all of them, Andrew's right, and schools are probably struggling a bit more, but we have a very clear um uh
(24:16):
AI use guidance now.
And students, staff, and researchers are allowed to use AI under certain rules, some simple rules: you basically have to do documentation to prove what you used it for, and you have to be able to produce your prompts and the printouts, and you can use it as a resource and source alongside it.
(24:37):
And I think that we, particularly in design and the humanities area, have also been quite ahead in using AI anyway, in music, in design, in animation, etc.
I mean, for me, I've been working with generative AI since the mid-90s, yeah, in terms of audiovisual work, yeah.
In various, you know, we called it advanced machine learning,
(25:00):
yeah, but basically it's generative AI now, yeah.
Um, so I think that, say, for any young animator, it's at the base of all of the six or seven softwares that they are learning, and they have to have those skills.
Andrew's absolutely right, to go out into the workplace and say, yes, I do have these skills behind me.
The same for the young marketing lot and the young creative advertisers.
(25:22):
So we're seeing it shift.
It is redefining how we learn and how we're creating and how we're teaching and exchanging, but that's going to continue to shift.
Um, and I think there's some core skills that we will always retain as humans alongside AI, which include reasoning, critical reflection, um, the wisdom that humans can add to
(25:45):
it.
Yeah, it's about togetherness, really.
SPEAKER_01 (25:48):
Yeah, exactly.
It's not like AI substituting us, and that's always been your line, Ghislaine.
Every time, you know, right back years ago when we were talking about this, you always said that, you know, it's a collaborative thing between the human and the machine.
Um, and I really love what you were saying there about um the guidelines at your university.
We have similar ones uh at the university where I work, about
(26:08):
transparency, really, which is what we're talking about.
You know, show us your prompts, show us the outputs, show us how you and the um AI, the chatbot, got to this particular um outcome.
And I think inculcating that accountability and transparency at the educational level is just one of many good practices that
(26:29):
um learners can take with them out into the world, into which, of course, they'll be going into careers where they're going to be using these tools, so they may as well have been using them while they were at college.
Um, Ghislaine, while you have the floor, I want to go now more towards the body as well.
You know, this is very much your work.
The body in the digital, it's a lovely phrase.
(26:50):
Tell me about how that whole idea, the body in the digital, applies to um AI-driven creativity and interaction.
SPEAKER_03 (26:59):
Well, it's been an interesting um journey for me, because coming from a dance background and the body side, of course, my first concern is the living body, and that's what I love, us as living beings and presence and, you know, our heartbeats, our sweating, our emotions, everything.
That's the important bit, yeah, for me, of life, you know, life
(27:20):
together and meeting and connecting with people.
So I've been working really since the early 90s looking at how the digital technologies that we were building were going to interface with that in a virtual-physical blending way.
That's been my core work.
And um, I think that's really looking at how we build trust
(27:40):
and intimacy into that connectivity, yeah.
So it's been complicated, because in the 90s, I think people thought I was a bit mad, yeah.
Maverick, very maverick, and it was quite ignored in many sectors.
But in fact, it's come right through to the forefront.
And I'm glad I stuck to my beliefs, because of course it's right there with health tech, with the whole area of um
(28:03):
telepresence, the way that we build up this connectivity across the world.
So, um, I'm mainly looking at that, how we represent ourselves as digital humans out there, and at the moment working on the speculative um research area, which people love talking about, which is: what if you had your own digital human twin from birth till death, and what would that mean?
(28:27):
And talking a lot with very different sectors, because I work across a whole range of sectors, about how would you work with a digital human twin throughout your life that was yours, that was your large language model, which was your co-creator, and how that would be personalized.
I mean, the most obvious example is health, because we all
(28:47):
understand that a bit from the fitness sector, how we actually work and start to see our data telling us predictive things, telling us, oh, well, you could try and prevent this if you start now, you know, by cutting down on the nice things like chocolate and wine.
But anyway, putting that aside, it's about how we'll learn to work with, and again, going back to that word, co-creator
(29:09):
companion, with our digital human equivalents.
SPEAKER_01 (29:13):
Yeah.
I certainly don't intend to give up chocolate or wine anytime soon.
Um, Andrew, you look at this trust gap, don't you?
This phrase, the trust gap between humans and machines.
So, what kind of attitudes do you see?
And I'm interested in how they vary between, for instance, the corporates and um consumers.
SPEAKER_00 (29:32):
Well, one thing we're gonna hear a lot more about in 2025 and beyond is the notion of agentic AI, or AI agents.
And this is where they can autonomously perform things.
So Ghislaine was talking about how she's doing a packing list.
While we can't have the suitcase packed yet, we could be at a point where the travel agent, the AI travel agent, basically books the flight for you and everything else.
(29:53):
That's gonna require a lot of trust.
So right now I trust an AI agent to book a calendar event, I trust an AI agent to start up a podcast.
Do I trust it with my bank account just yet, to actually pay for things? Not just yet.
And so we need to have a high level of trust between people and machines.
And I think that's where we're going to see people really step
(30:15):
back and say, I'm not quite ready for that yet.
It can do menial tasks, but if you're trusting it with money or a job, those sorts of things, I think consumers are a little bit more open to that.
When it comes to corporates, though, GDPR and those sorts of things, if you do the wrong thing, you can be fined a lot of money.
So I think at the moment people are holding back because they don't trust the machines, because they're essentially black boxes.
(30:35):
They don't know what input will actually uh generate which output.
And that's, I think, a big issue at the moment.
SPEAKER_01 (30:41):
Yeah.
Which is probably quite a good thing, isn't it, Andrew?
You know, I mean, we should probably be reassured that corporates aren't just saying, oh well, there's an app for that, we'll just throw it at the machine and everything will be okay.
You know, for a certain amount of things.
SPEAKER_00 (30:52):
I think the legal
and compliance departments have
a lot to say about that.
SPEAKER_01 (30:55):
Yeah, I can imagine.
Um, like can we keep our jobs?
Um, so much food for thought there.
But let's kind of get into a bit of a kind of wrapping-up phase here, if we may.
And I'm just interested, coming back to you, Andrew, about say one upcoming tech development that you are especially curious about, you're digitally curious about.
SPEAKER_00 (31:16):
I remain curious about this notion of agentic AI, or AI agents.
And they're not just chatbots, they're AI systems that can actually take action on your behalf.
So imagine having this AI system that doesn't just answer questions, but books your travel, negotiates with suppliers, manages your calendar and all your life admin by coordinating with other people's AI agents.
I think we're on the cusp of a world where AI doesn't just
(31:37):
inform decisions but actually implements them.
That's both exciting and a little bit uh terrifying as well.
I predicted way back in 2018 that one day we'd be marketing to AI agents and marketing to robots.
We're getting very close to this today.
SPEAKER_01 (31:51):
Oh, they don't call
you a futurist for nothing, do
they?
Um, good work, Andrew, on that 2018 prediction.
Um, Ghislaine, do you have anything to add to that?
Anything that you're looking at?
I know in the world of body tech, for instance, it's a huge field.
Anything that you're either predicting or curious about or worried about as we uh face the future?
SPEAKER_03 (32:13):
Well, I think I'd, you know, follow through on that, because the digital human twin work I'm doing is entirely around the AI personalized agent, basically.
And um, I really can see what Andrew's saying there um around the marketing to the AI agents.
I can see that discussion starting up in the marketing sector, in the job descriptions, in how we're
(32:34):
gonna write, how we do this copy.
What comes up in my research, in the market research groups I do and the um one-to-one interviews of experts, is some very interesting discussions about: but what if my digital human twin, yeah, actually is talking to your digital human twin and making decisions between them, which neither of
(32:55):
us are involved in?
Yeah.
And um, you know, it becomes more complex than that.
You know, I mean, some of the younger students will say, what if my digital human twin decided to date your digital human twin and didn't tell us, yeah?
Or what if my digital human twin carries my biases and takes that through, and it uses my bias against your digital human twin?
(33:16):
Yeah.
So these become very psychologically layered, because basically you're dealing with uh the physical self and the virtual self, which are personalised.
The marketing will be highly personalised into those scenarios, and actually all the actions from it will get very, very complicated.
So I think we've got a lot further to go looking at that
(33:40):
trust and transparency side that Andrew's mentioned, and also to make sure that we are getting the psychology; we're going much more into the humanities discussion now about AI.
And people, two or three years ago, were just batting it off and going, oh well, we don't need you creatives anymore, we don't need the humanities discussion.
In fact, it's going to end up, I think, within a year, much more
(34:03):
um human-led.
What is this going to be?
How is this going to be for us humans, and how can we make this safe and positive?
SPEAKER_01 (34:09):
Yeah, but it has to
be front and centre, absolutely.
Um, so just before we leave it then, I think we should set our combined listeners to these two podcasts a little bit of homework.
So is there any suggested reading or suggested listening from you, Andrew, out of your uh different episodes?
Do you have one that, you know, would be a good gateway into Digitally Curious?
SPEAKER_00 (34:30):
I have one favourite, uh, I think because it's a little bit different from just talking about AI: it's from Karen Jacobsen, who was the Aussie Karen, the first Australian voice for Siri, way before AI cloning happened.
And she talks about the day that she had done some voice samples, and then years later, someone called her in the car and said, I've just heard you on my iPhone.
And she talks about how that happened.
(34:50):
Now it wouldn't happen today because you have voice clones, but she's made a whole career out of that, and it's just a fascinating human-meets-technology story from the late 90s.
SPEAKER_01 (34:59):
Yeah, it's a lovely
story.
There's a brilliant bit in the podcast where Karen talks about how um her son, who was six at the time, said um something like, you're in everybody's phone, aren't you, mummy?
And assumed that she was in everybody else's phone or something, you know; it really brought it home to her.
It's a lovely edition, so go ahead and listen to that.
Um, and uh I was gonna say, Digitally Curious listeners, if
(35:22):
you are now curious about Somewhere on Earth, if you're looking for somewhere to start, you might want to go back to around the middle of the year, maybe a bit earlier, where we uh did a special edition from the Web Summit in Brazil, and we had Brittany Kaiser on the show.
Um, so she's of um hashtag own your own data, and um, she was also a whistleblower around Cambridge Analytica, you know,
(35:43):
became quite big, so quite notorious around that.
Brilliant speaker.
So Brittany's on the show.
Um, and also we had uh Roost Edge, um, who were very, very good also on that idea of data ownership and data generation and so on.
It was in front of an audience, it was a real vibe, it was really lively, great fun.
Um, and that might give you a bit of an idea of what we're into.
(36:04):
Let's talk about empathy and understanding.
Because, Andrew, when these ideas, you know, the nice stuff, empathy and understanding, bump up against cold, hard data and corporate algorithms, does the empathy get lost somewhere along the way?
SPEAKER_00 (36:20):
Really good
question.
I get asked this question probably all the time in my Q&A, and I'm very fortunate on my podcast that I speak to some of the world experts and thinkers on this.
And the two things that my AI expert guests keep telling me are that AI will never be able to feel empathy and love.
And some out there may disagree with me, but work with me here.
I have a lovely partner called Carol Ann.
(36:41):
I love her dearly, and the moment I knew that I loved her, it was a funny feeling in my tummy, and I had butterflies.
But if you asked me to describe that and write it down and explain exactly why I fell in love with her, I'd find it very hard.
So if I can't explain to another human being why I did that and what this irrational feeling was that led from like to love, how
(37:02):
can I possibly program it?
So I think we're gonna be okay: the humans can look after the empathy and love, the real stuff.
Now, having said that, AI can cheat.
It can tell you, I love you and I'm empathetic, but it won't be genuine.
Um, so I think there's a difference, because generative AI is based on something it's seen before.
And this concept of falling in love is not something a human
(37:24):
can describe rationally.
So, you know, how can you program it?
The whole empathy thing, um, you know, AI can enhance empathy from humans by helping us process and understand different perspectives at scale.
I've seen AI tools that help customer service reps better understand emotional context, or translation tools that preserve cultural nuance.
But AI empathy is ultimately pattern recognition, not genuine
(37:46):
emotional understanding.
It can amplify human empathy by giving us information and insights, but it cannot replace the human element of truly feeling and caring.
And I'm yet to be told that I'm completely wrong on that, but I'm gonna stand by it in 2025.
SPEAKER_01 (38:02):
Ever the futurist
again.
Um, and yeah, of course, people do fall in love.
And this is a real worry: people who might be vulnerable, for instance psychologically vulnerable, do fall in love with chatbots and go through some really hard times around that.
But nonetheless, your point still stands, Andrew, that, you know, that's different from saying that one of these chatbots can demonstrate love or be in love or show real
(38:25):
empathy.
Um, Ghislaine, um, how about memory preservation?
Perhaps there's an element here that, you know, AI can do that.
I mean, AIs repeat ourselves back to us, don't they?
You know, they can bring our memories back to us, which might give us some feeling that we are having something approaching a
(38:46):
relationship of empathy with them.
Does that muddy the waters or not?
SPEAKER_03 (38:51):
Yes, I mean, uh
overall I completely agree with
what Andrew said, and I do think that we need to remember these are algorithms, they're machines, they're tools, they're things that are advancing massively in terms of their ability, and our ability to work with them, in this positive complementary way to make human life better, we hope, rather
(39:11):
than worse.
Yes.
So um, but in my um niche innovation work, I'm very much in touch with the um development of these uh areas around uh mind and memory, and around uh legacy, the legacy side.
So what we are seeing at the moment is a lot of, some of them
(39:34):
quite questionable, um experiments with, e.g., taking um brain waves, um, and going, look, you are feeling joy, you're feeling angry, you're feeling you need time off, you know, you need to slow down; telling back to us what we are feeling, yeah, telling us our thoughts and our feelings.
And also with um large language models of memory being used,
(39:58):
particularly around digital legacy projects, where, you know, really quite fascinating projects where you can hold on to somebody well beyond death.
Yeah, and we know that, you know, the more money you've got, the more you could possibly have a hologram that speaks out the memories from somebody who died um last year or even 50 years ago, yeah.
So um, so I think we've got quite another psychological
(40:22):
shift to do here in actually how we understand what it is, and it relates actually to all of the discussion about um social media and truth at the moment, you know, how we learn to relate to the physical person and how we learn to relate to the virtual equivalent, and what we start to understand
(40:43):
as the source, yeah, whether the source is truthfully physical, with intuition, with reasoning, with um the experience that you bring to life, you know, especially as you get older, what you add in, your critical reflections, your wisdom; and how we learn to understand what's uh a source which is virtual, um, maybe a digital human twin or uh an
(41:05):
agentic AI, which actually is maybe feeding us what it possibly thinks we want to hear, which is what we're seeing at the moment.
So, but I'm keeping my eye on the EEG stuff.
A lot of my colleagues around the world, they know I'm quite um uh not sure about this yet.
You know, there's a long way to go, and there's a lot of claims being made which aren't necessarily true yet.
(41:28):
The mind is a very complex thing, and the feelings in the body, the body-mind interface that Andrew described, the feeling that you get from, you know, uh maybe falling in love or whatever, yeah, we're a long way off that.
SPEAKER_00 (41:40):
So I've been using
AI for language translation.
I've spent a bit of time in France and uh talking to people that are not great with English, and vice versa.
You can actually hold the phone up and have a conversation that is interpreted by a translator.
What's happened in the last few months is that Apple released a new version of their AirPods, which will actually do live translation, and that's good as well.
But I think where AI will really help is by creating an AI that
(42:03):
doesn't just translate language but translates context and cultural assumptions.
So something that could help a British executive understand not just what their Japanese counterpart is saying, but the cultural context behind how they're saying it.
We could preserve the nuance and emotional content that gets lost in translation, uh, literally, helping people communicate not
(42:24):
just more accurately but more empathetically across these cultural divides.
And understanding other cultures is really hard, but an AI system that has learnt about cultural differences can inject that in real time.
And I think that might be something that's AI for good.
SPEAKER_01 (42:38):
And I'm just thinking here, Ghislaine, coming to your work with body data, wouldn't this kind of lead potentially, maybe it does already, into AIs that can help us interpret, um, like, body language, for instance?
And we all know, for instance, if you nod your head in one culture, it can mean something very different, maybe even the exact opposite, in another culture; or if you shake your
(43:00):
head; or, you know, even the handshake, for instance, is completely culturally unacceptable in other cultures.
Perhaps um the machines can help us with that.
Or what were you about to say, then?
Sorry, I don't mean to delay your long-anticipated answer.
Maybe the answer, another answer, is like, for Christ's sake, we're all human, we can figure this stuff out.
We don't need the machines to help us with it.
(43:20):
Let's just be more empathic, normal human beings and stop trying to outsource it to the machines.
I don't know, you pick that up.
SPEAKER_03 (43:27):
The body language um
side is very interesting, and we are actually seeing the um evolution of new um gesture notations coming through very fast.
We've got quite a few already, as we know.
We've got really quite a lot of different sign languages in the world for people who are partially deaf or deaf, yeah.
And many people in different jobs use sign languaging, like
(43:47):
emergency services and, you know, referees, and it's really quite significantly part of a lot of people's jobs.
And now we're seeing all of the gesture stuff start to come in, linked to uh the VR and XR headsets, with these hand, you know, wristbands, and gestures are going to become much more um
(44:07):
commodified in a sense, yeah, where you, you know, will definitely do this, then this, then this, you know, whatever different gestures of your hands, to make various things happen in your AR glasses, yeah, to enable you to move forward in the world.
Um, and I think there are some nice jobs out there.
I keep thinking, oh, if that job had been around in the 90s, I'd have loved that job, yeah, because it's like,
(44:29):
you know, really working as kind of dramaturgical, choreographic experts within big tech companies, yeah, to actually start to get these body integrations into the tools, yeah.
So, but I do think um the issue around cultural translation is still going to be fairly much learnt by the real body, yeah.
(44:54):
And I think um uh an example will be, Andrew mentioned the um translation earpods, yeah.
They get a really good reaction from younger um people, and they definitely will be taken up, whether they're Apple's or whoever's doing it, yeah.
And um uh I've seen um my younger ones all going, yeah, we
(45:15):
want those, you know, etc.
And yeah, one of my stepsons is in um Beijing this year for his year learning Mandarin, his third year at university, full-on Mandarin training.
Um, and every weekend they're going off on some major trip around China to different cultural experiences, and different um, he's going to a lot of theatre,
(45:36):
he's going to, you know, etc.
He'll come back in nine months' time, and there's no way that can be done other than through the physical body.
Yep.
And the body-mind interface is, I think, one of the most fascinating things in the world, one of the most complex, and still one of the least looked-at areas; we don't understand enough about how what we take in through our heads actually, how
(45:58):
that talks to our bodies, and vice versa, how our bodies talk to our brains.
Yeah.
I always say body-mind, everyone says mind-body; I think it's body first, leading a lot of this.
So, so yeah, I think some of this will get sorted, you know, through AI and through various things.
But we're still, like with the neurology side and the mind-reading side, I think we're still quite a way off.
(46:21):
It's more complicated than the technologists would like to think.
There's a long way to go, and the scientists know that, and the biologists know that, and the medicine lot know that, yeah, the body experts, yeah, and the social scientists and behavioural scientists; but the technology sector tends to be rather shallow about it all, which is partly why work like
(46:43):
mine has taken so long to come through.
Yeah, it could have easily been picked up long, long, long before.
SPEAKER_01 (46:48):
Sure.
Okay.
Well, we'd better begin to get to a bit of an end stage here.
Um, Andrew, though, what, is this the end of the podcaster?
Are you getting to the point now, do you think, with your podcast, with your workflow, where you just press the auto-podcast button and it just happens?
SPEAKER_00 (47:04):
Well, let me give you a long-winded answer to that question, because it's actually available today, but let me get to that in a second.
Um, you know, I use AI for podcast production, I use AI tools for transcription, for show notes, even identifying, you know, key quotes for social media.
But what's really exciting, what's coming next, I think, is AI-powered personalization.
You know, imagine podcast apps that don't just recommend shows,
(47:26):
but actually create personalized versions of episodes based on your interests and knowledge level.
We could see AI hosts conducting interviews, or AI systems that help podcasters identify what topics their audience would like to hear.
So here's something you can do to experiment.
I want everyone to try this.
There's a free tool, it's been out for a while, from Google, called Google NotebookLM.
It's notebooklm.google.
(47:47):
What I did to try it out, this is last year, I uploaded the PDF of my book, Digitally Curious, and it created a 31-minute podcast with two AI-powered guests.
It was so good, I actually published it as a podcast, with a top and tail to say this is AI generated.
But wait, there's more.
You can actually now click a button that says interactive mode, and you can ask it questions that go beyond the
(48:08):
script of the podcast.
So I've done this live on stage.
I've actually played the podcast and I've hit the button that says interactive mode with an audience, saying, I'm in a room of 200 people that are digitally curious, where should they start in the book?
And they come back going, that's a really good question.
They should do this.
So it's almost like choose your own ending from the content you've given it, and that's what's available today.
And I encourage people to play with that.
(48:29):
In the future, could we actually have the uh Somewhere on Earth podcast that is built for me, because I want a specific topic you've covered months ago and I actually want something built?
It's a bit far-fetched to think about that, but if you go and try this Google NotebookLM, you will then realise we're on the cusp of something quite interesting.
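NotebookLM itself is a web app rather than something you script, but the underlying idea, turning a document into a two-host dialogue, can be sketched with a general-purpose LLM API. A minimal illustration in Python, assuming the OpenAI client and the pypdf library; this is not how NotebookLM works internally, just the shape of the technique.

```python
# Illustrative sketch only: extract text from a PDF and ask a chat model to
# draft a short two-host podcast script about it. Re-creates the general idea
# naively with generic tools, not NotebookLM's actual pipeline.
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

reader = PdfReader("digitally_curious.pdf")  # hypothetical local copy of the book
book_text = "\n".join(page.extract_text() or "" for page in reader.pages)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model
    messages=[{
        "role": "user",
        "content": (
            "Write a five-minute podcast script in which two hosts discuss the key "
            "ideas of the following book for a curious, non-technical listener:\n\n"
            + book_text[:50000]  # naive truncation; a real pipeline would chunk or summarise
        ),
    }],
)
print(response.choices[0].message.content)
```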
SPEAKER_01 (48:51):
I know, it is freaky.
What I would say, though, is you can tell that the training data, the training audio, has been a lot of kind of American podcasts.
Nothing, by the way, against American podcasts.
In fact, most of my listening is to American podcasts.
It's probably because I listen to so many of them.
I hear so many of those sort of stylistic elements in what this
(49:12):
AI produces.
Uh, so it's freakishly good, but I suppose it's only a few sets of training data away from making it kind of mimic, or become like, the style of the BBC or ABC Australia.
SPEAKER_00 (49:27):
Watch this space.
SPEAKER_01 (49:28):
Sure, yeah.
I get it.
Yeah, I know.
I it's it's happening.
Um so alright.
SPEAKER_03 (49:35):
Well, um, I was just gonna say, I like this choice of endings, and we've seen some experiments with it in the film world, yeah, already.
Um, complex, but very, very exciting.
But I think actually, if we do get to a much more personalised digital human twin, which a lot is reliant on data ownership and
(49:56):
data sovereignty, yeah, um, a very easy weekly output from your digital human twin will be my podcast for the week.
Yeah.
So I think um we will see much more personalised scenarios on an individual level, from me, but also maybe what you two want to hear from me, yeah, from my week or whatever.
SPEAKER_01 (50:15):
Sure.
Okay, one more from you, Andrew.
I'm gonna give you this one, just to finish off with.
Um, you'll quite like it, I think.
Give us a bit more homework then.
So, um, one of your human-created podcasts, if you will.
Do you have another one, from one of your many seasons or series, that you'd like to recommend?
SPEAKER_00 (50:33):
Well, there is so much. I often, at the end of a year, end of a season, take a breather.
So there's a lot on AI, there's a lot on self-sovereign identity.
In fact, the book has 60 podcast guests in it.
But there's another one I'd love you to listen to; it's from my dear friend Deborah Humble, who's a mezzo-soprano based in Sydney.
She has an amazing story where she had literally two hours to get from Brisbane to the Sydney Opera House to sing an opera
(50:57):
that she'd never sung before, because the other mezzo-soprano was taken ill.
She had never told this story end-to-end before, so I got her on the podcast.
It actually is a heap of learning for resilience, uh, training, practice, rehearsal.
It brings it all together.
So it's not actually about technology, um, but it is a brilliant human-interest story.
And I just say that because it's just a little bit different.
(51:19):
I'm actually seeing her tomorrow night singing Handel's Messiah uh here in London.
But uh, I just love those behind-the-scenes stories that you never really would expect.
SPEAKER_01 (51:29):
It was phenomenal,
yeah.
You know, that she was given just hours' notice, having had three hours' sleep, and, I read in between the lines, maybe a little bit too much wine the night before.
All these things that mezzo-sopranos shouldn't do.
She says that's a good thing.
And uh, yeah.
And then she goes on stage at the Sydney Opera House, and then, dot dot dot, to be
(51:50):
continued.
No more spoiler alerts, otherwise you won't go and listen to that uh episode on your own.
But what um you do very nicely is then actually pull out from that, okay, well, what can we learn then about resilience?
What can we learn about dealing with unplanned situations?
What can we learn about um feeling confident in difficult situations?
So there are a lot of kind of real take-homes, even if you don't happen to be a mezzo-soprano, which I by no means am.
(52:12):
Um, okay, that'll do us very nicely indeed.
Um, Ghislaine, thank you very much indeed.
I know you're about to go on some travels, so have a great time, and bring back lots of digital gossip for us that we can enjoy on Somewhere on Earth.
And Andrew, it's been an absolute pleasure and a privilege working with you, sir.
Let's do it again.
SPEAKER_00 (52:28):
Thanks for having me
on and stay curious.
SPEAKER_01 (52:30):
We are part of the
Evergreen Podcast Network.
A huge thanks to our sponsors, Roost and Sazience.
Our production manager is Liz Tuey, and the editor is Ania Lichtarowicz.
You've also, of course, heard from Ghislaine Boddington today.
And, um, Andrew as well; we've been hearing from Andrew Grill, who's with Digitally Curious.
And I'm Gareth.
Thanks for listening.
Bye-bye.
SPEAKER_02 (52:51):
Thank you for
listening to Digitally Curious.
You can find all of our previous shows at digitallycurious.ai.
Andrew's book, Digitally Curious, your simple guide to navigating the future of AI and beyond, is available at digitallycurious.ai.
Until next time, we invite you to stay Digitally Curious.