July 23, 2025 | 25 mins

With artificial intelligence developing rapidly, we are at a crossroads: how do we keep innovating while regulating this new technology? But this is more than a technological question. As my guest, Verity Harding, states, “AI needs you.”

In this episode, I sit down with Verity Harding to discuss her book, AI Needs You: How We Can Change AI’s Future and Save Our Own.

How we apply AI is a multidisciplinary issue. We need everyone: tech people, teachers, students, nurses, doctors, and everyone in between.


Topics:

  • Why AI Needs Everyone
  • Technology's Shadow Self
  • The Socio-Technical Approach to AI
  • "What books have had an impact on you?"
  • "What advice do you have for teenagers?


Bio:

One of TIME’s 100 Most Influential People in AI, Verity Harding is director of the AI & Geopolitics Project at the Bennett Institute for Public Policy at the University of Cambridge and founder of Formation Advisory, a consultancy firm that advises on the future of technology and society. She worked for many years as Global Head of Policy for Google DeepMind and as a political adviser to Britain’s deputy prime minister.


Socials:

Lessons from Interesting People substack: https://taylorbledsoe.substack.com/

Website: https://www.aimingforthemoon.com/

Instagram: https://www.instagram.com/aiming4moon/

Twitter: https://twitter.com/Aiming4Moon


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
I'm going to start on Zoom and then start this on my microphone. Alrighty, well, welcome to the interview. Thank you so much for joining me today.

Thanks for having me.

Yeah, you published a fascinating book, AI Needs You: How We Can Change AI's Future and Save Our Own. And to start off with kind of the obvious question here, why

(00:23):
does AI need me?

Speaker 2 (00:27):
Well, AI needs all of us, because it's a really important and pervasive technology that has the potential to influence lots of different aspects of our lives, whether that be people who are at school, people who are in work, people with families, people in the creative world.

(00:47):
AI has the potential to have a huge impact. But at the moment, the main voices in that debate, about whether AI should be used here or it should be used there, how we think it's going to benefit us or how we think it might hurt us, that conversation is one that's

(01:12):
dominated by quite a small group of people, and I think it's really important that that conversation is broadened and that many more people have their say when it comes to what the future looks like. And what the future looks like, that should be up to all of us, not just the people sort of building and creating AI.

Speaker 1 (01:29):
You propose in the introduction of your book basically the shadow self of AI, the idea that technology mirrors us. That's been a big, I guess, counterargument, or at least a big promotional part of AI: well, it'll all work out in the end. And you say, well, no, not necessarily. We have to be very intentional about the way we develop this.

(01:49):
So what is the shadow self that we should be thinking about as we involve ourselves in AI, and why should we get involved?

Speaker 2 (01:58):
Yes, and you're right to use the word intentional. I think that's the word I think about when I think of AI: how can we be really intentional about what we're doing, and aware? Now, when you think about big technological changes in the past, it feels like they were just always there, or that what happened was in some way inevitable.

(02:19):
But what I show through my research in the book, and looking at the history of transformative technologies, is actually that's not the case. Technology is hugely influenced by the sort of society and culture and politics and values of the time. So while, of course, we think about things like the Industrial

(02:41):
Revolution coming along and changing how we live and work, that's true, but actually that technology, and all technologies, have been really deeply influenced the other way as well, by humans: not only in terms of what gets built, but, you know, what gets funding and what doesn't get funding, and who gets funding and who doesn't. Those are all very political decisions, or very human

(03:03):
decisions, but also how that technology is used, how it is regulated. And, you know, throughout the book I show all these different examples of how we could have made some different decisions, things might have gone a different way, and that might be good and that might be bad, and it might also be not obvious whether it's good or bad, and it might also be that people disagree about whether it's good or bad,

(03:25):
because, of course, everyone has these different viewpoints. So the shadow self, when I talk about that, is to say: if technology is just us, if AI is just us, and that technology, no matter how innovative and new, still represents and reflects the societies that we're living in, then that means

(03:49):
that it's going to represent all the great things about humanity, inventiveness and creativity and all the wondrous things that we can do, but it means it's also naturally going to represent some of the more disturbing aspects of human nature. And so I argue that we need to be intentional about trying to push that technology towards those better qualities.

Speaker 1 (04:12):
You propose this interdisciplinary, cross-discipline thing where we have a democratic approach to not just managing the regulations around AI but also proposing the future of the technology itself, expanding beyond just tech experts as well. And at first it might not seem apparent why, in a highly

(04:35):
technical thing, you would want people who, I don't know, high school students, for example, or philosophy majors or history majors, who maybe don't feel equipped to deal with a big technical computer question like this. Why should they be involved as well?

Speaker 2 (04:52):
Well, there's this word called socio-technical, and what that means is an approach to technology that is not technical only, but is wrapping in the social sciences, wrapping in wider society's needs, and thinking about technology issues not just as technology issues, but really as human issues. So an example might be: when it comes to AI, do we think it's

(05:17):
okay to have AI mark a student's term paper? And someone might say, yeah, I mean, of course, if it can do a good job, then why not? And others might say, no, you know, if a student's worked really hard on something, they want to know that a human being has looked at it and used their human judgment on it. It doesn't really matter about the outcome; the process is

(05:39):
really important. So those are human questions. You can't answer something like that just with a technical "can it do a good job or not?" But you will see that a lot in the AI debate. People will say, well, you know, AI is more likely to make a more neutral decision than a judge, and so why not just, you know, create a program that can do it?

(06:00):
But then I would say, well, actually, we take it really seriously when we're taking away somebody's life or liberty in a criminal justice setting, and that deserves to be a human interaction based on centuries of human evolution in terms of the law, and it doesn't matter whether the technology is accurate or not.

(06:21):
So that's a sort of socio-technical approach, and when you have that, then of course it can't just be technical people that are making those decisions, because they will be very expert in their area of science or technology. So you will have a brilliant computer scientist, for example, who's incredible at building AI programs, but what do they know

(06:43):
about the criminal justice system? Or a school, or a hospital, you know? Or the creative industries? Not very much. And so what I say to people is: you don't have to be a deep AI technical expert to be involved in AI, because you may not be, but you are an expert in something, and whatever that

(07:03):
thing is is really important to working out the socio-technical questions when it comes to AI.

Speaker 1 (07:10):
It's fascinating, because when we think about papers being graded or AI involved in the criminal justice system, we're not making arguments all based on the accuracy of the decisions (which we are, and there are terrible examples of how AI has gone awry in some of the algorithms and the data they've been trained on), but we're also appealing to

(07:30):
something that's more. For example, writing a paper: even as a high school student, I spend so much time writing the paper that I want someone to experience it with me. I don't just want the grade itself; I'm trying to convey an experience. When you listen to a podcast, it's not just the information you're going after, it's the experience and interaction between the guest, the idea, the host, as well as the listener,

(07:53):
and you're engaging in the conversation as well through that. There's something that we seem to be missing when we replace people with computers in some of these instances. Now, I'm personally... yes, yes, go on. No, please.

Speaker 2 (08:06):
I mean, I think you're thinking about it in that really smart, sort of holistic, socio-technical way. You know: what actually do I want here? What actually are we trying to build as a society? Are we trying to build more human connection, or less? And that doesn't mean that AI will always lessen human connection.

(08:26):
There are some incredible examples of AI where I think it can really help us a huge amount and not detract from our experience in any way. But there's not going to be a one-size-fits-all approach, and so I think you're completely right to be thinking about it in that way. What society do I want to live in first, and then can AI help

(08:47):
me get towards that society? And if it can't, then maybe we don't actually need AI in that case. And if it can, then great, let's be really thoughtful about it.

Speaker 1 (08:56):
I mean, I think that the computational and the technical side of this, to me, is absolutely fascinating. I love programming and analyzing algorithms themselves. So the solution isn't that, well, you know, we should just ban AI or not do anything with AI, because the case study I always point to when people tell me that is: well, what about stroke victims who have

(09:17):
lost the ability to speak, and now, with algorithms, we can reproduce their voice and allow them to speak again? That's incredible, and it furthers human interaction and human connection as well. As you repeatedly point out in your book, it's the intentionality behind it, and technology is not always a replacement for progress. In fact, sometimes progress is through humans as well, it seems.

Speaker 2 (09:39):
Absolutely, and you're right, that's a great example. I think there's a lot of exciting potential in AI in healthcare. We know that there are AI programs now that can analyze, you know, retina scans of the eye, or mammograms, to detect

(10:01):
cancers, perhaps at an earlier stage, or perhaps just do that at a scale that we're not able to when it's purely a human review. But that won't replace the doctor or the nurse, because of course they bring so much more than just looking at that one image. What it might do is enable them to

(10:23):
do their work in a more efficient way. And that's why it's really important to look at these things on a case-by-case basis, you know, and think very carefully about whether it's appropriate or whether it isn't, and, as I say, think about what type of society you want to live in. And then, is the AI program helping us get towards that,

(10:44):
or is it detracting from that? And try and manage those effects in the best possible way that we can. What comes across, I hope, in the book is that we really do have the potential to influence this technology. It's very human decisions that end up deciding which way these

(11:06):
things go. And that's, again, to your point earlier about why it's important that there's a diverse representation in those discussions and debates.

Speaker 1 (11:20):
I think we've made a pretty good case for why people should be involved, and why the public outside the technical areas should be involved as well. Now let's get to the pragmatics: how do we actually do this? And you propose, basically (now, you can correct me on this), if you think about it from the American perspective, bipartisan support towards an intentional future, as well as debates and having to compromise to make good policy.

(11:42):
Now, it could just be that I'm a teenager and an American growing up amidst an election cycle, but it doesn't feel like we have a lot of bipartisan talking, really, about anything. And it's pretty chaotic from a political perspective, and people have a lot of deep-seated hate towards the other side of the aisle.

(12:02):
And how do we then propose something as big as a policy about AI and AI's future? Like, how do we get through this, essentially?

Speaker 2 (12:13):
Well, it does feel very divided and polarized at the moment, in lots of places, not just in the US, and I think that does make it more difficult to find a political solution. I would like to see political leadership that says, you know, we are

(12:33):
going to do this in a consensus-driven way. But it's harder to do that through a political process, and sometimes, and this is one of the examples used in the book, it's best to

(12:54):
sort of almost outsource that to trusted experts. And so you could see, rather than this being something that's decided politically, at the political level they decide to appoint somebody neutral and independent who brings trusted experts together, and, even if they disagree, they debate and they discuss in good faith, and they produce a report. And that's what happened in the UK back in the 1980s, in the

(13:17):
early stages of biotechnology, when we had to reckon with a lot of these ethical questions as well. And it worked very, very well. So, yes, I would like to see it done politically. I think it is harder, when it comes to the polarisation that we see today. If it's not going to be done in a political

(13:39):
process, and of course there's lots of areas where we don't need politicians to be involved, we can have what they call permissionless policy making. We can see informal coalitions come together. So you might see lots of heads of schools come together to decide how to tackle something, and we have seen

(14:01):
examples of this in Hollywood: the unions negotiated directly with the Hollywood studios, and they made some decisions about AI through that process. So it doesn't always have to be political. And if you, like you are, I know, in the US at the moment, are struggling with that, then I think it's all the more important that more people take it upon themselves to say, hey,

(14:29):
you know, I might not be an AI expert, but I have a view on this, and let's try and pull some people together to think it through.

Speaker 1 (14:32):
I've been reading, in preparation for an interview with former NIH director Dr. Francis Collins, about vaccine hesitancy and kind of expert mistrust as well. How do we deal with something that's that sensitive as well? Because a lot of people in the US feel as if sometimes experts either don't represent them or are after them, and that seems

(14:55):
to be a consensus that you feel in political rhetoric, and I'm not sure exactly where this all originated, but it's definitely something that my generation is growing up amidst. How do we go about talking about AI in a way that both explains it to people and also proposes these policies?

Speaker 2 (15:15):
Yes, it's a very good point, and I think all the more reason why we need to take it upon ourselves, where we can, to try and talk to people that... trusting the other person's motives and sort of, you know, whether they're a good person or

(15:46):
not. And so when it comes to AI, I think, you know, I hope to see more of that. But to your question, you know, I think something that I've been not pleased to see over the past couple of years is what I think is almost an overinflation of AI's capabilities, these

(16:07):
warnings that it might become a sort of sentient or powerful intelligence that sort of takes over and is super dangerous. And I don't like that for two reasons. One, I think it distracts us from some of the actual, real, tangible, pragmatic issues that we have to think about when it

(16:28):
comes to AI today, because you're thinking, well, this is only going to be a problem when it gets to this kind of far-off and, frankly, theoretical, unproven and, amongst the AI community, very much disagreed-about future. On the other hand, I think if you kind of encourage people to believe that it's all-powerful and it will be super powerful in

(16:52):
future, then it makes them think that it must be pretty powerful right now, and then you can end up sort of outsourcing your judgment to AI programs, because you think, well, you know, if everyone's saying it's so potentially powerful and dangerous, it must be able to at least, you know, handle this small problem that I'm dealing with. So I don't like the way that we've talked about it in the

(17:15):
past couple of years as a society. Neither do I want to talk about it, as we talked about earlier, only as something that's dangerous. Because, while there are considerations that we have to think about with AI, and the harms that it can definitely do and has done, again, I think that can

(17:36):
also encourage people to sort of step away and disengage. When people are frightened, disturbed, they tend to disengage, and I think that is going to leave us all the poorer, if people feel like this isn't something that they want to bother getting involved in, that it just sounds too difficult, too frightening, too scary. And, of course, you know, science and technology always

(17:57):
move us forward, and so we do want the good side of this to come through. So I'd love to see us talking about it in a more measured, practical, thoughtful way, like, you know, we're doing today. And so it's great that you're doing this podcast, because I think it's the type of debate that warrants calm, you know, warrants rationality, and, you know, that can be hard to find

(18:19):
sometimes these days.

Yeah, absolutely. This has been a fascinating conversation so far, and I think this kind of approach of intentionality, and thinking, and having conversations about all of this, will definitely, hopefully, help shape the future in a good way, I guess.

Speaker 1 (18:36):
We'll see. I hope so. Wrapping up with these last two questions: what books have had an impact on you?

Speaker 2 (18:44):
You know, I'm a big reader. I love books, and I've read them since I can remember, as soon as I could read. So I've had books that have influenced me my whole life. When I was younger, I was really influenced by To Kill a Mockingbird, which we read at school, and I know that book has touched a lot of people, and it definitely had a big impact

(19:05):
on what I chose to go on and study. It made me sort of really interested in studying history and the past and what we could learn from it, so that we don't repeat those mistakes. And a historian that does that beautifully now, who I read many years later, because I read that when I was a teenager, and unfortunately that was a long time ago, Taylor, that I was a teenager, so the one I read today is this Harvard historian

(19:29):
called Jill Lepore, and she wrote an incredible one-volume history of the US called These Truths, which really is an incredible use of history to bring to life some of the current issues, as you talked about today. Why are we in this position? And she does a great job of showing how we got there. And that was really my inspiration for how I chose to tackle this

(19:52):
book and how I chose to tackle AI: really, to say we've been through this before as societies. We've dealt with hugely transformative technology before. I mean, imagine if you showed an iPhone to somebody from 1850, you know. We've navigated that before, and we can do it again. But we should learn from history to help guide that

(20:15):
future. So I think I really do find that those types of books, the ones that are able to do that, bring the past to life and contextualize it and situate it and use it to help us explain today. I think those really, really influence me.

Speaker 1 (20:31):
What advice do you have for teenagers?

Speaker 2 (20:34):
Well, look, I think AI is an area that really needs younger input into it. And right at the end of the book, in the conclusion, I tell a story about some teenagers banding together in the UK to protest against what they felt was an injustice relating to

(20:55):
technology. During the pandemic in the UK, the government decided that students would get their final grades, these are the grades that decide whether you go to university, which university you can get into or not, and they decided that these grades would be decided by

(21:16):
algorithm. So there wouldn't be exams; they would just take their predicted grades, because we have a system here in the UK of predicted grades. They would take their predicted grades, and then they would use an algorithm to adjust for how good of a school you were at. And if your school was an underperforming school, you would get your grades marked down.

(21:38):
And obviously this is deeply unfair, because if you're an incredibly talented student who just happens to go to maybe a more underprivileged and therefore less highly performing school, you might not get the grades that you deserve. And people felt very strongly about this, understandably, and they protested. They went to, you know, Downing Street, the equivalent

(22:00):
of our White House, and they stood outside and they campaigned and they protested, and the government overturned that decision. And I quote in the book the prime minister at the time, who said, you know, sorry, this was a rogue algorithm, or something like this. And it's a really incredible example of the power that you do have. And I think sometimes, growing up in this society, it

(22:23):
can probably feel like, well, you know, do I have much power? But the advice I would give to teenagers is to encourage them to really know that they do have a huge amount of power, especially if you team up together. But even one person alone can make an enormous difference.

(22:43):
And so I'd encourage them to think about that, to think what they think of AI. Try and work out what they think first, before anybody else tells them. Read my book, but read other books too, and then use that voice in creating the type of future that you want to see.

Speaker 1 (22:57):
Well, Ms. Harding, thank you so much for coming on and speaking to us, and promoting the idea that AI needs us all, throughout all of the different segments of our societies. It's been a great conversation. I've really enjoyed it.

Speaker 2 (23:10):
Thank you so much. It really does. It does need everyone. I appreciate you, Taylor, for taking the time today, and I really enjoyed our conversation too. Thank you.