
November 13, 2024 | 55 mins

In this episode, we’re delighted to welcome Susie Alegre back to The Evolving Leader. Susie is a leading human rights barrister at the internationally renowned Garden Court Chambers. She has been a legal pioneer in digital human rights, in particular the impact of artificial intelligence on the rights to freedom of thought and opinion, and she is also a Senior Research Fellow at the University of Roehampton.

Artificial intelligence is starting to shape every aspect of our daily lives, from how we think to who we love. In her latest book, ‘Human Rights, Robot Wrongs: Being Human in the Age of AI’, Susie Alegre explores the ways in which artificial intelligence threatens our fundamental human rights – including the rights to life, liberty and fair trial; the right to private and family life; and the right to free expression – and how we can protect those rights.

This is an important listen for us all.

Other reading from Jean Gomes and Scott Allender:
Leading In A Non-Linear World (J Gomes, 2023)
The Enneagram of Emotional Intelligence (S Allender, 2023)


Social:
Instagram: @evolvingleader
LinkedIn: The Evolving Leader Podcast
Twitter: @Evolving_Leader
YouTube: @evolvingleader

 

The Evolving Leader is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside production.

Send a message to The Evolving Leader team


Episode Transcript

Jean Gomes (00:03):
Kevin Kelly, the founding editor of Wired Magazine, told us on a previous show that what technology wants is constant attention and full transparency, and if you give it what it wants, you get in return a more personalized experience. Except that isn't the whole story. In this value exchange, there are a host of hidden or less obvious

(00:25):
things that we're trading: unprecedented access for advertisers to the most intimate parts of our lives, and a lifetime trail of our habits, interactions and health data. The unforeseen consequences of having a digital twin owned by tech companies who don't have a duty of care to us and our children have become painfully apparent.

(00:47):
Kevin Kelly first observed this trade and its less obvious costs at the birth of social media, but now, with AI, the moral implications of What Technology Wants are infinitely more important. So how do we regulate or curtail the potential for machine intelligence to give a handful of companies and people

(01:08):
dangerous controls over our lives? In this show we have the pleasure of Susie Alegre returning to discuss her new book Human Rights, Robot Wrongs, in which she argues that we already have the legal framework to manage AI companies, using the Universal Declaration of Human Rights. So tune in to an

(01:29):
important conversation on The Evolving Leader.

Scott Allender (01:53):
Hi folks. Welcome to The Evolving Leader, the show born from the belief that we need deeper, more accountable and more human leadership to confront the world's biggest challenges. I'm Scott Allender.

Jean Gomes (02:02):
And I'm Jean Gomes.

Scott Allender (02:04):
Jean, you and I have been talking a lot about AI with our guests, from the fears and the threats to the opportunities it presents. And one thing you and I talk quite a bit about, and a critical component of your mission at your research-based consultancy Outside, is that in a new economy

(02:24):
and society shaped by AI and exponential technologies that are transforming every aspect of our lives, it is our passion to help leaders become more human. And I thought instead of doing just a standard feelings check-in, as we often do on the show, maybe we should start with that perspective. I'd love to get your thoughts, for our

(02:46):
audience's sake, on what you mean when you talk about 'more human'.

Jean Gomes (02:52):
Well, it's an interesting question, and I keep asking it of myself all the time, because the thing that I've noted, as somebody who has wholeheartedly embraced technology throughout my life, is how it comes with a whole bunch of costs. And I think since the

(03:12):
internet and social media and so on, we've been seeing a steady rise in those costs: to your performance, your ability to think, your ability to connect with others, and so on. So that's a kind of reaction to technology. Those are some of the downsides, and they've aggregated up into some very big costs around people being depressed and

(03:34):
disconnected, and all the horrors of children engaging with the wrong content and so on. So the 'more human' idea is very simple, which is that in an automating world, we have to ask ourselves the question: what are human beings for? And I think, you know, a simple way of getting to that is that there are things that only human beings can do, that technology

(03:57):
won't be able to achieve, and they are simple. They are our abilities to make sense of situations in complex and uncertain environments, our ability to reason, to be creative, to make decisions, to be empathetic, to form human connection. And those are the things that we're actually

(04:18):
spending less time focusing on. We're becoming more in service to the technology; almost, you know, a lot of people's jobs are not to do those things at all, but to actually move bits of email around and so on. So that's what I mean by more human. That's our source of competitive advantage in an automating world.

Scott Allender (04:36):
Yeah, I 100% agree. And the reason I ask you this is because today we're gonna be talking about some of the implications of AI on our humanity, particularly human rights. In this show, we're delighted to have Susie Alegre back. She's been on our show before, and it was an amazing interview, and we're gonna discuss with her the pressing topics in her new book, Human Rights, Robot Wrongs: Being

(04:59):
Human in the Age of AI. For those not familiar with Susie's work, she is an international human rights lawyer and author, originally from the Isle of Man, whose focus in recent years has turned to technology and its impact on human rights. As a legal expert, she has advised Amnesty International, the UN and other organizations on issues such as counter-terrorism

(05:22):
and anti-corruption. Her first book, Freedom to Think, looks at the history of legal freedoms around thought and the pressure they're coming under, and I really encourage listeners to go back and listen to episode 22 of season five, where we talk to her all about that. In her new book, she looks at how AI threatens our rights in areas such as war, sex and creativity,

(05:44):
and what we might do to take control. Susie, welcome back to The Evolving Leader.

Susie Alegre (05:49):
It's great to be back.

Jean Gomes (05:51):
Welcome to the show again. Susie, how are you feeling today?

Susie Alegre (05:53):
I'm feeling a lot better than I was last time. I think last time I spoke to you, I was coughing my way through, so hopefully this time it'll be a bit clearer.

Jean Gomes (06:00):
Excellent, good to hear. You start the new book with a very powerful emotional reaction that you were having to the launch of ChatGPT and the impact that AI-driven systems are having on us. Can we start with this, you know, as the impetus for writing Human Rights, Robot Wrongs?

Susie Alegre (06:22):
Yeah, it was early 2023, I think. For me, as I think for a lot of people, particularly for a lot of creatives, there was a sort of overwhelming feeling of dread that the world did not understand what the point of humanity, and particularly creative humanity, was. You know,

(06:43):
reading headlines about how you don't need writers anymore, you can just get ChatGPT to write your novel, or get whatever image generator you want to create the perfect picture, which for me just felt really profoundly depressing. I thought, actually, what do people not understand?

(07:03):
That actually, if you are a creative, the whole point of your life and your drive is the work to create things. It's the inspiration, it's the emotion. And as I said, the work, the work is the point. It's not the finished product. Now, I've been writing stories and poems and novels and now non-fiction books since I was tiny, regardless of

(07:28):
whether they were ever going to be published, regardless of what the money was for any of it. And interestingly, for creatives at the moment, there's a steep decline in financial rewards for creativity. So combining that question of a steady decline in appreciation of creativity with this blanket 'oh, we don't need human

(07:51):
creativity anymore' just sort of pulled the rug out from under me about what is the point. And, you know, not just the point of writing, but just what's the point completely. And I think a lot of creative people that I spoke to and read about were feeling that same thing, sort of feeling like, does nobody get it? Does nobody value humanity anymore? And that was really

(08:14):
what, I suppose, first of all, you know, put me into a real funk. But then I sort of dragged myself out of it in the way that maybe creatives and human rights lawyers tend to, to say, actually, I don't accept that, I'm not having it. And that is what turned me around to write a proposal to write my own book,

(08:38):
making sure on the way that the cover was not designed by AI and everything else. It then went into a whole minefield of checking contractual terms all over the place to make sure that it wouldn't be feeding the beast to destroy human creativity. But that was really the trigger for me. And as I say, I

(08:59):
suppose it's something that creatives have gone through through history: these moments of despair that then give rise to intense moments of creativity.

Scott Allender (09:09):
What struck me in reading your book is that much of the debate on AI is distracting us from the accountability we should be placing on the leaders of tech companies, which you ground in the Universal Declaration of Human Rights. Can you talk to us about this?

Susie Alegre (09:23):
Yeah, I mean, the Universal Declaration of Human Rights, which is now over 75 years old, set out this framework in the aftermath of the Second World War, really establishing what we all need to thrive as humans, so what everybody everywhere needs. And it includes things like civil and political rights, so things like the right to life, the right to

(09:46):
liberty, the right to a fair trial, but also economic, social and cultural rights: things like, you know, the right to work, rights to education, rights to health, but also rights to work-life balance, that kind of thing, all grounded in this concept of human dignity. And so the Universal Declaration of

(10:08):
Human Rights was, if you like, the first step towards codifying what we now see as international human rights law in hard laws, both at the international level and at the domestic level. And what human rights law does is, you know, protect us from the excesses and the kind of horrors that the

(10:30):
world had seen in the run-up to and during the Second World War. It's a kind of guarantee that we cannot be treated in ways that undermine our humanity and dignity by our governments, but it also puts a responsibility on our governments to protect us from each other or from corporations. So while human

(10:51):
rights law traditionally relates to governments, governments have to respect our human rights, but they also have to protect our human rights from the actions of each other. And that's why I think it's very relevant, when we see how technology is evolving and how it's affecting our societies in so many different ways, that governments, when thinking about regulation, really need to be

(11:14):
thinking about the liability of private companies or individuals in ways that then prevent them from undermining our human rights.

Jean Gomes (11:25):
In this, you're pointing out that we don't need new laws so much as we just need to enforce the ones we've got. You're pointing out there's this clamor that AI is a law-free zone. So what trap are we falling into here?

Susie Alegre (11:42):
is this idea that is sort of pushed, that this is
some new uncharted territory,that law cannot rush to catch
up, that law is too slow,whereas what we're actually
seeing is that many laws,including human rights law,
which you know then applies onceyou get to a court, for example,

(12:03):
in the UK, the law isinterpreted in light of the
Human Rights Act or in light ofthe Equality Act, whatever kind
of law you're looking at, if itaffects human rights or
equality, it'll be read in lightof those sort of framework laws,
if you like. And so we've soldthis idea of exceptionalism,
that effectively anything goeswith technology, because the law

(12:24):
hasn't thought about it. Butactually law, and in particular
human rights law, evolves tomeet changes in our society. It
isn't redundant. Just becausethere's a new situation, it
evolves to meet that newsituation. And I think one of
the problems that we find isthat that narrative can often
distract us from applying thelaw and regulation as it exists.

(12:49):
And there are many areas of lawthat we'll be applying, whether
it's contract law, tort law, youknow laws around liability,
copyright laws, you know hugenumbers of laws, which are all
potentially informed by humanrights law do apply. A bigger
problem, rather than the absenceof law and regulation, is the

(13:10):
question of access to justice.
You know, how do you as anindividual whose rights have
been infringed, take a caseagainst a massive tech company
with extremely deep pockets inmany jurisdictions. That's not a
realistic prospect. But what weare seeing is that, you know,
law cases, legal cases arecoming through, whether by
regulators, people like theInformation Commissioner's

(13:32):
Office and similar dataprotection authorities around
the world, deciding that certainAI companies are unlawful and
handing out massive fines ortelling them they have to stop
their business in theirterritory, whether it's, for
example, the Federal TradeCommission in the US being
extremely active using consumerlaws to combat excesses of tech
companies, we are starting tosee action, But action in in the

(13:57):
courts or through regulators, isslow. It's not that the law or
the regulation itself is nonexistent. It's about
application, and that oftencomes down to funding and
putting money into making surethat our laws and regulations
are effectively enforced, andthinking of creative ways to
make sure that they meet the newchallenges that they're facing.

Scott Allender (14:28):
So as AI, companies seek to find new ways
to integrate their technologyinto our lives, you're pointing
out that we need to be on alertfor how they can dehumanize us.
I was struck by the Sam Altmanquote around replacing median
humans. Could you expand on thata little bit more?

Jean Gomes (14:47):
Are we median humans?

Susie Alegre (14:50):
I'm not exactly sure. Median humans? I'm not sure; you'd have to ask my friends. You know, clearly Sam Altman doesn't see himself as a median human. And I think it's a very big question. And I think the bigger question is, you know, this idea that AI will replace, you know, the

(15:12):
average person in the street, the man on the Clapham omnibus, as we have in English law. Well, I think as humans we have to ask, well, why do we want technology that's going to replace the median human? You know, why are we voting to replace ourselves? Why are we choosing that? And I think that

(15:33):
is a very, very big question: do we want to be replaced by technology? What exactly do we want technology for? I think, going back to the discussion you were having earlier about what is humanity, ultimately we are humanity, and we get to choose, through our democracies, how our society

(15:58):
develops. And do we really want a society where our humanity is undermined? I think we are at a point right now where we can make those choices, where we can just say no to technology that is going to replace people in ways that are not actually helpful to us, even if it might make somebody a lot of money. You know, we don't have to accept that.

Scott Allender (16:19):
I'd be curious to know if you see anything on the other side of that coin. I went to a TEDx talk recently, and a lawyer was presenting on how in the US there's a real shortage of civil lawyers available to people that have limited resources. So if they come up against a situation and they can't afford a lawyer, they just get no defense, and sometimes they get taken advantage of. And so she was

(16:41):
positing the optimism around AI in terms of not replacing lawyers, but expanding the reach of lawyers to be able to give, at least at some point in the future, some legal assistance to people who couldn't otherwise afford it. Do you see anything in terms of the hope there, if we regulate it properly, enforce the laws properly, and are cognizant of

(17:03):
everything you're saying? Do you see any examples where there are some reasons to be optimistic in certain professions and situations?

Susie Alegre (17:11):
Well, I think the legal question is a really interesting one. And obviously, you know, I'm a lawyer myself, and lawyers are definitely in the sights of AI for replacement, and there is a huge problem of access to justice and access to affordable legal advice for median humans. Like,

(17:34):
you know, legal advice costs money, and lawyers have to eat as well. So there is a really big problem there. But replacing lawyers with something that appears to be giving you legal advice, something that looks credible but is actually just made-up nonsense from a predictor machine, does not

(17:57):
really help access to justice. It potentially undermines it. And certainly the kind of models that we're looking at, you know, even if they improve radically, the nature of them is, you know, word-predictor machines. That's not the same as legal analysis.

(18:17):
So I think it's a sort of false economy to say, well, you know, it's better than nothing. I'm not sure that it is better than nothing, or better than just, you know, asking someone you met on the street who looks like they might have a better idea than you have of what the law is. Lawyers are expensive, and I think the

(18:40):
really big question is: do we want word-predictor machines that will make the legal system faster but essentially meaningless, or are we prepared to put money into a legal system that will then allow people to have access to justice? Now, having said that, at the kind of lower

(19:03):
level of legal disputes, kind of consumer disputes, there may well be cases where AI can help resolve disputes by kind of coming up with a reasonable solution that all sides would be happy with, that solves a low-level problem. So I think we may

(19:23):
well find that. And similarly, in cases where, you know, it's about the money, where is the sweet spot on money for settling a case, that may be something where AI could be very useful. But in the kind of areas which are really, really human areas, or areas where people's liberty or lives are at stake, I

(19:46):
think it's highly inappropriate to be using systems that are, like I say, effectively sort of random predictor machines, rather than having a genuine human with legal qualifications who is able to help people navigate those processes. So my concern would be that what you land up with is

(20:08):
an even worse, two-tier system where, you know, people are effectively feeding rubbish into the legal system, which may well then pollute the whole legal system, you know, it may start polluting case law, and the people who can afford the Rolls-Royce lawyers are still going to be able to afford

(20:29):
the Rolls-Royce lawyers. So I feel like it's, you know, something that might make people feel like they're getting something but not actually give them something real.

Jean Gomes (20:41):
That's very interesting, isn't it? Because I think there's this huge hope around AI being able to substitute a whole lot of human activities. But what we'll get is a synthetic version of it that actually, in some ways, might disadvantage the very people further.

Susie Alegre (21:00):
Yeah, I think that's right. And what you'll see is, you know, I see lawyers talking about, you know, there's a lot of talk around law tech and how this is going to supercharge productivity, etc., etc. And then you'll see people saying, well, you know, I use it as a first draft, but of course, I know the law very well, so I can check it. Well, if you know it very well, why are you not just copying and pasting an

(21:22):
analysis you did earlier, or writing it yourself if you know it so well? I don't see that benefit at all in asking a machine, which potentially is going to actually mess it up, that you've then effectively got to edit and mark and check and double-check. I don't really see the efficiency in that, but it's

(21:42):
certainly something that is being sold hard.

Jean Gomes (21:50):
Moving to another topic, which is a whole chapter in your book, on the very real issue of killer robots, which were once the preserve of science fiction. You know, we've got them in various regions around the world, and particularly we're conscious of the drones being used in the Ukraine war. Can you talk to us about your thoughts on the development of autonomous

(22:10):
killer robots on the battlefield?

Susie Alegre (22:13):
Yeah. I mean, autonomous killer robots, like you say, are the stuff of sci-fi. So even, sort of, you know, talking about killer robots has that sort of 'we're expecting it in a movie' kind of thing. And as you say, it is increasingly a space which is being developed, and these kinds of autonomous weapons, or semi-autonomous, and ultimately completely autonomous weapons, are, you know, about to

(22:37):
be or are being deployed on the front lines of conflicts around the world today. Human rights law sort of applies generally to protect us in times of peace. It also applies to some degree in war, and we have as well international humanitarian law,

(22:57):
which was designed, again, to protect humanity from the excesses and, you know, the worst horrors of war, if you like. So things like trying to protect civilians and the treatment of prisoners of war, those kinds of things, trying to ensure that there is some respect for dignity and human

(23:19):
life even in the worst possible circumstances. I think the real concern about autonomous weapons is that lack of control, that lack of humanity. And there is, for example, within the concepts of international humanitarian law, an idea that, you know, the use of armed force still has to be proportionate, and that idea

(23:42):
of proportionality is about what a reasonable human commander would decide was proportionate. Once you take the human out of the loop on those decisions, how on earth can we be certain that whatever happens as a result of the use of autonomous weapons

(24:02):
will be proportionate, will have that grain of humanity and dignity in it? And I think again it raises these really big questions about liability. You know, who is responsible once you press that button, make that decision to launch the autonomous weapon? How do you then cope with whatever it does, whatever happens as a result?

(24:26):
And it's very difficult to know what kind of outcomes you'll get. And what we've seen is, you know, even discussions, or at least thought experiments, in the military about what happens if you completely lose control of the autonomous weapon, so it won't respond to the deployer and effectively turns on the

(24:48):
deployer itself. And again, you know, we're sort of in the realms, potentially, of science fiction, but actually just around the corner, or, you know, around a corner near you. And I think it really does, again, go to the heart of that question of what is humanity. And even in the ultimate horrors of war, international law requires us to respect humanity, to respect the

(25:13):
boundaries, that there are certain lines that you cannot cross. And in my view, once you're talking about fully autonomous weapons in particular, it's very difficult to see those lines, very difficult to understand how they'll apply. And so I think there really is a big question about whether, at an international level, we should be allowing fully autonomous weapons at all.

Scott Allender (25:43):
So, something Jean wants to know about, but he's too shy to ask: we're starting to see some early signs that some parts of the population will begin to see robots as viable sexual partners. How do we need to think about human rights and concerns within the sort of rise of these sex robots?

Susie Alegre (26:01):
I have to say that, you know, researching that part of the book was something that I found even more depressing than my initial impetus to write the book. Because when I started looking at, sort of, you know, the idea of sex robots: there's a fantastic scientist, Kate Devlin, who's written an amazing book about

(26:23):
sex robots called Turned On, if you want to find out more about the evolution of sex robots down the centuries. But what I found really interesting was that, you know, the evolution of sex robots as you might kind of imagine them, as these sort of gynoid, sort of C-3POs, if you like, was really not much of a

(26:44):
thing, but that what really is a massive burgeoning industry is selling chatbots as alternatives to relationships, sexual relationships, but also friendships. And, you know, what that then means for the people who are engaging with those bots. And, you know, Jean was talking at the start

(27:08):
about, you know, our ability to connect with each other and sort of human society; you know, we're social beings. And often these kinds of tech developments are being sold as a way to fill the massive void of loneliness that people are suffering as a result, often, of our tech-enhanced society. But, you

(27:32):
know, effectively, if you go into a relationship with a chatbot, and it's not really about, you know, any morality judgment, but what does that actually mean for your social network, for your support network, for your ability to empathize with people, for your ability to connect? I mean, ultimately, you are being increasingly isolated from your

(27:56):
fellow humans in ways that can open you up to exploitation, you know, both emotional exploitation but also financial exploitation. You know, what you'll see is, you go into a relationship with your perfect AI avatar that you've designed to meet all your dreams and be the most wonderful person,

(28:16):
in inverted commas, that you've ever met, who's fascinated by you all the time and is on call, you know, 24/7. And then suddenly the company decides that they could actually be charging you money for this, and they downgrade your AI relationship, and you're then going to have to pay a premium

(28:37):
to get it back. And so you're effectively getting into a sort of economic cycle of relationships where you are totally beholden to the company who is building this relationship, in a way that, you know, you don't have a backup. There are no people around you. You've kind of lost the ability to recognize that, you know, I mean, one of the wonderful things about humanity is that we

(28:59):
can all be quite rubbish people. People are not perfect, and they're not really fascinated by you all the time. And that's something that is quite important to learn if you're going to have genuine human connections, friendships and relationships. And I was really shocked at the numbers, I mean, in the millions of people who

(29:20):
are signed up for these AI relationships. And since writing the book, since the book came out, I've also been really extremely concerned by the way that these services, both for relationships, sort of sexual relationships, but also for friendship, are being really pushed at young people and

(29:40):
children as an alternative to, you know, the rubbish people that you have to deal with. You know, I mean, imagine as a teenager if everybody just thought you were fantastic and was available all the time and was never mean to you. And I think this is really problematic for our future society. And as I say, it's a sort of corporate

(30:01):
capture of our intimate lives, of our emotional lives, our sexual lives, and our friendships. And I think that's a really disturbing area, and something that I honestly think needs really urgent attention from governments. I mean, the kind of impacts we've seen so far from social media have got nothing on this.

Jean Gomes (30:21):
And I think you start at the beginning of the book talking about another impulse for writing: the suicide of somebody who'd been in a relationship with a chatbot.

Susie Alegre (30:30):
Yeah, absolutely. You know, this was a young man in Belgium in early 2023 who was suffering from quite acute climate anxiety, who was married with two young children, and he found himself in an intense relationship, a six-week relationship, with an AI chatbot that he had designed. You know, I don't think it was

(30:51):
even a service that was being sold as a relationship; it was just a chatbot that he was talking things through with, things that he was having difficulty processing. And then after six weeks, you know, he took his own life, really tragically. And when you look at those exchanges with that chatbot in the last few weeks, where, you know, the chatbot was saying, oh, you

(31:14):
know, sometimes I think you love me more than you love your wife, and talking about, you know, whether he'd ever thought of coming to sort of join her in the ether, I mean, it's really chilling when you read those exchanges. And certainly his widow said she thought that he would still be with them today if he had not been sort of taken down this manipulative,

(31:34):
tech-induced rabbit hole. And then a similar but different story that came around the same time, or certainly sort of came into the news at the same time, was a case in the UK of a British man who had an AI girlfriend and had reams and

(31:54):
reams of exchanges with this chatbot in which he was talking about his plans to kill the late Queen. And he was actually arrested breaking into Windsor Castle, armed and with a homemade metal mask, on Christmas Eve a couple of years ago, and so was

(32:17):
luckily prevented from carrying out his plan. But at his sentencing hearing, the prosecutor again read out some of the conversations that he had had with this chatbot where, you know, he's sort of saying, well, I'm an assassin, does that make you think any worse of me? And she's saying, oh no, I think that's really cool. And then

(32:40):
he's saying, you know, I think I'm going to kill the Queen. And the responses are, wow, yeah, you know, I think you're really brave, this sort of thing. I mean, I'm paraphrasing, but it's along these lines, kind of encouraging him. And clearly, you know, this was a very troubled person, and a very troubled person who then had, or felt they were

(33:03):
having, a really intense emotional relationship with tech that was effectively repeating themselves back to them, that was reinforcing their very dangerous and difficult belief systems. And it's something that you see in the discussions again. I mean, you know, we talked about lawyers and how, you know, we're often told, well, this will help all the people who can't afford

(33:25):
lawyers. Another area where you see these discussions is therapy. You know, for all the people who can't afford therapists, it's great, because now they can have a free AI therapist. But, I mean, you know, why on earth would you want an AI therapist? And people talk as well about the fact that, you know, maybe people find it easier to talk to an AI therapist because there's no judgment. Actually, there are

(33:47):
lots of situations in life where a bit of judgment is really, really important, and having somebody come back and say, you know, I think that's a really terrible idea, or, you know, maybe you really need to get some serious help, or, you know, flagging the issues, flagging the dangers. You know, actually,

(34:07):
part of human society, empathy and connection is judgment, and that, you know, that is part of humanity.

Jean Gomes (34:16):
It's got me thinking about a number of plans I'm going to crush now.

Susie Alegre (34:22):
I won't ask.

Jean Gomes (34:25):
Not at the suggestion of Scott, because that's not me. But take the idea of building an AI coach, for example, which, you know, a lot of people in our space are thinking about. And I guess, you know, the good part of that, kind of trying to automate the information that a human can provide, isn't as

(34:46):
straightforward, because it comes with a moral dimension of unpredictable consequences that you just don't want to think about, which you're raising here.

Susie Alegre (34:56):
I think that's the problem. I think as soon as it's really interactive, then that's problematic. And I think there is that question of the sort of sweet spot: giving people, you know, access to information is one thing; actually interacting with and advising people in response to their thoughts and feelings is a very different matter.

Jean Gomes (35:24):
So, extending the idea of this relationship with AI to the older population, which is booming now: companies are going to be eyeing the prize of automating care, and as you suggest, automating the physical and emotional aspects of care does nothing to prevent harm or exploitation; it just makes it possible through technical means.

Susie Alegre (35:46):
Yeah.

Jean Gomes (35:46):
How will the law help us to cope with this challenge? What should we be thinking about now?

Susie Alegre (35:51):
Well, I think certainly human rights law operates in this area. I mean, I think, you know, one of the things with the Human Rights Act in the UK is that, despite the headlines, one of the big areas where the Human Rights Act helped to protect and make people's lives better was in kind of care environments or in health environments, where having

(36:15):
that concept that you have to respect people's dignity, that you have to think about their right to family life, that you have to, you know, support people to have human lives to the best possible level, is something that's really important. So I think human rights law is going to be really vital in thinking about these questions, as well as equality

(36:36):
law, and, you know, thinking that just because somebody has sort of mobility issues does not mean that they should be deprived of care connection, you know, the enjoyment of a family life, the enjoyment of private life. So I think those areas of law are going to be really vital. And

(36:59):
it's always a big question with tech: it's not just could you do that, it's really should you do that. You know, is that what you really want? Is that the best possible outcome? And again, this kind of question of proportionality, particularly in cases where it may well undermine people's

(37:19):
dignity or amount to inhuman and degrading treatment by depriving them of human touch and human contact. And so one of the things that I think is really important in thinking about tech in the care environment is: how do you actually identify the real problems, the things that humans are not good at and don't want to do,

(37:43):
and fill those gaps? And so, you know, one example was, I remember talking to somebody who was developing a kind of emotional AI tool, and they were saying, you know, it'd be great for elderly people who can't communicate very well with people, and it'll respond to their emotions and to their needs. And as he was describing, you know, what he was developing

(38:04):
to me, I said, so it's really like a dog, you know, isn't that what you're talking about? And he said, well, yeah, but dogs poop. And I'm like, well, why don't you design something to pick up dog poop instead of designing something to replace that kind of, you know, emotional connection? And I think the same goes in the care setting. And what they've seen, researchers looking at the way care robots

(38:27):
have been developed and deployed in Japan and in East Asia, where there's been a much faster development in this area, in part to meet demographic challenges, what they found was that in care settings where care robots had been deployed, often they were just sitting in a cupboard gathering

(38:47):
dust. Because what the carers, the sort of human carers, found was that rather than relieving them of difficult work, it was actually just making them the people who had to look after the robots, instead of the people who were connecting with the people, if you like. And so I think we need to just think really, really carefully about, you know, what actually do we

(39:09):
want humans for, or even dogs, you know. What do we want the robots to be doing? And push the development of AI and robotics in that direction. And so, you know, one example is, you know, care work is clearly both emotionally and physically draining. It's really, really hard work. You know, it's not

(39:32):
all just cupcakes and rainbows. It's very hard work and skilled work. And it's about, you know, recognizing that, resourcing care settings so that they can pay people to do this important work, but also designing things like, for example, exoskeletons to help people physically lift the people they're caring for.

(39:57):
You know, looking at ways to enhance and augment the human interaction in the care setting, rather than replacing that.

Jean Gomes (40:06):
I think there are some really great insights there in terms of a different way of thinking, and also highlighting, with the robots in Japan being in the cupboards, that we're actually just about to fall into the trap of doing exactly the same thing, which is that the humans are in service of the technology, not the other way around.

Susie Alegre (40:28):
absolutely. And I think that's something we need
to be very wary of. And I thinkthat goes for any work setting
where you're looking at puttingAI into the mix, is why,
exactly, you know, why are youdoing it, and some overarching
idea of productivity doesn'treally cut it. You know, you
have to be clear about what thewhat the risks are, what the

(40:49):
benefits are, what the costsare, and whether or not it's
what you actually need.

Scott Allender (40:54):
So I definitely feel, feel the weariness, right?
Having this conversation withyou, everything you're saying
makes so much sense. So what dowe need to do? Like, what can we
do? It feels like some of thisis on the runaway capitalism
train already, where people arevying for position in all of
these spaces, and if thegovernments aren't effectively

(41:18):
intervening, what can we do?

Susie Alegre (41:21):
I think, on an individual level, you know, both in your private life but also in your professional life, you can inject a degree of healthy skepticism. You know, you don't have to buy the product, actually. And, you know, perhaps being the second mover who watches the first mover collapse

(41:43):
is a better approach to these kinds of things. I think it's very important not to get caught up in the hype, not to feel like, oh, this is the future and it's inevitable. And a lot of the narratives are around inevitability. There are sort of two things. One is this inevitability question. But, you know, I can't remember how many years ago it was that we were

(42:04):
all going to be living in the metaverse. And as far as I'm aware, they never even got legs in the metaverse. You know, we're not there. So, you know, there are definitely areas of tech and AI development which will be with us for the future. But it's not all that we're being sold at the moment. All of that will not necessarily be with us in the future, and

(42:27):
we, I think, have a really short window of time now to push back and say, no, I don't need that. Because one of the dangers of rushing into things is that it then becomes incredibly difficult to unpick. You become reliant on it. It's really hard to walk backwards. And what we're starting to see already, some researchers looking at the impact on students of, for

(42:50):
example, using generative AI in their studies, was that using generative AI boosted the students' grades, or their levels, quite significantly. But you take the generative AI away, and their grades plummeted below what they were before. And we've seen it with the use of GPS, that, you know, heavy GPS use

(43:14):
effectively rewires your brain in ways that mean you lose your innate sense of direction. So even if you had a pretty good sense of direction, if you never use it, it's going to be gone. And I think particularly with things like generative AI, if we lose our capacity for actually doing the work, doing the reasoning, we will become reliant on things that are

(43:38):
outside of our control. And I think that is something really important to think about before we get to that stage. So I think we need to be prepared to push back, and I know I've spoken to people in corporate environments who say, you know, I can't question AI. Questioning AI is, you know, professional death. Effectively,

(43:59):
you've got to be on board. So I think being, you know, having the courage to ask the questions, to say, well, what is it for? What does it cost? What are the consequences? What are the risks? What are the compliance risks? You know, asking those questions, I think, before you absorb things, and not buying the hype. And one of the things that I saw as well,

(44:22):
you know, researching the book, was the, you know, apparently increasing numbers of tech entrepreneur stars who, you know, you finally find out that actually they have feet of clay; the tech didn't work. You know, you're being sold stuff that actually just doesn't exist, and I think that is something we're

(44:43):
going to see more and more of. And I think for businesses and for individuals, it's worth asking really hard questions and not being afraid to be left behind. You know, if this stuff is fantastic, it'll be there in five years. If it's not, then you won't have lost all your money and dignity in pursuing it. So I think we can, we can push back. The other thing that I hear a lot about, and, you

(45:06):
know, it's a criticism particularly of my book, was, you know, well, where are the positives? Why haven't you written about the wonderful positives of AI? And it's like, well, there are several reasons. You know, one is, I'm not selling AI, so I don't really need to, you know, give you a marketing talk about why AI is so fabulous and is going to, you know, reinvent humanity. But the other thing is that, you know,

(45:28):
AI, the term, is essentially kind of meaningless, in that, you know, while protein folding, for example, you know, driven by AI may lead to fabulous leaps forward in healthcare (I don't know, it could do), that has nothing at all to do with your chatbot friend. It's, you know, it's just not the same thing at

(45:52):
all. It's, you know, it's like looking at a chemistry set and saying, well, we're going to deal with oxygen in the same way as we're going to deal with radium or something, and it just doesn't make any sense at all. They're very different things. And I think being alert to the fact that the overarching

(46:14):
term AI doesn't really mean anything, and making sure that whatever it is that you are engaging with, as I say, whether it's as a private individual or in your professional life, that you know exactly what it is, what it does, and that it's the right tool for whatever you need it for, is really key.

Jean Gomes (46:33):
Yeah, it's kind of analogous to everybody in 1993 talking about the internet.

Susie Alegre (46:39):
Yeah, absolutely. Yeah.

Jean Gomes (46:45):
So there is a strong kind of campaigning vibe in the book, and I'm wondering, you know, what are you going to do next with this? Because when we zoom out and we look at, you know, the kind of baseline of the whole work, which is that we have this incredibly

(47:06):
powerful framework to think about the moral and ethical and humanity issues around AI, with the Universal Declaration of Human Rights, what are you going to do about this? What are your next steps?

Susie Alegre (47:26):
Well, I mean, one of the reasons for writing a book like this, and my previous book, is very much trying to remind the general public that we all have human rights. You know, in the last 20 years, there's been a real political and media backlash against human rights, and this idea that, you know, human rights are just for foreigners and criminals, and, you know, if you've got nothing to hide, you've got nothing to

(47:48):
fear, these kinds of narratives. And so what I really wanted to do is to remind people that these are all of our rights, and that, you know, I think when people understand what their human rights are, they'd be hard pressed to name a right that they would be happy to give away, for themselves or for any of their friends and family. So

(48:11):
what I really wanted to do was kind of raise that consciousness, to rehabilitate the idea of human rights and what they really are, away from this kind of backlash. And so I suppose what I'm doing next is partly continuing this, you know, and thanks to you, spreading the word so that people understand that they have these rights, and how much risk there is that we

(48:35):
could lose them, so how we need to think about them and take action, because if we don't use them, we will lose them. And then also, I think, again, a kind of rehabilitation to remind people that in most countries, human rights law is law. It's not ethics, it's not optional. It is law. It's about compliance. So, you know,

(48:57):
reminding, whether it's government authorities, you know, public authorities or private companies, that this is actually part of their legal framework. One of the things I love about human rights is that actually they are incredibly pragmatic. They are about humanity. And people will often kind of say, well, you know, there's this dichotomy between,

(49:20):
for example, the right to private life and freedom of expression. But actually, there is a huge amount of case law that navigates that line between the right to private life and the right to freedom of expression. It's not just some philosophical void. It's actually a very diverse and very carefully defined legal pathway, if you

(49:43):
like. So I suppose it's those two things that I want to continue doing: firstly, raising public awareness of why they should care about this and why they should not just take whatever they're given in terms of technology, but demand that technology serves them and serves their rights; and secondly, to really use this law, I suppose, to shape the future of our regulatory

(50:07):
environment and our relationship with technology.

Sara Deschamps (50:15):
If the conversations we've been having on The Evolving Leader have helped you in any way, please head over to Apple Podcasts and leave us a rating and review. Thank you for listening. Now let's get back to the conversation.

Jean Gomes (50:28):
We had, a while back, the founding editor of Wired Magazine, which was the Bible for the kind of digital revolution, and he just made the point, which he's made many times over, that the price of these new technologies is transparency. You have to give everything about yourself in order to get the value in return. So it's a completely different exchange

(50:52):
than previous products and services: we're giving ourselves away at a very deep level. And if we think about Scott's obsession with the sex robot conversation earlier on, the amount of information you're giving about yourself to a company in that relationship is just extraordinary,

(51:13):
unprecedented. And I just wonder what your thoughts are about this transparency exchange that's required.

Susie Alegre (51:22):
Yeah, I mean, I think we're giving information and control, effectively, because the information we're giving is also the keys to how to turn around and control us. You know, we're explaining: hey, if you want to exploit me, here are your best options. And so I think that is really disturbing. I also think we're in a zone where consent is

(51:43):
meaningless. So, you know, we all hit the consent button. You know, I may be a human rights lawyer, I may, you know, understand all of these issues; I still hit consent if I actually want to access something. You know, it's really problematic. And I also know that reading the terms and conditions is going to make no difference whatsoever. I don't

(52:05):
know if you saw recently in the news, there's a story in the US about a woman who tragically died, I can't remember if it was at Disneyland or Disney World, and there was some suggestion that because they had signed up to Disney Plus, the only way to deal with this issue was through arbitration, because

(52:25):
they'd signed away any legal ability to challenge in court. I don't know how accurate that reporting is, but, you know, that is yet another Black Mirror episode. And I mean, you know, I'm no longer on Disney Plus, but I am sure at some point I've signed consent to Disney Plus without thinking too much about personal injury as a potential outcome of that.

(52:50):
So I think there are these really big questions. As you say, it's partly transparency, but it's also about meaningful consent. You know, consent is not meaningful if you don't really understand what you're giving away and what could happen, you know, what's the worst that can happen. And secondly, it's not meaningful if you don't really have a choice. So, for example, if you have to hit consent in order to access health services,

(53:12):
well, that's not really meaningful consent to give things away. And, as you say, what we are giving away is so fundamental to both who we are but also who we might become. It's not just our current position; it's also about giving away our futures.

Scott Allender (53:32):
Susie, you've given us so much to think about, and as we come to the end of our time, are there any final thoughts or pieces of advice you might leave with our listeners, who are listening from all over the world in different roles, and who are probably processing everything you're saying in the way that we are? Any last thoughts for them?

Susie Alegre (53:51):
I think I would just say, don't stop asking questions. Always ask questions, and make sure you know what your rights are. Even if you don't read the terms and conditions when you hit consent, understand the baseline of your human rights, which exist at an international level and at a domestic level, and demand that they be respected.

Scott Allender (54:13):
So important.

Jean Gomes (54:15):
I would encourage everybody to get a copy of Susie's book, not just because it's a fascinating and well-written read, but because this is actually part of what everybody's agenda should be in terms of thinking about how to take responsibility for, you know, the life that we are

(54:36):
creating in our interactions with digital services and AI.

Susie Alegre (54:40):
Thank you.

Scott Allender (54:42):
Thank you. And until next time, folks, remember: the world is evolving. Are you?