
November 29, 2022 · 36 mins

In this second episode, YS Chi explores the subject of Artificial Intelligence and Ethics, also sometimes called Responsible AI. Like any technology gaining prominence, there is both substance and fuzziness to be found in discussions around AI. There are also many grey areas that are sometimes presented as black and white. Vijay Raghavan, chief technology officer of LexisNexis Risk Solutions, explains how we look at Responsible AI and offers a helpful framework that differentiates real-world bias from data bias or algorithmic bias.

Also in this episode, we hear from our first external podcast guest, Kirk Borne, chief science officer at DataPrime and one of the world's top artificial intelligence influencers. Kirk talks about how AI has evolved and the importance of transparency and human oversight.

This podcast is brought to you by RELX.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
YS Chi (00:00):
The Unique Contributions podcast is brought to you by RELX. Find out more about us by visiting RELX.com.

Kirk Borne (00:10):
I saw it in business and sports. You know, even the recommender engines on ecommerce stores were already using machine learning to recommend products to people. And all of that just fascinated me. I mean, as a scientist who loves working with data to make discoveries, this was like being a kid in a candy store.

YS Chi (00:41):
Hello, and welcome to series three of Unique Contributions, a RELX podcast where we bring you closer to some of the most interesting people from around our businesses. I'm YS Chi, and together with my guests, I'll be exploring some of the biggest issues that matter to society, and how we're working to make a difference. We have some exciting new guests lined up for this new series. So I'm very

(01:03):
excited to dive in. Today, I'm exploring the topic of AI in business and asking the question: how should we think about responsible AI? And what are we doing about it? Later in this episode, I'll be getting the thoughts of AI expert and influencer Kirk Borne, but my first guest this week is Vijay Raghavan, who is the EVP and Chief Technology Officer of

(01:26):
LexisNexis Risk Solutions here at RELX. Vijay now joins us from our office in Georgia to explore the perennial question of data analytics and ethics in AI. So Vijay, welcome back.

Vijay Raghavan (01:41):
Thank you for inviting me, YS. It's a pleasure to be on your podcast.

YS Chi (01:45):
So I'm going to start off with basics for our listeners who may not be as familiar with the technology lingo. There are a lot of buzzwords, right: AI, machine learning, deep learning, and so on. And I think this huge web of terminology can be confusing at times. Can you start by defining each of these terms and explaining how they are connected to one another?

Vijay Raghavan (02:07):
Yes, you certainly do have these terms that tend to overlap, so maybe I can define them using a few examples. Let's start with artificial intelligence. That's actually an umbrella term that loosely refers to the representation of human intelligence in machines. And there are many ways of representing AI. For example, if you think back to the days of Deep Blue in the late 90s, if you recall, this was the IBM computer that beat Garry

(02:29):
Kasparov, the reigning world chess champion. That was cutting-edge AI at the time. And the way it worked was it had a library of opening chess moves, and then went through a tree of rules and probabilities to calculate the best next chess move up to a certain depth. Deep Blue used what was called symbolic AI, or good old-fashioned AI. Today, we think of that as just one type of AI, right? Machine learning is another type of AI. And as such, it falls

(02:51):
under this AI umbrella that I just talked about. And it's actually a set of AI techniques. The essence of machine learning is that you train an algorithm with lots of historical data, which is where the term big data comes from, right. And the algorithm effectively learns how to make predictions based on these large volumes of historical data, or big data. The idea is that the

(03:12):
algorithm gets progressively better over time, because it gets trained on more and more historical data. A good example is the recommendation engine within Elsevier's ScienceDirect, because it recommends articles related to what the subscriber is reading, based on how it has seen similar subscribers picking and choosing similar articles. There's actually a great example of machine learning I was reading

(03:33):
about a few days ago. YS, are you familiar with Peter Jackson, who made the Lord of the Rings movies? Absolutely. Right, he made a docuseries about the Beatles recently called Get Back. And he made it out of 50-year-old tapes. The audio was recorded in mono back in 1969, during the Let It Be album sessions, and so the audio was garbled, the conversations were submerged, you couldn't really hear what the Beatles were

(03:54):
saying. So Peter Jackson hired some machine learning experts to restore the audio, by training the software to recognise Paul McCartney's voice and John Lennon's voice using other recordings available, and surfacing the audio. I thought it was a fascinating use of machine learning, obviously not in the context of what we do, but it's a good example. So when you move on to deep learning, just

(04:14):
like machine learning is a subset of AI as a whole, deep learning is a further subset of machine learning. It's an evolution of machine learning that tries to mimic the human brain using a concept called neural networks, similar to how the brain, if you will, has neural networks. And what that really means is that a deep learning algorithm requires less training from an AI practitioner than a machine learning algorithm does. A good example is, again going back to

(04:36):
games, the AlphaGo computer, which learned how to play the game Go. It's a great example of deep learning at work because it improved by itself, learning from each game that it played against increasingly sophisticated players, as opposed to humans training it after each game. So it ultimately got better than the best human Go player. And so to your point,

(04:58):
YS, even within deep learning, when you look at some of the techniques, like neural networks, there are specific kinds: convolutional neural networks, or CNNs as they call them, which are good for processing images and visual pattern recognition; then you have RNNs, or recurrent neural networks, which might be better suited for something like language translation on the fly, like Google Translate. So really, all these techniques have

(05:20):
become more powerful and more sophisticated, because computers have become more powerful and data has become more readily available. That has led to a symbiotic situation where AI techniques have proliferated, as people have taken advantage of the hardware and the availability of data.
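
To make the "train on historical data, predict for the next user" idea concrete, here is a minimal, purely illustrative sketch in the spirit of Vijay's ScienceDirect example. It is not RELX or Elsevier code, and the reading history is invented: it recommends articles by finding readers with similar histories (cosine similarity) and surfacing what those neighbours read.

```python
# Illustrative only: a tiny "readers like you also read..." recommender.
import numpy as np

# Rows = readers, columns = articles; 1 means the reader read that article.
history = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
])

def recommend(history, reader, k=2):
    """Suggest unread articles favoured by the k most similar readers."""
    target = history[reader]
    norms = np.linalg.norm(history, axis=1) * np.linalg.norm(target) + 1e-9
    sims = history @ target / norms          # cosine similarity to everyone
    sims[reader] = -1.0                      # exclude the reader themselves
    neighbours = np.argsort(sims)[-k:]       # k most similar readers
    scores = history[neighbours].sum(axis=0) # popularity among neighbours
    scores[target == 1] = -1                 # don't re-recommend read items
    return [int(i) for i in np.argsort(scores)[::-1] if scores[i] > 0]

print(recommend(history, reader=3))  # articles that similar readers chose
```

The more reading history the system accumulates, the better these neighbour comparisons get, which is the "progressively better over time" property Vijay describes.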

YS Chi (05:37):
It makes a lot of sense. So now that we have a better idea of what all these terms mean and how they are related to one another, let's explore some of the big questions we're facing today, especially within our industry, and how our work at RELX is connected to this. With any conversation about AI, you cannot avoid the issue of bias in the data or algorithm.

(05:59):
So how big of an issue is bias in data? Is removing it a realistic goal for technology companies like RELX, or are we approaching this challenge with the wrong mindset?

Vijay Raghavan (06:12):
It's a great question, and bias itself is an interesting term. It has a pejorative connotation, but it isn't necessarily always a bad thing, right. Obviously, if the source data is collected in a skewed fashion, or the algorithm itself is flawed, or if the AI practitioner is biased, all of which is possible, there are all kinds of bad bias that can creep in. But set those things aside for just a minute, and

(06:35):
I'll come back to them. The point of an AI algorithm is to programmatically identify bias within the data. In this case, when I say bias, I'm referring to the patterns within the data that need to be surfaced, like a clustering algorithm that separates the data into what it thinks of as logical clusters, right. So if there is no bias in the data at all, the algorithm is not going to be able to create clusters or

(06:56):
to find patterns that allow you to make meaningful decisions. So the goal isn't to remove all bias. First of all, to the point in your question, that often is impossible. But it's also not always desirable. The goal, of course, should be to prevent bad bias from creeping in. So for example, relevant to our industry, if you're building a credit scoring model using machine learning, are we

(07:16):
training that model with data that is broadly representative of the population that you want to implement the model for? Or are we training it using just data that's affiliated with a few affluent zip codes, or just from some low-income zip codes, because those are the only places where we happen to be able to collect the data, right? That's the kind of thing that can introduce bad bias. And we do need people to recognise and prevent that kind of thing.
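
Vijay's representative-data question can be checked mechanically. Below is a minimal sketch with invented numbers, not a description of how LexisNexis Risk Solutions actually does it: it compares the make-up of a hypothetical training sample with the population a credit model is meant to serve, and flags heavily over- or under-sampled groups.

```python
# Illustrative only: flag training data that is not representative.
import numpy as np

groups = ["affluent_zip", "middle_zip", "low_income_zip"]
population_share = np.array([0.20, 0.55, 0.25])  # who the model will serve
training_share   = np.array([0.60, 0.30, 0.10])  # who we sampled data from

for name, pop, train in zip(groups, population_share, training_share):
    flag = "  <-- potential sampling bias" if abs(train - pop) > 0.10 else ""
    print(f"{name:15s} population {pop:.0%}  training {train:.0%}{flag}")
```

A human reviewer still has to decide what counts as an acceptable gap, and whether a flagged skew is bad bias or a legitimate pattern, which is the human element Vijay comes back to next.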

YS Chi (07:39):
Right. So that's where human intervention is absolutely
necessary?

Vijay Raghavan (07:43):
Exactly right. So when we talk about models, especially models that impact society, that impact consumers, their livelihood, their wallets, we don't just, you know, use a deep learning model and cast it out into the wind and say, here's a model that's gone to production. There's always a human element in terms of understanding the attributes that go into the model, the data sets that go into the model. There is a review after the fact to make

(08:05):
sure there's no bad bias that's crept in. So there's a formal process to make sure that we're doing it the right way, and the ethical way.

YS Chi (08:13):
Right, so this is one type of bias, about the composition of the data itself. The other issue is whether that composition, or the data, encroaches on data privacy and digital security. And the COVID-19 pandemic has really shown us that we need rigorous data security and privacy regulations in place to protect ourselves, whether it's

(08:34):
ecommerce fraud attacks or large-scale ransomware attacks on corporations. We've seen just how vulnerable information in datasets can be. What are your predictions for data-related regulations in the next few years?

Vijay Raghavan (08:49):
Yeah, up until now, regulators have frankly not been AI-savvy enough to understand how to regulate AI. And that's problematic, because that can lead to a patchwork of contradictory laws that vary from state to state. It reminds me of when Mark Zuckerberg went to the Senate a couple of years ago, and one of the senators asked him, "How do you make money if Facebook is free?", and he had to explain,

(09:11):
"Senator, we sell ads." So there's a knowledge gap in the legislative branch when it comes to social media monetisation, let alone something like AI. All that said, I do expect regulations around AI to increase. That's okay, as long as they increase in a manner that's consistent across jurisdictions. If you take the California Consumer Privacy Act, the CCPA, which a lot of people have heard of, that's not actually AI regulation, it's

(09:32):
more around privacy and security regulation. But it is a good example of one state setting the tone around privacy and security regulations in the US, and other states modelling their regulations on the CCPA as opposed to each of them rolling their own, because that'd be completely chaotic. That's how I'd like to see regulations head. What I see from early previews is that lawmakers want providers of AI-based solutions

(09:55):
to conform to certain tenets, like using complete and representative datasets to design AI models, or for companies to test their AI algorithms for discriminatory outcomes. And regulators will want us to do what we say and say what we do, similar to the regulations that exist around unfair, deceptive and abusive practices, and those kinds of

(10:15):
things. And they're going to want AI to be transparent, explainable and auditable, meaning available for independent review, all of which is fine. But the caveat is that the regulations will need to be context-dependent, by which I mean there should be more regulatory oversight around the explainability and transparency of AI as it relates to things like consumer credit models, because that directly impacts people's wallets or lifestyle or livelihood, as

(10:38):
opposed to the same AI algorithm that might be used to flag, say, information security threats, which is arguably a more benign scenario. For what it's worth, in the RELX AI survey that we did, YS, that you're familiar with, we found that only 62% of US business executives are confident that they can comply with AI regulations without significant additional

(10:58):
investment. And this goes hand in hand with the waning desire on the part of US executives to see increased regulation around AI, for obvious reasons.

YS Chi (11:09):
You know, I think that clearly shows the other side of the coin, which is that collecting data has been the engine of innovation for us in the last couple of decades at least, and more visibly, it is somewhat debatable how much consumers actually trust it. They're, I guess, becoming more aware of this issue now. How do we establish

(11:33):
trust with our customers and secure their privacy, but still deliver things that are actually beneficial to them?

Vijay Raghavan (11:41):
Very good. A big part of this, in my opinion, YS, is messaging to consumers and to our customers and regulators about what it is that we do, and how we do it, in a very transparent way. As we've seen with some of the negative news around Facebook or Clearview AI, a lack of transparency around what you do and how you do it makes you very unpopular, and you can be seen as untrustworthy, right. For someone like us at RELX, we

(12:02):
would not be in business if our customers didn't trust us to do the right thing for them, and for consumers. We use our data for good every day, whether it's to help lawyers win cases, or doctors make the right diagnoses, or underbanked consumers get loans and mortgages, and so on. But even beyond that, any US consumer can come to our website at LexisNexis Risk Solutions and ask for a full file disclosure

(12:24):
report that tells them exactly what data we have on them. That builds trust, and we give them that information for free. If they find something incorrect in the data we have on them, they get to tell us, and we are then obliged to correct it after we verify it. That's an example of how you establish trust. Here's a different way of answering that question, YS. If you think back to, let's say, 100 to 120 years ago, during the early

(12:45):
days of credit, the way in which a credit bureau might have established your creditworthiness was to send someone to your neighbourhood, or to my neighbourhood, and ask my neighbours, what kind of a guy is Vijay? And if my neighbours liked me, and were not biased against me for some reason, they would probably say nice things about me, and I'd be seen as a good credit risk. But if they thought that I was in some way different from them, and were biased against me, I would be

(13:05):
seen as a bad credit risk. So my point is, there was actually less transparency and more bias 100 or so years ago than there is now with our AI-based credit scoring models. And we need to be able to explain those kinds of things to consumers, to put things in the right context. When it comes to regulators, it obviously comes down to faithfully abiding by regulations and laws. There's an alphabet soup of regulations

(13:28):
that someone like us has to abide by. So if you're affecting a consumer's livelihood, to my earlier point, we have to abide by the FCRA; there are certain rules we have to follow. If it's a law enforcement scenario, that's more likely to be a non-FCRA situation, but we still have certain regulatory obligations around privacy and security. And because of all these things, our customers and

(13:49):
suppliers and consumers trust us.

YS Chi (13:52):
You know, listening to you, as much as technology has become complicated and whatnot, it does come down to very basic human nature, doesn't it: transparency and authenticity and sincerity in dealing with people. And whatever regulations we come up with, they need to go back to that concept of data for

(14:13):
good.

Vijay Raghavan (14:14):
You're absolutely right. YS, have you seen an old movie called Judgment at Nuremberg?

YS Chi (14:20):
No, I have not.

Vijay Raghavan (14:21):
OK, I would recommend it. It's an old movie about the Nuremberg trials of the Nazis after World War Two. And in the movie, there's a pivotal scene where the judge, who's played by Spencer Tracy, tells a defence attorney who's defending the Nazis on trial, "Counsellor, you are a very logical man, but to be logical is not to be right." And that sentence has stayed with me. So watch the movie, because

(14:43):
the point I'm making is, we were already abiding by certain principles around privacy and security and transparency, even before some of these regulations were implemented, because we knew they were right, and not just logical. So long before we created models using machine learning or deep learning, we already had best practices around how to collect data, how to link data accurately, how to create attributes and models in ways that minimise bad bias. We

(15:06):
aren't waiting for regulations to be passed to keep us honest and ethical.

YS Chi (15:10):
Yeah, and a lot of the negative feedback for some companies is because they have not been so forthright about what exactly it is they're doing, for fear that they were somehow giving away their secret sauce, when in fact they should have been upfront about it. So I'm glad that RELX is doing a good job on that front. And as always, Vijay, in 15 minutes, I

(15:33):
can learn more from you than in 15 weeks of class. I can't thank you enough for joining us today, and for sharing your insights and wisdom on this very contemporary topic. Thank you so much.

Vijay Raghavan (15:45):
It was my pleasure. Thank you for having
me.

YS Chi (15:53):
Like Vijay, my next guest is definitely an AI expert. Kirk Borne is the Chief Science Officer at DataPrime, a B2B provider of data and AI services, where he is responsible for developing data teams at client companies. Kirk is also an internationally renowned influencer and thought leader. We are very honoured to speak with him today on our podcast as

(16:15):
our very first non-RELX guest.
So before we delve into AI ethics and responsibility, welcome, Kirk.

Kirk Borne (16:23):
Thank you. It's great to be here today. And I
look forward to this conversation.

YS Chi (16:27):
Well, I have to ask: you are originally a trained astrophysicist, and here you are now in the world of AI. What's the overlap between these two? And are there any elements of your astrophysics training that are relevant to what you do today?

Kirk Borne (16:45):
Well, for me, it's a completely continuous transition. It may seem odd to people to go from one of those to the other. But they're both very similar, in that they're very focused on solving problems and discovering insights from data, and building models of complex things, informed by

(17:05):
data. And so it's computational, it's data-intensive, it's a scientific process that relies on creativity and curiosity. So all the things that inspired me to become an astrophysicist, to become a scientist, I'm still a scientist; none of that has changed. In fact, I've had a variety of different jobs in my career, with jobs with the Hubble Space Telescope, I spent 20 years working at NASA, I was a

(17:28):
professor of astrophysics at a university for 12 years, I worked at a major international consulting firm for six years, now I'm working at a startup part-time and actually started my own little business recently. And doing all of these different jobs, I have only been in one career, and that's being a scientist.

YS Chi (17:48):
It is true that you have a fundamental foundation as a scientist. But you've also been applying that scientific intuition and talent and interest in so many different directions, right, as you said: as a professor, as a businessman, as an entrepreneur, as a researcher, and so on and so forth. What was it like to find applications all the

(18:08):
time?

Kirk Borne (18:10):
Well, I think the very first indication that what I was doing had application beyond just the sciences was about 20 years ago, after the terrorist attacks in the United States on 9/11. I got contacted by the White House to brief the President on data mining techniques. Basically,

(18:30):
how do you discover patterns in data to build predictive models of things? I was doing this stuff in astronomy, and I didn't realise that the things I was doing actually had this international importance. And so I started looking around and I discovered, well, businesses are doing this, medicine and healthcare are doing this. You know, even sports, I mean, sports analytics, there's a whole movie about baseball,

(18:51):
and how people use statistics in baseball. So two things occurred to me. First of all, I was surprised; at that point, I was just beginning to learn about machine learning and data science, and the very little bit that I knew was considered expert level, because very few people were doing it. That surprised me. And the second thing was how what I was doing in astronomy had this massive application and benefit, you

(19:14):
know, far beyond the sciences. And I saw it in business and sports and entertainment and medicine, logistics, you know, retail. Even the recommender engines on ecommerce stores were already using machine learning to recommend products to people. And all of that just fascinated me. I mean, as a scientist who loves working with

(19:34):
data to make discoveries, this was like being a kid in a candy store. I just couldn't get enough of all the fun things that I saw people doing, and I just wanted to do that more and more.

YS Chi (19:45):
Yeah, I bet you were just so excited. But then, you know, it's still going; there are so many problems we need to help solve with data.

Kirk Borne (19:53):
That's actually true. And actually, one of the reasons I left NASA was I realised that we're going to need to train the next generation to do this. That was 18 years ago; I made the decision to leave my lovely, wonderful work at the space agency and go to a university. It was always my dream to be a professor at a university, and I became a professor of astrophysics 18 years ago, but I never

(20:17):
actually taught astrophysics; we laid out and started the world's first undergraduate data science degree programme. That was really my goal: to bring data science to the masses. It's not just for the sciences, it's for everyone.

YS Chi (20:31):
Well, I'm so glad you did, because we still need millions more people in our world to solve these big, big problems. So why don't we do a little bit of training here? Vijay explained the differences between AI, machine learning and deep learning, some concepts that can often become quite confusing. For the benefit of people who do not have a technical

(20:52):
background in either datascience or mathematics. Can you
please help describe someconcrete applications of AI? And
the business sectors that AI isimpacting the most already?

Kirk Borne (21:04):
Well, the impacts are, I mean, everywhere, of course. But the really big use cases we see are in finance, and healthcare, and insurance and government, logistics, manufacturing. Oh wait, I'm just practically naming every industry there is. But for me, I'd like to start with a definition of terms, and I guess that's the professor inside of me. So I tell people that data

(21:27):
science is a scientific process, right? It's the application of the scientific method to discovery from data. So what we're doing is data science, because we're doing discovery from data: we test a hypothesis, we observe something, we infer how it works, that's called building a model, right? You try to build that model and you tweak the model, you change the

(21:48):
parameters, you change the form of the model in order to see how you can best improve it. That's a scientific process. And the way we do that is we use a set of mathematical algorithms called machine learning. So machine learning is simply mathematics, that is, pattern-discovery mathematics: finding trends, correlations, clusters, outliers, anomalies,

(22:08):
associations, all of those things are just mathematical techniques. And then when we learn what the most meaningful patterns are, through the data science method and through machine learning algorithms, we deploy those things. And the actionable thing that we deploy is the artificial intelligence, the actual thing that does the work for us. And

(22:28):
so it could be a recommender engine, or it could be a cancer diagnosis. For example, image understanding is one of the categories of AI; that is, looking at images and understanding what's in the image. For example, self-driving cars, autonomous vehicles, need to understand what's in front of the car, what's near the car. And so image understanding is one of the big applications of AI.

(22:49):
Another one is language understanding. So for example, when you talk into a chatbot, or when I do voice search on my phone, right; I want to search for something on a search engine, I just say the words, I don't type the words on my phone. So that's voice understanding, but it goes both ways, and that is, you can have a dialogue. And that's called a chatbot, or a conversational AI. And we use these all the time without even

(23:11):
realising it. But the most important one for me is not just language understanding and image understanding, it's context understanding; that is, the other data that tells you what's going on in that environment. For example, during the COVID period, there was a tremendous change in the kinds of things that people purchased. All the models, predictive models of what kinds of products people would buy at different times of

(23:33):
the year, or even different days of the week, or hours of the day, all that completely changed when everyone was working from home and we had this traumatic thing called the pandemic. And so those models were all wrong. The models had to understand that there was some context. It wasn't just time of day; there was a context in which all these things were affecting the model.
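
Kirk's pandemic example is what practitioners call distribution drift. As a minimal sketch with invented numbers (not any particular retailer's system), one crude way to catch it is to check how far live data has moved from the data the model was trained on before trusting the model's predictions:

```python
# Illustrative only: detect that the world has shifted under a model.
import numpy as np

rng = np.random.default_rng(0)
train_daily_orders = rng.normal(100, 10, 365)  # hypothetical 2019 history
live_daily_orders  = rng.normal(160, 30, 30)   # hypothetical lockdown month

def drift_score(train, live):
    """How many training standard deviations the live mean has moved."""
    return abs(live.mean() - train.mean()) / train.std()

score = drift_score(train_daily_orders, live_daily_orders)
if score > 3.0:  # crude rule of thumb; real systems use stronger tests
    print(f"Context shift detected ({score:.1f} sigma): retrain the model")
```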

YS Chi (23:57):
That's right. And in fact, this is one of the things that I think many people are fearing: that somehow this tsunami of talent that we need to develop and train is all technical, that it's all mathematical. Whereas you are now talking about this contextual skill set, which can also be framed as domain expertise, isn't it?

Kirk Borne (24:18):
Well, I'm so glad you said that, because that's one of my big passions, going all the way back to that story where I said I left work at the space agency to train the next generation. I wasn't talking about training the next generation of astrophysicists, or trying to train the next generation of mathematicians; it's about recognising that every single person in the world needs to understand how their

(24:40):
data are being used, how data can generate value. When I taught my students, I taught very introductory courses as well as the advanced courses. In an introductory course, I always brought out my smartphone during the first day of class, and I said, you all have one of these, right? And they'd say, "Yeah", and I said, "Well, you know that you're generating tonnes of data: what you're looking at, what you're searching for, you know,

(25:01):
what videos you're looking at, what things you're reading. All that is generating data for businesses, and they're making money; you're generating data. Hey, don't you want to be part of that revolution and have value in your own life, not just create value for some other business?"

YS Chi (25:19):
Yeah, I think that this issue of everyone participating also requires some rule-setting, so that we are using these new skills and new capabilities with responsibility, right? So how do we ensure that AI does not

(25:42):
lead to unintended consequences, particularly around biases?

Kirk Borne (25:48):
So there's two ways to deal with this. One is just to remember that you always need to have the human in the loop. That is, you need to have someone with some domain expertise and some human compassion or empathy, if you want to call it that, who looks at this application of AI, looks at the algorithm, and then sees if it's equitable, and sees if it's just,

(26:09):
and sees if it's doing the right kinds of things. But there's also a mathematical way of approaching this problem. And I really like something I heard one company talking about at a conference. They were a financial services company who basically made loan decisions for individual people. And one of the things they did with their algorithm is what they call reverse engineering. And so they

(26:31):
said, when they built the credit scoring model, they removed all the factors that they should not be using in the model, for example gender, or maybe ethnicity, and other factors like that, which we shouldn't be using in making decisions. So they removed those when they built the model to predict what a credit score or a credit risk

(26:51):
might be for a particular individual. So after they built the model, they reversed it. They said, okay, given that we say do or do not give a loan to this person or this set of persons, that is, credit risk is either high or low for this particular group, let's reverse engineer and see if we can infer what the gender or ethnicity or whatever those factors were that were

(27:14):
those factors were, that wereremoved, intentionally see if we
can infer what those factors arewithout even knowing what they
are just using the output fromthe model, reverse engineering,
see if we can work back toinputs that we should not be
using. And if they can infersome of those factors that
should not be part of thedecision making, then they
realise that some kind of biashas leaked into their, into

(27:37):
their algorithm. And then they can address that. So they're taking a very mathematical, technical approach, which is a really good way to look at this, because we want to have some objectivity, not just subjectivity, in how we handle this. Because after all, that's what bias is, right? It's putting too much subjectivity into our models.
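
The reverse-engineering audit Kirk describes can be sketched in a few lines. The code below is a toy reconstruction on synthetic data, not the unnamed company's actual method: it trains a scoring model without the protected attribute, then checks whether that attribute can be inferred back from the model's outputs. If it can, a proxy has leaked bias in.

```python
# Illustrative only: a toy version of the reverse-engineering bias audit.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 5000
protected = rng.integers(0, 2, n)                  # attribute excluded from the model
neighbourhood = protected + rng.normal(0, 0.5, n)  # proxy correlated with it
income = rng.normal(50, 10, n)                     # legitimate factor
approve = (income + 5 * neighbourhood + rng.normal(0, 5, n) > 55).astype(int)

# Step 1: train the scoring model WITHOUT the protected attribute.
X = np.column_stack([income, neighbourhood])
credit_model = LogisticRegression().fit(X, approve)
scores = credit_model.predict_proba(X)[:, 1].reshape(-1, 1)

# Step 2: reverse engineer. Can an auditor recover the protected
# attribute from the model's outputs alone?
auditor = LogisticRegression().fit(scores, protected)
accuracy = accuracy_score(protected, auditor.predict(scores))
print(f"Protected attribute recovered with {accuracy:.0%} accuracy "
      f"(about 50% would indicate no leakage)")
```

Here the leak comes in through the neighbourhood proxy; finding it is the objective signal that, as Kirk says, lets the company go back and address the bias.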

YS Chi (27:54):
Right. Knowingly or unknowingly? Precisely, right. So beyond implicit bias, it's become clear that data and AI can do some real harm when put in the wrong hands. So the words responsible AI are now being tossed around very often. It's a big discussion for, you know, governments, academics, business

(28:14):
people, ethicists to discuss. Do you have any suggestions for a responsible AI framework, something that will be broad enough to span all industries that make use of AI?

Kirk Borne (28:26):
Actually, I have an idea about that. This could be a little detour here, but I'll come right back to your question. When I was at the university, we did some educational research, basically into what kinds of things we can use to teach students data science better. And I had to sign a

(28:47):
form and fill out some applications about what's called informed consent in human subject research. This was very novel to me; it was not surprising at all to the education researchers, nor should it be surprising at all to medical researchers, because they involve human beings. All of my research in my career had dealt with distant stars and galaxies. You know, I did not need the consent of

(29:10):
those stars or those galaxies to do research on them. But if we're doing research on people, we need this informed consent, and there are principles of human subjects research, like do no harm, informed consent, shared benefits, shared risk. And I realised AI is like that, because the implementation of AI across the world, across all of our

(29:30):
industries, is really a grand experiment on humanity. Right, we're actually doing a grand experiment, because it affects human beings. And so we need to take these principles of human subject research into account: that is, do no harm first, informed consent, so that, you know, people get to have a choice. And there's also this concept of shared benefit and

(29:50):
shared risk. And that's quite an interesting one, because there will be risk, but there will also be benefits. We can't expect we're going to do things that have zero risk. But the point is that if there is risk, and if there are benefits, they should be shared equitably across all populations and users, you know, not benefit one population over another or

(30:11):
harm one population over another.

YS Chi (30:14):
So following your insights there, is it possible that we allow some room where those who experiment and find some damage are not unreasonably punished, and there are some rules for being able to correct the action? Or

(30:39):
is it something where we need to set the rules so stringently upfront that it can actually inhibit people from experimenting, or worse yet, push them to bypass the rules and do it clandestinely?

Kirk Borne (30:52):
This is an enormous challenge. I don't think there's a single podcast, or 100 podcasts, that can resolve this question. But all those things you're saying are absolutely serious and true. I think one of the ways we can deal with this is to realise that even in clinical trials, medical clinical trials, they're testing, for example, drugs and treatments, and

(31:15):
sometimes there'll be serious consequences. In fact, if you've ever heard any advertising for drugs, they always list all the possible side effects. So how did they learn there were these side effects? Well, they learned because during the clinical trials there were people who probably experienced those particular side effects. And so again, if we focus on this as an experiment,

(31:38):
then if you're going to involve people in the experiment, they have to be informed about what's going on; they have to be able to say yes or no, I want to participate in this, even though there might be risk. Now, of course, like you said, there can be clandestine things, where people are doing these AI implementations without that oversight, without that informed consent. And unfortunately, you need, you know, regulation to help

(32:02):
with that. But again, people can sidestep regulation, and I'm not going to try to get into all those issues about enforcement of regulations. What we need to do is have, again, a balance. In some cases, we've seen responsible AI documents in some quarters of the world that are hundreds of pages long, and it almost makes it impossible

(32:22):
to do anything with this. So we need to have more balance, and not just stop things in their tracks. And I'm going to go all the way back to what I said at the beginning: we've always used algorithms as humans; most of those were in our heads. So it's not like this is the first time we've ever used algorithms. But the scale is so large that we need these regulations. We just don't need to make them so heavy that we stop humanity in

(32:43):
its tracks.

YS Chi (32:45):
You know, among people who regulate, some are experts, but most of them are not, right? Particularly those who are elected officials and handle myriad different topics. How do we ensure that we can train those regulators or legislators to properly understand it, rather than making, you know, an overly protective process?

Kirk Borne (33:09):
Seriously, I think every person needs to have some kind of training on this, you know, not in-depth mathematical training necessarily, but certainly awareness training. Or literacy, maybe that's a better word; there needs to be a sort of AI literacy. So people are not necessarily being trained to do programming, or trained in mathematics, or trained in model building, necessarily, but

(33:30):
they need to understand the terminology, the implications, and the implementations that are beneficial. Seeing both the risk and the benefit, seeing the applications that have worked and the ones that haven't worked, that's part of building the literacy and being able to use the words in a sentence correctly. As I was saying earlier, the difference between machine learning, data science and AI is sometimes very

(33:52):
blurred for people. So I like to make sure people understand the words we're using before we start describing more serious things like regulations and biases and things like that.

YS Chi (34:06):
So you know, when I try to explain some new concept to people who are curious, I tend to use examples, right, a real case. Give us one case where you would be delighted to catch a congressman or senator in an elevator and give an example of how AI was used responsibly, and how good an impact it has had,

(34:29):
for those concerned?

Kirk Borne (34:31):
Well, if I had to do a one-minute elevator speech, I would pull out my smartphone, I would turn it on, I would stare at it, and it would automatically log me in. It would use my face. And in fact, this happens a million times a day for me and everyone else. It uses facial recognition to unlock my phone; my face is my password to unlock my phone. And

(34:54):
so facial recognition is a big hot topic for regulators, a big hot topic for a lot of ethicists. And I understand that, but at the same time, facial recognition helps me a million times a day when I don't have to type in my passcode on my phone every time I want to use it. So just use that example right there: I use my face, I use facial recognition AI software,

(35:14):
to do something very efficient for me, which is to unlock my phone and get me into my apps to do the things I need to do.

YS Chi (35:21):
A very live example. In fact, there is a saying that I learned: for every discovery of a scientific nature, there is the light and then there is the shadow. And the question is, how do we balance them so that the shadow does not overtake the light that comes from things like facial recognition? You

(35:43):
know, Kirk, I am so glad that we were able to have this conversation. Thank you so much for spending the time with us today and giving us such a very, very simple but insightful and direct view toward AI and its future. Thank you so much.

Kirk Borne (36:02):
You're welcome. It's my pleasure.

YS Chi (36:05):
Thank you to our listeners for tuning in. Don't forget to hit subscribe on your podcast app to get new episodes as soon as they're released. And thank you for listening. Please stay well.