
July 2, 2024 · 19 mins
THOMAS FREY IS ON TO TALK AI ETHICS

As the development of artificial intelligence whizzes along, it's time to come up with an ethical framework to use going forward, and our man Thomas has one right here. He joins me at 1 to discuss. Find him to speak at your event by clicking here.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
What we don't know is whether or not artificial intelligence has any real ethical framework. But luckily our futurist, and yours, Thomas Frey from the Da Vinci Institute and FuturistSpeaker.com, joins us to talk about that very issue. How are you doing, Thomas? I'm doing great. This is a topic that's very complicated, and we're going to have lots of debates moving

(00:26):
forward on this, I'm sure. Well, and we're talking about ethics for artificial intelligence, and here's the thing, and correct me if I'm wrong, Thomas. But it feels like the science of artificial intelligence, the ability of us to create artificial intelligence, is moving at an incredibly rapid clip, and it feels like, are we now playing catch-up by saying, oh, we need

(00:49):
an ethical framework that everybody kind of agrees upon to make sure that we don't go too far or use this in an unethical fashion, even though you and I both know it's going to be used in an unethical fashion. So are we playing catch-up? Or is this how technology works? We create something and then we figure out a regulatory framework to control it. Yeah, we are

(01:11):
playing catch-up, and very likely we'll create an ethical framework, and then we'll have to create ethics for the ethical framework. So it just seems like this will be a never-ending battle that we're contending with, fighting the AI too.

(01:32):
I don't know. We want it to be better than us, and that's hard to do. Well, you know, I think that it's going to be very interesting. And the part about AI that freaks me out and alternately intrigues me is the notion that there are bound to be significant differences between artificial intelligence and human beings, because we also have an emotional

(01:57):
component that is not necessarily going to exist, right, or maybe it is. I mean, are you building Data, meaning Data from Star Trek: The Next Generation, or what is that even going to look like? And how is that going to be different than human beings? Because one would argue that if you only used rational thought and took emotion out of it, you'd probably make much better decisions in life. Well, but some of it's for

(02:25):
actually helping us manage the emotions. Like, if you have an AI system that's designed to cure the loneliness problem for senior citizens that are home alone or that are in some nursing home somewhere, you've got

(02:46):
to deal with the emotions of the people and the people working with them. And so you can't leave the emotions totally out. But how do you inject it? I mean, I can't even clearly define the mechanism that creates an emotion. We know there are parts of the brain that are involved in that, but how do you convey emotion to AI? Okay,

(03:07):
we're getting off track here. Let's talk about the ethical framework. First of all, who would decide on the ethical framework for the world? Who would decide? Who are the deciders on this? Well, there's a list of different principles that are being bandied about right now, and in this column I

(03:28):
wrote recently, I put together eight of these principles and discuss them in broad terms. But invariably they're going to change. They're going to change over time, because I think as we get deeper into the weeds, as we start understanding how AI makes decisions, and not just this generation, but how the next ten generations are going to make decisions, we'll have to revise these

(03:54):
ethics rules along the way. I think it's a fascinating topic that we're going to have to address again and again and again. So you've set it out as a set of principles that could be the framework of ethical AI, and I think the first one is always the best, and that is transparency.

(04:14):
What are you talking about when you talk about transparency? Are you talking about basically open source, you know, everything, so everybody can poke into everybody else's business, or what does that mean, transparency? Well, let's use, for example, social media. They're using different things for content moderation. That should be

(04:35):
very transparent. We should be able to see how they're doing that. And that I think is a fairly straightforward example. We don't want them to blindside us by, oh, we just pulled this new rule out of the hat here, that we don't think you should be able to post on

(04:56):
our platform anymore, and then cancel your account. So I think that one's fairly straightforward. I think most people would agree with that.
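A minimal sketch of that transparency idea, assuming a hypothetical published rule catalog (the rule IDs and wording here are invented), is a moderation decision that must cite a rule the platform has already made public:

```python
# Hypothetical illustration: every moderation action must cite a published rule.
from dataclasses import dataclass

# Publicly documented rules (IDs and text are made up for this sketch).
PUBLISHED_RULES = {
    "R-101": "No impersonation of other users.",
    "R-204": "No posting of private personal information.",
}

@dataclass
class ModerationDecision:
    post_id: str
    action: str    # e.g. "remove", "label", "no_action"
    rule_id: str   # must reference a published rule
    rule_text: str # the public wording the user can read

def moderate(post_id: str, action: str, rule_id: str) -> ModerationDecision:
    # Refuse to act on a rule that was never published -- no rules "out of the hat".
    if rule_id not in PUBLISHED_RULES:
        raise ValueError(f"Rule {rule_id} is not in the published catalog")
    return ModerationDecision(post_id, action, rule_id, PUBLISHED_RULES[rule_id])

print(moderate("post-42", "remove", "R-204"))
```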
Yeah. Principle number two, accountability. This has been a big one as we've been wrangling with who is responsible for content that may be published on their websites and things of that nature. What does accountability in the AI realm mean? Well, it can

(05:20):
mean a lot of different things. But let's use an example of financial services. If you get approved or denied for a loan, you should know what the criteria are for how you got approved or how you got denied. And it needs to take all these socioeconomic factors in different cultures

(05:44):
into play, and so we need to be fair about all these things. So having it accountable, to make sure that everybody is accounted for, maybe that's a good way of looking at it.
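A minimal sketch of that kind of accountability, with invented criteria and thresholds, is a loan decision that always carries the reasons behind it:

```python
# Hypothetical illustration: a loan decision that must explain itself.
# The criteria, names, and thresholds here are invented for the sketch.

def decide_loan(income: float, debt: float, on_time_payments: int) -> dict:
    reasons = []
    if income <= 0:
        reasons.append("no verifiable income")
    if income > 0 and debt / income > 0.45:
        reasons.append("debt-to-income ratio above 45%")
    if on_time_payments < 12:
        reasons.append("fewer than 12 months of on-time payment history")

    return {
        "approved": not reasons,
        # The applicant sees the same criteria the system used -- nothing hidden.
        "reasons": reasons or ["met all published criteria"],
    }

print(decide_loan(income=52_000, debt=30_000, on_time_payments=8))
# -> {'approved': False, 'reasons': ['debt-to-income ratio above 45%',
#     'fewer than 12 months of on-time payment history']}
```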

(06:09):
Okay, principle number three, fairness. I think this is an interesting principle to be in there. But I think you're talking about fairness of opportunity, not necessarily fairness of outcome. Yeah, fairness could be applied to quite a few different things. As an example, if you're admitting students into a college, you should have some sort of a fair system that allows

(06:34):
people in different cultures to actually compete. It's the same thing as if you're getting a house loan, or if you're buying a car. All these things need to have this sense of fairness about them, and you need to

(06:55):
know what factors are being considered in gauging this sense of fairness.
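One simple way to surface what's being considered, sketched here with made-up numbers, is a demographic parity check that compares approval rates across groups (only one of several competing definitions of fairness):

```python
# Hypothetical illustration: compare approval rates across applicant groups.
# Demographic parity is one fairness notion among many; the data is invented.
from collections import defaultdict

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approved = defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok

for group in totals:
    rate = approved[group] / totals[group]
    print(f"{group}: approval rate {rate:.0%}")
# A large gap between groups is a signal to inspect which factors drive it.
```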
We're seeing, again, this is very tricky, especially because now we're seeing, and I've seen multiple news stories as of late, that AI as it exists now has been infested with the left-wing viewpoints of the people who programmed it. And

(07:16):
if you ask it to say something nice about a Democratic politician, it will. If you ask it to say something nice about Trump, it will beg off the question. And so we're already seeing that kind of bias baked into the early stages. How do we handle this? This is one of those tricky little things. Who gets to decide what is right and what is wrong? Because that should

(07:39):
be simple. But in the last election cycle, we saw what happened when the wrong people decided something was right or something was wrong and the American people were misled. So it all comes down to, who is the decider? You know, who is this? Is it a group? Is it a panel? Who is the decider? What I think has been really interesting on Twitter is to watch Community Notes kind of become the decider, and I like

(08:01):
that, because it's open source. Anybody can participate, and all they do is aggregate the community notes into one community note, say, okay, most posters say this, and then you can make a decision if you want to go with the majority or not.
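As described here, that's essentially majority aggregation (the actual Community Notes ranking is more involved than a raw majority); a toy version of the majority idea, with invented votes:

```python
# Toy illustration of majority aggregation over community notes (invented data).
from collections import Counter

votes = ["accurate", "misleading", "accurate", "accurate", "misleading"]

tally = Counter(votes)
label, count = tally.most_common(1)[0]
print(f"Majority view: {label} ({count} of {len(votes)} votes)")
# The reader can still see the full tally and decide whether to go along.
print(dict(tally))
```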
Yeah, well, that's certainly one way of doing it. When you think about all the different cultures that are involved,

(08:22):
I mean, there's hundreds of countries around the world, and how do you make sure there's fairness in every political system in the world? That's going to be extremely tricky. And I'm not sure we're there yet. I don't think we're even close. And so, yeah, you're

(08:43):
bringing up the right question: who gets to decide? Well, right now, it's a bunch of programmers in some back room that nobody knows who they are. I think this needs to be out in the open a bit more, and actually, yeah, I think it needs to be a much more transparent process

(09:03):
all the way around. I agree. I think true transparency solves a lot of these problems, right? If we all have the ability to poke our heads under the hood and look around, that's much more comforting to think about. But that brings me to number four, privacy. We willingly give away our privacy to any company whose app we download, or whatever. I mean, Thomas, we have become accustomed to not having any privacy online.

(09:26):
How do we change that with AI? Yeah, it's kind of tricky, because it's this bargain that they're making. If you want to have access to this data, you have to give up your privacy. Right? Well, if it's attractive, if there are enough things of interest to us, then

(09:52):
we give up our privacy to get that. But I think it needs to be more straightforward than that. As an example, if we go into a doctor and we get some work done, we need privacy about our medical history, our medical conditions. I don't think that should be open for public knowledge. And so how do we maintain that? How do

(10:18):
we make sure that that's under wraps the whole time? Again, these are very complicated topics, and we're not even close to getting there yet.
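One small piece of keeping it under wraps, sketched with hypothetical field names, is redacting identifying and medical detail from a record before it ever reaches an outside AI service:

```python
# Hypothetical illustration: redact sensitive fields before sending a record
# to an outside AI service. Field names are invented for this sketch.

SENSITIVE_FIELDS = {"name", "date_of_birth", "diagnosis", "medications"}

def redact(record: dict) -> dict:
    # Keep only fields that are safe to share; drop everything sensitive.
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

patient = {
    "name": "Jane Doe",
    "date_of_birth": "1961-04-02",
    "diagnosis": "hypertension",
    "medications": ["lisinopril"],
    "visit_reason": "routine checkup",
}

print(redact(patient))  # -> {'visit_reason': 'routine checkup'}
```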
Well, somebody just asked this question. I think it's a really good one. We were about to go through more principles, but I want to get to this question: Mandy, for your guest, isn't a big part of the question that AI is creating itself? AI will be able to engineer

(10:43):
more AI without human input, and would that make any ethical rules moot? AI is on the edge of being able to kind of create its own self, and then do our rules even apply? But I think all of this is programmed in somehow,

(11:05):
in the base, the core function, of the AI systems. So I think it needs to be. I don't think it can exist on its own and write its own rules. Then we really run into problems. We need some sort of human oversight on all of this. Well, number five

(11:31):
takes us to safety and security. What does that look like? Well, if we have autonomous drones that are flying around doing surveillance on everything, we're going to make sure that they're operating in safe areas to navigate

(11:52):
in. But then the amount of data that they're collecting, how dangerous is that? I mean, if we see some kids alone by themselves up in some remote area, does that mean that we need

(12:16):
to deploy people to get the kids rescued? Or are the kids safe there, but they just need to alert somebody that they're there? I mean, there's lots of issues that come up on every example that you give for these ethical issues. You want to make sure that we're covering all the

(12:39):
bases. And that's why I think having these principles, this is a starting point. This is not anywhere close to an endpoint, and I think this is what starts the discussion, it doesn't end it. Well, and principle number six kind of follows immediately from what you were just saying now, and that is human-centered values, and you talk about imbuing AI with the

(13:03):
ability to have empathy and sympathy and those things that make us uniquely human. But I want to read a question from my friend Ralph. He said, one item to grill Frey on is that we certainly don't have similar ethical frameworks at all. He's a Pollyanna in that regard. We will use AI-driven weapons systems. We will use AI to benefit ourselves, our families, our

(13:26):
tribe, our corporations, our nations, and the global UN-style ethical framework be damned. It's already a free-for-all. We mouth BS about ethics, but our intelligence and DoD communities, like the Chinese, Iranians, and North Koreans, don't give a rip about ethical frameworks. So is this a case of you're trying to create a framework that good and decent people in society would work

(13:50):
within, but then major players would just kind of go, you know, do we really need to follow that, and let the chips fall where they may? That's a really good point, because that's exactly what happens during a war. And we're going to have a lot more tools for fighting wars in the future, and so, yeah, I think all the ethics goes out

(14:13):
the window during times of war and conflict. That's probably exactly right. And that would kind of, can we, that would be imbuing it with your values? Yeah, I mean, you can't, because there are going to be people that are always going to find a way to exploit whatever we're doing. Right. So should we just give up on ethics totally then? No,

(14:41):
I don't think so. I think we need to come up with some principles here. Let me ask this: principle number seven, inclusivity. What does that mean? So, in a healthcare application, we should

(15:05):
include input from diverse patient groups to make sure that we have all the health concerns taken care of. It's well known that different ethnic groups have different health care issues, and so we need to include everybody in our healthcare topics

(15:26):
that we're working on. And so we can't leave some group out just because they're shorter than average, or they're a different skin color, or just because they have problems with certain diseases that the rest of us don't. So somehow we need to include everybody in that.
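A tiny sketch of acting on that inclusivity point, with invented group labels and counts, is auditing whether every group is represented in the data a health model is built from:

```python
# Hypothetical illustration: audit group representation in a health dataset.
# Group labels and counts are invented for this sketch.
from collections import Counter

records = ["group_a"] * 480 + ["group_b"] * 15 + ["group_c"] * 5

counts = Counter(records)
total = sum(counts.values())
for group, n in counts.items():
    share = n / total
    flag = "  <-- underrepresented" if share < 0.05 else ""
    print(f"{group}: {n} records ({share:.1%}){flag}")
# Groups below the threshold need more input before the model is trusted for them.
```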

(15:58):
Number eight, honesty and integrity. Yeah, so that's a tricky one, actually, because I was going to say, how do you teach a computer what integrity is? Which is, in my view, integrity is doing the right thing even when nobody else is paying attention. I mean, is that a clearly definable trait for AI at this present moment? No,

(16:26):
I don't think it is. And again, I think this is a goal. I don't think we're anywhere close to this yet. But, ah, I don't know. I think we need to disclose what the purpose of the AI is and how to avoid having it manipulated and exploited in different ways.

(16:53):
Uh, yeah. So somehow we need to have faith and confidence in what we're doing. If we put a prompt into an AI system, we should have some measure of comfort that we're going to get reasonable results from it, not that it's going to lie to us and tell us false

(17:17):
things and cause us to panic. I mean, this is what happens today. You go to a doctor, and the doctor misdiagnoses a person, and that patient can live in total panic for the next couple of weeks until they go to a different doctor and get it all resolved. That happens all the time. So we want this to be better than humans, and how do

(17:41):
we do that? Well, again, this is real tricky. No, it's completely tricky, and I don't know. The worst part is that we have to trust people, I guess, that I don't have a lot of confidence in, and that is our political leadership, to come up with some reasonable framework that they can then present to the rest of the world, if not work on the framework with the rest of the world. Is this something that is

(18:03):
being worked on, to your knowledge? Is there a body or an organization that is trying to put this together? To the best of my knowledge, the only things that are out there are some hodgepodge organizations inside of companies like OpenAI or Google or some of those. And I don't think that that constitutes

(18:29):
a reasonable approach. I think it needs to cross country lines, I think it needs to cross company lines, and I think we need to get input from the general public as well. And so this again would indicate that we have a long way to go in actually getting to something that constitutes

(18:52):
reasonable ethics for AI. Thomas Frey is our guest. He is our resident futurist, and if you need him to speak to an organization about any topic, or you just want to talk about the future, you can find him at futuristspeaker.com, and you can find the written-out version of what we just talked about linked on today's blog as well. Thomas, good to see you, my friend. Happy Fourth of July to you. Thank you. All

(19:18):
right, Thomas, you have a great Fourth as well. Thank you too. That's Thomas Frey, everybody.
