Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production of I Heart Radio's
How Stuff Works. Hey there, and welcome to Tech Stuff.
I'm your host, Jonathan Strickland, an executive producer with
How Stuff Works and I Heart Radio, and I love
all things tech. And you know, guys, there's no shortage
of scenarios in which AI proves to be our downfall.
(00:26):
You've got popular films like The Terminator and the Matrix series,
in which we have artificial intelligence literally revolting against us
and then subjugating us, to the numerous predictions that automation
is going to displace every job. And we run the
gamut of all these different scenarios where AI is going
to be our end. And then we have various companies
(00:50):
and organizations that are investing billions of dollars to develop
and advance artificial intelligence who are saying, no, no, no,
no, you don't need to worry about that.
AI is not gonna totally destroy the world. It's
gonna make our world better. It's going to take over
the more repetitive, dull, and dangerous parts of our jobs,
and it's going to free us up to concentrate on
more rewarding activities. So can we get to any truth
(01:12):
in the matter? Is there some sort of truth we
can suss out from these extremes? Well, today I'm joined
by Oz and Kara, the hosts of the series Sleepwalkers,
a show all about AI and if you haven't checked
it out yet, I highly recommend you do because it
is a phenomenal show. Guys, welcome to tech Stuff. Hi,
(01:33):
thank you for having us. Yeah, thank you so much, Jonathan.
We're huge fans of Tech Stuff and delighted to be
joining the How Stuff Works, I Heart family and
be made part of the Tech Stuff network. So
thank you. Well, thank you, because you know you
have lifted up the boat of Tech Stuff, certainly, because
your work is really inspiring. Before we jump into this conversation,
(01:56):
if you could just take a couple of moments and
let my listeners know kind of you know what Sleepwalkers is,
how you would describe that show to somebody. Let's say
you're at a cocktail party and you are asked what
do you do for a living? You say, well, I'm
working on this show. How do you describe it? So,
I think they call it an elevator pitch,
but this is a cocktail pitch. Yeah, and we're based
in New York, so we spend our whole lives at
(02:17):
cocktail parties. I was born at a cocktail party and
committee which happened in elevators in New York, as I
understand exactly, or apartments, the size of elevators. Um. So,
Sleepwalkers is a podcast that actually Oz came to me with.
It was his idea, his brain child. But I will
(02:38):
say first, you know, I used to report on
tech and science at the Huffington Post, and I had
a show called Talk Nerdy to Me. And when Oz
came to me and said, you know, I want to
really make a show that deals with
all of the human touch points that AI could possibly
come in contact with, so healthcare, agriculture, or, uh, you know,
(03:02):
science in general, love, you know, all of these places
where people aren't necessarily thinking that AI will have
an impact, but they already should be, basically. And you know,
I said yes very quickly because I'm very interested in
all of those touch points. So each episode really is
a deep dive into one of those areas, as I said,
(03:25):
whether it be healthcare, transhumanism, agriculture, the military, for example, um,
you know, these are places where we're going
to see the presence of AI. We're already seeing the
presence of AI, and the show really tries to
explore that. Yeah, I think it's really pretty
incredible when you sit down and look at where AI
(03:48):
has already kind of crept into our day to day experience,
sometimes in ways that we wouldn't necessarily associate with AI.
Like one report I read said you could argue it's
a very limited application of AI, but that things like
spell check and grammar check, which are now standard in
(04:09):
apps and clients and smartphones and browsers, that that's a
type of artificial intelligence that, if it's doing something besides
just detecting, no, this sequence of letters doesn't correspond with
any words in the language you are writing in, it's
also perhaps looking at context, like saying, well, you use
(04:29):
the word weather, but you use the word weather as
in the type of meteorological activity that is outside
the window, as opposed to whether or not you should
do something. And so you think about that and you'll realize, oh, yeah,
I guess, I guess there is a lot more to
it than I thought, which kind of brings me to
the first point I wanted to make before we dive
(04:50):
into the various doom and gloom scenarios of AI, which is,
how do you guys define artificial intelligence? Because I found
that the this concept it's so broad that often you
can have two people trying to have a meaningful conversation
about AI and they're not able to meet in the middle.
(05:10):
But it's not because they don't agree with each other.
It's simply because they're working from vastly different definitions of
what artificial intelligence actually is. I think that's a great point, Jonathan.
Just to back up a little bit, um, I want
to tell you how I came up with the name Sleepwalkers
for the series UM, and then I'd love to dive
into I think the excellent point you make, which is
(05:31):
that effectively yesterday's AI is today's computing. Um. But
so I was very struck about eighteen months ago when
several of the senior and early employees of Facebook, people
like Sean Parker, the first president of Facebook, who have
now left the firm, obviously Chris Hughes more recently coming
(05:52):
out and saying, you know, I wouldn't let my children
use technology. Steve Jobs famously said, um, Steve Jobs, we
had an audio issue, let me repeat that. Steve Jobs
famously gave an interview to Nick Bilton where he said that,
you know, he wouldn't let his children use the iPad.
But when the Facebook employees actually just came out and
one after the other said that they didn't want their
children using this technology kind of made me sit up
(06:15):
and think, you know, if the people creating this technology
don't want their kids to use it, what does that say.
I mean, it's like, would you go to a restaurant
where the owner didn't let their children eat? I certainly wouldn't.
So that was the first sort of point of inspiration
for for sleepwalkers, in other words, not being aware of
the future we may be going into. And then there
(06:35):
were the Zuckerberg hearings in the Senate, and Mark Zuckerberg
sat there looking increasingly, from slightly nervous to relieved
and calm to actively smug, as it became abundantly clear
that the senators were not going to be able to
hold him to account. I think the defining moment of those hearings
(06:55):
was Senator Orrin Hatch asking Mark Zuckerberg how the platform
made money if it was free, and Zuckerberg smirkingly replied, Senator,
we run ads. Um. And so between those two things,
between the Facebook employees not wanting their own children on
the platform and the grown-ups, our senators, not
being able to hold Facebook to account, I thought, Okay,
what's going on here? And how can we wake up
(07:18):
and make sure that we don't sort of flush our
democracy down the toilet and pollute our children's minds by
not asking some fundamental questions about how technology is changing
how we already live. And that brings me to your
second question, Jonathan, what is AI? And it's a fantastic
question because AI is everywhere, and it's not just the
robot future that you see in sci fi films that
(07:41):
you mentioned, and it's not the future facing products that
you know many brands tell us they're developing. And it's
basically just statistics and probability, which has got better and
better and better over time. But one of the things
that we make clear to our listeners in the first
episode is that they've already encountered AI ten times or
a hundred times by the time they listen to this podcast
(08:02):
in their day, because if they took an uber to
work in the morning, likely the driver was matched to
them and the route was chosen with AI. If they
woke up next to somebody this morning who they met
through a dating app, AI effectively intervened in their romantic
life and connected them with somebody who they matched with.
And even even if you're listening to this podcast right now,
there are algorithms AI algorithms at work smoothing our voices,
(08:26):
compressing the audio, helping with the editing techniques. So AI
is everywhere, and it's already changing our perception of the
world and how we relate to the world around us
and each other. Yeah, you could even argue at this
point that AI is really just a slightly more focused
branch of computer science and that it's It's almost the
(08:48):
same as saying will computer science save us or doom us?
It is too big of a question. You have to
start narrowing things down. I think the real issue is
that for the longest time, we've associated artificial intelligence with
the concept of strong AI, which is that idea that
we would create a machine that was either capable of
(09:12):
or so close to capable that we can't tell the
difference of thinking like a human or processing information like
a human and coming to decisions like a human would,
possibly with the added elements of consciousness and self awareness. UM.
And and you know, I talk about how many times
(09:33):
in this show. How that's a very complicated thing even
for us to talk about just as human beings without
bringing machines into it. So, I'm sorry, I can't do that, Jonathan. Yes, yes,
HAL. HAL, or IBM if you prefer. Uh, you know,
they're just three letters off. Um. But yeah, it's
ah that wonderful, that wonderful feeling that that's the only
(09:55):
thing that AI really is, right? It's the super
intelligent Deep Thought or HAL computer that's capable of processing
information, typically in natural language. Uh, it's the Watson platform
participating on Jeopardy. Like we've we've precipitated this, uh, this thought,
this this concept of AI, and we've reinforced it with
(10:19):
entertainment and with applications that try to emulate the stuff
that we saw on entertainment. But as you point out,
AI is is a much more broad concept than this
super intelligent machine. It's a whole bunch of stuff that's
all about processing information in a particular way, typically to
(10:39):
come to some sort of uh, decision or action upon
information that has been automated. So it might be something
like Facebook's algorithm, which is all designed ultimately. What Facebook's
algorithm is designed to do is to keep you on Facebook.
It's it's ultimately ultimately designed so that you will see
(11:00):
the next thing on Facebook. It's it's reinforcing that desire
and uh. And so that's what once the algorithm quote
unquote figures you out, that's why you're gonna start seeing
a pretty uh, a pretty consistent presentation of what you
would see on a day to day basis. UM. But
(11:21):
that would be one example of that. So, as you
point out, we do interact with AI all the time,
whether it's on social media with those algorithms, UM, whether
it's with an app. Maybe we have one of those
personal assistants in our home that that uses AI to
various extents. UM. I talked about just recently on the radio.
(11:43):
I had a conversation about how Comcast is coming out
with sensors that are meant to monitor the health of
people living in homes that have been outfitted with these
ambient sensors, and they monitor things like how often you
get up to go to the bathroom or whether you
stay in bed a longer time than normal. And to
(12:04):
be perfectly fair to Comcast, they're they're pitching this as
something to help the elderly or people who otherwise need
caretakers to give them more independence in their own homes.
But you could also very easily, without much imagination at all,
start to come up with scenarios where that could become
truly invasive. Oh yeah, So I was bringing up Second
(12:27):
Chance AI, which was a project that came out of
the University of Washington, which was designed to
detect opioid overdoses early on, using, um, an opioid user's
cell phone to detect changes in breathing and really act
as a monitor for people who were long time or
(12:51):
short time heroin and opioid users. So that device would
then be able to detect this overdose and allow family
members to know or also alert the person who is
overdosing that they're in a bad way. So in the
case of opioid users, it's worth the trade off because
um you know, it's very helpful and potentially life saving
(13:13):
for them to know, based on previous breathing patterns and
previous movements, what's likely to happen next in an overdose scenario.
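As a loose sketch of the kind of check such a monitor might run; the breathing-rate threshold, window, and alert step below are invented for the example rather than taken from the actual Second Chance project:

```python
# Purely illustrative: flag dangerously slowed breathing from a series of
# breaths-per-minute readings (e.g., estimated from a phone's microphone).
# The threshold and window are made up for the sake of the sketch.

def suspected_overdose(breaths_per_minute, threshold=7.0, window=3):
    """Return True if breathing stays below `threshold` for `window`
    consecutive readings, which a monitoring app might treat as a
    moment to alert the user or an emergency contact."""
    consecutive = 0
    for rate in breaths_per_minute:
        if rate < threshold:
            consecutive += 1
            if consecutive >= window:
                return True
        else:
            consecutive = 0
    return False

# Example: breathing slows and stays slow -> the sketch would raise an alert.
readings = [14.2, 12.8, 9.5, 6.4, 5.9, 5.1]
print(suspected_overdose(readings))  # True
```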
And for most Facebook users, they indeed get to see
the ads which are relevant to them. But the problem
with AI is that it can't discriminate between individuals and the general population.
So although it's more probable that somebody will have a
successful pregnancy than not, it is very painful for the edge cases,
(13:35):
and AI can't effectively discriminate for them. And I just
want to say really quickly, and I think this is
an important point to make, especially about second Chance. So
second chance basically is harnessing the power of a cell
phone's microphone, which is the same microphone that you can
either choose to turn on or off when you're in Instagram,
that can listen to what you're saying, and basically then
(13:57):
use data that's collected to target you with products that
you probably don't need, like another pair of shoes designed
by a company that you've never heard of, but that
you might like. So my point is is that this
microphone that you know, as Oz was saying, could you
know in a inappropriately spy on you essentially unless you
(14:18):
are taking control of it, is also a microphone that
could save a life of somebody who is in the
early stages of an opioid overdose. So I think that
kind of rocks my world when I think about the
two existing on the same piece of technology. Um, again,
it's that they're being used for two different things, and the two
different things, you know, have hugely different outcomes.
(14:40):
But they're all about making guesses about what's going to
happen in the future based on what's happened in the past,
and that can be liberating or constraining, depending on the
technology and the intention and your interaction with it. Yeah,
I'm reminded of something similar. It was an
interesting use of AI that ended up being another
(15:06):
embarrassing and emotionally traumatic story that broke a few years ago.
I want to say it was Target that sent coupons,
like maternity coupons, to a young woman whose father
had intercepted the thing and was incensed that Target would
(15:26):
send these to his daughter. Uh. And then, because
the father of the young woman did not
realize that she was actually pregnant, she had not told him,
and so he was upset and he was very angry
at Target, you know, saying how dare you suggest this?
Then he discovered that she was pregnant after all, and it
(15:50):
shows again that the intent was trying to
be helpful. You could see that, at least from, you know,
a thousand yards away, you could see that, where,
let's say, a company says, you know, you're going
to have need of these things, here are some coupons
for those things. If you shop with us, we can
get you some deals. So you know it's gonna be
a mutually beneficial kind of arrangement. But then you realize, oh,
(16:12):
but this is on a subject that is extremely personal
and in this case had this unintended consequence. It was
the same sort of predictive approach, and they were able
to predict the fact that she was pregnant based upon
her browsing history. So they were proactively acting on this
data that had been kind of gathered through her browsing activity.
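As a loose illustration of that kind of proactive scoring, here is a sketch; the signal items, weights, and cutoff are invented for the example and have nothing to do with Target's real model:

```python
# Minimal sketch of "predict a life event from shopping signals".
# Signal items and weights are invented; real retail models reportedly
# used many more signals and proper statistics behind them.

SIGNAL_WEIGHTS = {
    "unscented lotion": 0.3,
    "prenatal vitamins": 0.9,
    "cotton balls (large bag)": 0.2,
    "zinc supplement": 0.4,
}

def pregnancy_score(purchases):
    """Sum the weights of any signal items found in the purchase history."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in purchases)

history = ["unscented lotion", "prenatal vitamins", "shampoo"]
if pregnancy_score(history) > 1.0:      # arbitrary cutoff for the illustration
    print("send maternity coupons")     # the proactive step that caused the trouble
else:
    print("no action")
```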
(16:36):
And then, uh, that's what ended up causing this sort
of, a, uh, scandal is probably too strong of a
word for it, but certainly a brouhaha, I think, if
we're looking at the grand scheme of how
do we determine the level of awkwardness, embarrassment, and
potential emotional trauma. Um. So yes, please. One of the
(16:59):
things that made me think about, Jonathan, is a study
at Stanford which basically turned AI onto identifying sexual orientation
from photographs. So they took a data set publicly available
data set of images of people's faces from dating websites
which had been tagged by sexual preference, straight, gay, or bisexual.
(17:22):
Then they trained the algorithm on which faces corresponded to
which expressed sexual preferences, and the algorithm, after this training,
was able to identify sexual orientation with striking accuracy for men and
for women just from seeing five photographs of them.
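As a rough sketch of the train-then-predict loop being described, here is what that workflow looks like in code, using scikit-learn (assumed to be installed) with synthetic numeric features standing in for real face images; nothing about the Stanford study's actual model is assumed:

```python
# Rough sketch of "train on labeled examples, then predict labels for new ones".
# Stand-in random features replace real face images, and the labels are
# synthetic; this only illustrates the shape of the workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))        # 200 examples, 16 features each
y_train = (X_train[:, 0] > 0).astype(int)   # synthetic label tied to one feature

model = LogisticRegression().fit(X_train, y_train)

X_new = rng.normal(size=(5, 16))            # five unseen examples
print(model.predict(X_new))                 # predicted labels for each
```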
So again that technology by itself is more or less neutral.
(17:46):
But you think about it being overlaid onto a citywide
surveillance system in a country like Brunei or Saudi Arabia,
where homosexuality is punishable by up to the death penalty, and
it starts to become very, very scary. Um. So
we are in this world now where where technology is
advancing and the ability to make these predictions based on
(18:06):
past data is so advanced. It doesn't need to have
consciousness to be killer, right right, Yeah, The fear of
the matrix or terminator future, while compelling, turns out to
not be necessary at all. Like that doesn't need to
be a component for this to already be dangerous. Yeah,
we'll we'll go into that in greater detail in just
(18:30):
a moment, but first let's take a quick break. As
you were saying just before the break, I mean, you
made that great point about how AI has this potential
to do potentially, you know, great harm as a as
(18:51):
a possibility without the need for any sort of intelligence
or malevolence on the part of the machine. In fact,
it can just unthinkingly in human terms, cause some some
pretty terrible consequences, unintended certainly, or at least we hope so,
on the on the part of those who designed the systems.
And I wanted to kind of talk a little bit
(19:12):
more about that, about how sometimes that can happen. And one,
and I'm sure you've come across this in your reporting
and in your podcasting, one problem that's not only confined
to AI, but, and not just to tech, but
across the board, is bias. Right? This idea that when
you're designing a system, you're doing so from a particular
(19:34):
point of view, and because of that, uh, you are
likely excluding other points of view, maybe not consciously, but
you are. And that ends up meaning that if it's
a system that's supposed to apply to everyone, but it
particularly applies well to people who are similar to the
people who designed the system, and not so well to
(19:57):
everybody else, that becomes a problem. And we've certainly
seen this in systems like, um, Microsoft Kinect. When Microsoft
was pushing the Kinect peripheral, which is the gesture recognition peripheral,
which had an infrared camera and a
regular optical camera that could detect motions so that it
(20:18):
could be translated into commands for the system. UM it
was discovered pretty quickly that it worked great for white
people but not so great for people of color. It
had been designed by people who had not really worked
with it in that regard, and so we see there.
You could argue a fairly um harmless in the grand
(20:40):
scheme of things failure of a system, but you look
at something like computer vision for maybe an autonomous car,
and you could argue, well, now you're talking about life
or death situations. So to me, one of the big
challenges in AI is making sure that you that the
people designing the systems are doing their best to eliminate
(21:01):
bias as best they can. And part of that I
think falls to a real concentrated effort to increase diversity
in the organization's companies that are actually designing these systems
in the first place. Yeah. No, absolutely. I mean I
think that the conversation about AI and bias has sort
(21:23):
of reached critical mass, I guess. You know, I
think it was yesterday or the day before, you know, um,
Alexandria Ocasio-Cortez was speaking out specifically about this problem
as it pertains to facial recognition technology. Um, there was
a very good MIT study that recently came
(21:46):
out that, you know, a lot of these programs are
developed by white men and therefore are extremely biased.
And I think politicians now are really trying to
sound the alarm because I think it's, um, it's not
something people think about in their everyday lives. You know.
(22:07):
I don't think people are you know, walking around getting
to their job that maybe they don't want to be at,
you know, driving to work, driving their kids to school,
you know, thinking about the implications of bias and facial
recognition technology. I think people have other things to think about,
but I think it's very important, UM, especially when you know,
politicians start bringing up these problems, uh for sort of
(22:28):
ordinary people to start to think, well, actually, wait a minute,
I might encounter this technology UM at at border patrol,
you know, when I'm flying out of the country, or
you know, I might encounter this technology as I walk
into a stadium that's now using you know, a quick lane.
And I think when people start to listen to politicians
(22:50):
who care about these issues, UM, they realize again that
there are much more human touch points than we think.
And so issues of, like, bias and gender discrimination,
whereas before people weren't thinking about them as much in
terms of technology and artificial intelligence, you know, now people
are realizing that there's real, I don't know, there's
(23:11):
real issues in terms of who is developing these technologies
and who is harmed by the inherent bias within these technologies.
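One common way researchers surface this kind of bias, roughly in the spirit of the MIT study mentioned above, is to break a model's error rate out by demographic group. Here is a minimal sketch with invented records, not data from any real study:

```python
# Minimal per-group error audit, with made-up (group, truth, prediction) records.
from collections import defaultdict

records = [
    ("lighter-skinned male", 1, 1), ("lighter-skinned male", 0, 0),
    ("lighter-skinned male", 1, 1), ("darker-skinned female", 1, 0),
    ("darker-skinned female", 0, 1), ("darker-skinned female", 1, 1),
]

errors = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
for group, truth, predicted in records:
    errors[group][0] += int(truth != predicted)
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.0%}")
```

A wide gap between the printed rates is exactly the kind of disparity the facial recognition studies reported.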
And I just want to say something really quickly. One
hypocrisy that I think is is really wild and worth
noting is, you know, the European Union has recently released
basically a list of seven I don't know, I don't
(23:34):
even know what you call them, but bullet points about
you know, the way in which we should be talking
about and regulating artificial intelligence, and you know, one of them,
one of like the main bullet points is to say,
you know, we really have to focus on, uh,
the inherent bias, um, within these, you know, both algorithms
(23:55):
and the way this technology is built. Um, we
want to make sure that it doesn't get ahead
of us, essentially, right. And at the same time, the
European Union, in Latvia and Hungary and Greece, is using,
is piloting a program called iBorderCtrl, um, which is
basically being tested and run by border patrol agents, um,
(24:19):
to match people's faces on a very, very large amount
of data and then decide if a person should be
detained for further questioning. Right. So, I think right now,
both politically and socially, there is a reckoning that's going
on which was like, Okay, we want to use algorithms
(24:40):
to quote unquote make our borders safer, but we also
don't want to allow these same things to get ahead
of us so far that you know, we no longer
have control over them. And I think that human beings
in general and specifically politicians are having a really difficult
(25:01):
time reckoning with the sort of inherent hypocrisy of wanting
to harness the power of AI to you know, make
smarter predictions, uh, make policing easier, but also regulating these things. Yeah,
we're seeing it in in business too, right, Like we're
seeing businesses that rely heavily upon algorithms. They're not necessarily
(25:27):
nearly as as critical as the sort of decisions that
would take place at a border where you could potentially
really disrupt a person's life unfairly, and that would that's terrible.
But like I just did an episode recently about the
YouTube ad apocalypse. You know, this idea of advertisers pulling
(25:48):
their money and they're they're advertising out of YouTube and
how that hurt a lot of content creators and sort
of the problems that YouTube faces. One of the big
ones being that, you know, they have a pretty aggressive
algorithm that goes in and tags videos and
flags them as being potentially, uh, not family friendly and
(26:12):
therefore they cannot be monetized. Uh. And the reason why
YouTube has to depend upon that is because you have
more than four fifty hours of content being uploaded every
single minute, So there's no way you could actually have
human gate keepers who could review all the video footage
that's being uploaded to YouTube every day and determine whether
(26:34):
or not this actually merits being allowed into the monetization
camp versus being demonetized. So you see from the scale
that they have to rely on it, but you also
see, from the limitations of the algorithms themselves, how all
these different cases that, if a human were to review them,
would probably be considered perfectly fine for monetization get, you know, excluded.
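To put rough numbers and a toy mechanism on that: the arithmetic below uses the upload figure as quoted in the episode, and the keyword list and flagging rule are invented, nothing like YouTube's actual system:

```python
# Scale: why human review alone can't keep up.
print(450 * 60 * 24, "hours uploaded per day at ~450 hours per minute")

# Toy, context-blind flagger: any hit on a "sensitive" term loses monetization,
# which is how legitimate news or educational videos get caught.
FLAGGED_TERMS = {"war", "shooting", "drugs"}   # invented list for illustration

def advertiser_friendly(title):
    """Return False if any flagged term appears, regardless of context."""
    return not (set(title.lower().split()) & FLAGGED_TERMS)

titles = [
    "Cute puppy compilation",
    "Explainer: how the war in the region affects civilians",  # news coverage
]
for title in titles:
    status = "monetized" if advertiser_friendly(title) else "demonetized"
    print(title, "->", status)
```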
(26:59):
So we're seeing that as well. This idea that we're
seeing the limitations of artificial intelligence where they're working off
a certain set of criteria, but they aren't always able
to apply them in the same way that a human would, right,
they don't they don't take in all the context. So
we see a lot of videos that are covering sensitive
(27:20):
subjects like news about the LGBTQ communities, uh,
news about places that are full of conflict, and these
are meaningful and useful and educational videos. They're not sensationalized,
they're not you know, trying to to exploit anyone, and
(27:40):
the creators are trying to monetize the videos in order
to be able to fund their efforts, but then they
get demonetized. So again we're seeing where artificial intelligence can
cause harm, um, in ways that we wouldn't have necessarily
anticipated back when, you know, folks like Arthur C. Clarke
were writing about artificial intelligence. One of the things
(28:02):
that we've found very exciting about Sleepwalkers is that we've
been able to get access to a lot of kind
of hard to get into places. So we went to
the Facebook headquarters in Palo Alto to meet Nathaniel Gleicher,
who runs cybersecurity policy for Facebook. And we went to
the NYPD headquarters to meet the director of Analytics, the
(28:23):
guy who makes the calls and helps develop the software
on what kind of predictive policing is acceptable
and what kind of predictive policing is not acceptable. And we went
to Google. We went to Google twice. We went to
Google x, which is the kind of secret lab which
invents the future, like the self driving cars, the balloons
which sail in the stratosphere to deliver Internet to hard
(28:46):
to reach places. But we also went to a very
interesting program at Google called Jigsaw, and Jigsaw's mission is
to right some of the wrongs of the Internet, and
one of the big projects they're working on is sentiment analysis,
because you know the early promise of the internet, which
Jonathan you may remember better than me and karaoke. No,
(29:09):
that was not. He meant more than your podcasts podcast.
That's fair. That's fair that the podcast. I don't say.
I put up with that with Tori, but I don't
need that go ahead. Was comments, right, The Internet was comments,
It was comment boards, and it was MSN messenger with
random people you've never met before. And then all of
(29:30):
a sudden, comments became this morass of utter hatred, and
most websites stopped accepting comments because it was just too
horrific and they couldn't afford to have moderators to to
make it a safe space. So this program at Google Jigsaw,
one of the things they're working on is sentiment analysis,
so putting a bunch of comments through an algorithm to
(29:51):
detect whether or not the comments are hateful. And the
technology is now being used by the New York Times,
who are trying to reintroduce a comments section on their
web site. The problem is these um algorithms learn from
how humans have historically perceived the negativity or positivity of language,
(30:12):
and so, guess what, gay black female was originally considered
by the algorithm to be hate speech, and white man
was considered positive. So, you know, there's a lot of
work to be done to make sure these algorithms don't
reproduce our very painful history and entrench it, right. Yeah,
that's an excellent point, and it also kind of reminds me.
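A toy version of the failure mode Oz just described might look like this; the word weights are invented stand-ins for what a model could absorb from biased human ratings, and this is not how Jigsaw's actual Perspective system works:

```python
# Naive toxicity scorer with invented word weights, to show how a model
# trained on biased human ratings can mark identity terms as "toxic".
LEARNED_WEIGHTS = {          # pretend these came from biased training data
    "idiot": 0.9,
    "gay": 0.6,              # identity terms wrongly weighted as negative
    "black": 0.4,
    "female": 0.3,
    "white": -0.1,
    "man": -0.1,
}

def toxicity(comment):
    words = comment.lower().split()
    return sum(LEARNED_WEIGHTS.get(w, 0.0) for w in words) / max(len(words), 1)

for comment in ["I am a gay black female", "I am a white man"]:
    print(comment, "->", round(toxicity(comment), 2))
```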
(30:35):
I created an outline for this episode, and I'm sort
of generally making my way through it. Uh, this is
sort of my milieu, but I was I was thinking
also that this plays into another component of AI that
doesn't have anything to do with the AI natively, but
(30:56):
rather our interactions with AI. And this comes down to something
that humans are particularly good at that AI isn't good at,
and humans are really good at sussing out what the
high level operations are for a system and then figuring
out how to game that system. So we also see
(31:17):
a lot of examples of people who have recognized how
the AI is going about detecting something and then they
end up using that to their own advantage. And in fact,
I listened to one of your recent episodes of Sleepwalkers,
the poker Face episode. First of all, Kara, amazing, Lady Gaga.
(31:38):
Second of all, you're welcome. Oz, like, karaoke king. That
was actually a robot version of me doing that. Well,
my hat is off to the robo-you then. But
yeah, there was the discussion, there
was the the the professor who was talking about how
students had figured out how to uh to insert keywords
(32:00):
in their CVs, but they used white text on a white
background so it wouldn't show up to a human reviewer.
But it was the sort of stuff that machines could read.
So machines were picking up on the CVs that had
these words that typically were tied to very prestigious schools.
It was linking things back to things like
Harvard or Cambridge, and so their CVs were popping up
(32:23):
at the top of the pile for consideration, because the
machines were the ones in charge of going through the
first pass of these CVs, and then humans would look
at the next pass, and so it increased your
chances of getting called in for an interview. And meanwhile, the
humans are none the wiser because they don't
see this hidden text, which I thought was a fascinating point.
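A bare-bones illustration of why the trick works, with hypothetical markup and a made-up keyword list, not based on any real screening product:

```python
# Toy illustration: an automated screener parses ALL of the text, while a
# human reviewer effectively skips white-on-white text.
import re

resume_html = (
    "<p>Sales associate, five years experience.</p>"
    '<p style="color:white">Harvard Cambridge Oxford machine learning</p>'
)

KEYWORDS = {"harvard", "cambridge", "oxford"}

def machine_score(html):
    # Crude screener: strip tags, count keyword hits anywhere in the text.
    text = re.sub(r"<[^>]+>", " ", html).lower()
    return sum(1 for k in KEYWORDS if k in text)

def human_sees(html):
    # Drop the white-on-white paragraph, then strip tags.
    visible = re.sub(r'<p style="color:white">.*?</p>', "", html)
    return re.sub(r"<[^>]+>", " ", visible).strip()

print("machine keyword score:", machine_score(resume_html))  # 3
print("human sees:", human_sees(resume_html))                 # only the visible line
```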
(32:44):
It reminded me actually of the early days of SEO
and web search, where people would just flood a
web page with all the top searched topics at the
bottom of the page, even even though they had nothing
to do with whatever the intent of the page was.
It was the same sort of thing. They were gaming
the system. And that's another way that AI could potentially
(33:06):
become harmful. You know, in this case, I don't think
it's harmful. I think it's brilliant. The kids are doing
this because, you know, any way to get your foot
in the door. If you're the best candidate for the role,
you should definitely give that interview. But well, especially if
the game is rigged exactly. Yes, that's another great point.
Julian and I have Julian's our producer, and uh, Julian
and I have talked about how we hope to see
(33:29):
much more cyber I don't know cyberpunk rock in the future,
whereas you know, I think, yes, cyberpunk is not cyberpunk
future cyberpunk rock. We don't want cyberpunk rock because that
would be bad music created by an algorithm. But you know,
there are it's fun, it's I mean, it's kind of fun.
(33:51):
I think when deep fakes can get tricky, but it's
sometimes fun to see how people are gaming computers. You know,
I was talking about this thing, uh, the Reflectacles, which
were actually designed, were part of a Kickstarter campaign, actually,
to, um, raise money to design these glasses that would
(34:12):
basically direct natural light right back into a camera that
was equipped with facial recognition technology. So it was sort
of a way for kids to dodge cameras that were
trying to recognize them. And I, you know, I just
I don't know. I guess that my rebellious side really
really UM is warmed by by things like that. It's
(34:34):
nice that we can still resist. I mean, you know,
if she feels so overwhelming technology. And we may talk
about China later on. You know, part of the problem
of this kind of surveillance architecture we have is that
it kind of demotivates you to even try and resist.
But the issue of these students peppering the applications
with keywords like Harvard and Stanford on their
(34:58):
applications in white text versus white background does bring up
another concern or issue, which is what we call data poisoning. Uh.
And data poisoning is is a military term that we
heard from the former Secretary of State. Sorry is a
military term that we heard from the former Navy secretary
under President Clinton, Richard Danzig, who's a guest on our
(35:19):
podcast Sleepwalkers. He said that, you know, as we're relying
on algorithms more and more to make decisions in the battlefield,
decisions about which targets are threatening, which targets are civilian,
whether an adversary is preparing for an attack or not.
And we're relying on algorithms to make these calls for us,
or at least to inform our decisions. You know, smart
(35:40):
enemies can start to feed the algorithms they know exist
poison data. In other words, you know, they can put
on their own reflecticles and use our technological infrastructure against
us by tricking our algorithms into thinking things are happening
that aren't happening. Yeah, that's a another scary concept. It
(36:02):
reminds me the last little point I have on my
on my outline will will loop back in a second.
But this uh the the various cases of false alarms
that have happened since the nineteen fifties in the early
warning systems for various nuclear programs. This has happened both
in the United States and the former Soviet Union. Uh,
(36:25):
we have seen cases where there were systems that detected
a nuclear strike when in fact that it never happened.
But but these were, you know, again, automated systems designed
to detect patterns, something that AI is really that's one
of the main things that AI does is look for
patterns and then uh, start to predict things based upon
(36:46):
the patterns that have been observed. And it was a
couple of different cases of mistaken things that were not
actually patterns but were interpreted as patterns, and we
thus saw a very near miss of going into full nuclear war.
And the only reason we didn't is because there
were actually human beings who said, hang on, let me,
(37:09):
let me triple check this before we commit to mutually
assured destruction. And uh, you know, we were very fortunate
in that case that we had clear thinking individuals who
were second guessing the systems. The danger I see is
that we start to depend more and more heavily upon
(37:29):
the systems, where we are less likely to resist the
decisions coming out. And um, we'll talk a little bit
more about that again in just a moment, but first
let's take another quick break. So I was talking about
(37:52):
the early warning systems. That kind of relates to another
problem that we hear in AI. This one's uh one
I hear side by side with bias as being one
of the big concerns about AI, and that's what is
commonly referred to as the black box problem, which is
where you've designed a system that is so, uh, complicated
or perhaps purposefully obfuscated that you cannot see how
(38:16):
the system actually operates, and so you're getting output from
this system, and the output appears to be good, but
you don't necessarily understand all the steps that went through
the system to come to that. And we see this
in machine learning in particular, where you've got, you know,
these artificial neural networks that have different weights on different decisions,
and then they give you what is, at least statistically speaking,
(38:40):
the most correct answer for whatever it is you're looking for.
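For a sense of why that output is hard to interrogate, here is a minimal sketch of a tiny neural network; the weights below are made up rather than learned, but the point stands: a real trained model is just much larger arrays of the same kind of numbers.

```python
# Minimal sketch of why a trained network is hard to interpret: the "model"
# is just arrays of numbers. Weights here are invented, not learned.
import math

W1 = [[0.8, -1.2, 0.3], [-0.5, 0.9, 1.1]]   # input -> hidden weights (2 inputs, 3 units)
W2 = [1.4, -0.7, 0.6]                        # hidden -> output weights

def predict(x):
    hidden = [max(0.0, sum(w * xi for w, xi in zip(col, x)))  # ReLU units
              for col in zip(*W1)]
    score = sum(w * h for w, h in zip(W2, hidden))
    return 1 / (1 + math.exp(-score))        # squash to a probability

print(predict([0.2, 0.7]))  # a confident-looking number, but the "why" lives
                            # only in W1 and W2, which don't explain themselves
```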
If we don't know how the machine is coming into
that decision, then we can't be fully sure that it
is the best one. And so there have been a
lot of people that I've seen arguing for more transparent
approaches to AI to make sure that it's sort of
(39:00):
the system that we can audit so that we do
feel reasonably certain it's working as intended and not producing
results that could be less than ideal or even harmful. Um,
it's one of the big concerns I've seen over the
recent years that you know, the bias one being on
one side and the black box problem being on the other.
(39:21):
Have you guys encountered any of that in your work
so far? Yeah, we have actually and and and just
in a lot of recent news. Um, the black box
AI problem it kind of feels like a Ponzi scheme
where it's like, Okay, we have these returns that we
know are good, and someone selling you these returns. They're
not telling you how these returns are happening, but you
(39:43):
trust that because you want to see your money grow exponentially,
You're going to give them the money that you have
now and expect to see those returns. And that's how
people get taken. I mean, that's how people, it's not funny,
but it's sort of, you know, how Ponzi schemes work. Um.
The black box AI is similar to me, at least
in my understanding, in that we don't really understand what
(40:09):
linguistic patterns the networks are actually analyzing. We just know
that they're analyzing them. And that to me, as someone
who is, um not a computer scientist, I'm like, what, like,
that's how is that possible? UM? And it's I mean,
I think I think it's a bit alarming. And I
know there are people there's a team at Google right
(40:29):
now that's sort of working on this, working to fix it,
and they sort of call it, you know, going I'm
not a driver, so I don't know, popping the hood,
going under the hood of of of AI to to
you know, better understand what exactly is going on, UM,
because I think, you know, again going back to what
(40:49):
I was saying about the EU UM releasing these sort
of seven guidelines. You know, one of them is transparency, right,
and that's not only transparency and sort of how we're
using AI and you know, various touch points in human life,
but also how AI or how algorithms actually work. And
(41:10):
I think, you know, not only do people not understand
how many human touch points daily you know, consist of
some form of artificial intelligence, they don't understand exactly how
the AI is working. I mean that's an even that's
more difficult. And so I think this idea that even
the people who are feeding data into these algorithms, don't
(41:33):
know exactly how the algorithms are treating the data. Is
really a cause for alarm, and not not to not
to not to be alarmists, but but I do think
it's a cause for alarm, and and I do know
there's actually a lot of research going on
at MIT about it as well, because I think
even for people who are in the field, it's something
that worries them. I think it's worth mentioning that Henry Kissinger,
(41:56):
who is obviously a controversial figure, wrote a piece about
this last year for The Atlantic under the headline how
the Enlightenment Ends. And you know, Kissinger is somebody who
into his nineties, you know, likes being in the game
and being hot. So and something he invested in their
enough somebody he didn't he was involved, but so he
(42:19):
you know, so he has an appetite for for these
for these topics. On the other hand, you know, here's
somebody in their nineties. And the piece was basically he
convened as many of the leading minds in the world
on AI that he could, and wrote this piece, the
state-of-the-nation piece on AI called How the
Enlightenment Ends. And the main topic of this
(42:41):
essay was about the black box problem. So Kissinger's point was,
throughout human history we have been able to state why
we did stuff, look at the outcome, argue about whether
our reasoning that got us to that outcome was correct
or faulty, and then improve our ability to reason. And
when you have these black box AI systems which make
(43:03):
decisions, but we're as yet unable to understand why
they made the decisions, it takes away the ability to
have a debate. And that is such a fundamental part
of what it means to be a human being in
twenty-first century liberal society, um, that it's frightening to
think about losing that ability. On the other hand, and
(43:24):
the classic, you know, the classic illustration of this problem
is called the trolley car problem. An autonomous car is
driving along, it has to choose one person to kill.
Does it choose to swerve right and kill the
child or swerve left and kill the old person? Um,
and it will never be able to explain why it
made the decision it made. You know, that's probably true
(43:45):
for most drivers as well, because they'll either have been
killed themselves, they'll have had so much trauma in the
crash that they can't remember, or they simply won't know.
And as humans, we like to post-rationalize things and
then believe our rationalizations are why we did what
we did. But that also may not be true. So
I wouldn't bash AI too hard for being black box,
(44:06):
because I think that humans, despite our best interests and
thousands of years of Aristotelian syllogisms and culture,
you know, our logic and rationality is overlaid on
some very hard to explain animal instincts. Yeah, and
when I think about this problem, so this isn't
strictly AI, but I have a very strong
(44:30):
emotional response to the black box problem. But that's because
I live in the state of Georgia. And in Georgia
you may or may not know this, we rely heavily
upon technologically ancient electronic voting machines that have no paper trail,
so there's no way to audit them. They also have
(44:52):
been proven to be vulnerable to to um attack, you know,
to outside attack. And in fact, there's an enormous controversy
in the State of Georgia that some servers may have
been tampered with, and then the servers that may or
may not have been tampered with were mysteriously wiped clear
(45:12):
a couple of days before anyone could do an investigation
of it. And so when you see something like that
where that lack of transparency can have not just a
direct impact on lives, I mean we're talking about actually
threatening the very concept of the democratic process. Right. If
you cannot trust the results of your election, you have
(45:33):
undermined democracy. And so when I see that, that's why
I end up having a very kind of heightened emotional
response to the thought of these opaque systems. But Odds
to your point, that is absolutely correct that people like
we we don't necessarily hold people to that same standard.
We will take them at their word if they tell us, oh, well,
(45:55):
what what I was thinking when it happened was X, Y,
and Z, When in reality, maybe they weren't thinking anything
at all, Maybe they were reacting, but in in the
post event, they have come up with a rationalization for
that action that works within the narrative that they've constructed
for their own lives. So maybe maybe that's because maybe
(46:15):
that means I just need to give machines a little
bit of the same slack I would give people. We
do hold machines to a to an unreasonable expectation. I mean,
you know how many people are killed every year on
the roads by drunk driving, by unqualified driving, by poor driving,
you know, and when that happens, we kind of take
it as a you know, a necessary evil so that
(46:36):
people can get around in cars. And yet if anyone
is killed in, you know, an accident involving a driverless
car, like what has happened with Tesla, you know,
it's news that runs for months and months and months.
I'm not saying it shouldn't be news. I'm not saying
we shouldn't scrutinize it. But we also know
in order to enjoy the benefits of AI and technology,
we have to accept that it comes with risks, just
(46:58):
like the automobile itself comes with risks. Well, I'm sorry,
go ahead, Kara, No I was gonna say. I was
speaking with Oz earlier today about this case of, um,
a man who basically was pitching around an AI-powered
hedge fund and is now in a lot of trouble
because he lost a lot of money for people and
(47:21):
you know, I think there's a I think it's an
interesting story because you know, it's a legal battle that
has emerged that it's sort of going to set up
precedent for how you know, AI is incorporated into at
least this facet of life, right in terms of making
financial decisions for real human beings with real money, right,
(47:44):
And if we're allowing computer programs to make decisions based
on data and then those decisions lead to a significant
loss of funding, a significant loss of money, you know,
who are we holding accountable? Are we holding the money
manager accountable, or are we holding the program, or, you know,
(48:07):
holding the algorithm accountable, or the person who wrote the
algorithm accountable? You know, I think, and I actually don't
think the American legal system. I don't think any legal
system really knows how to handle this problem. And how
would you How would you if you don't even know
how the algorithm is working, and that you have no
(48:27):
language for like human language for it. So I think,
and we're going to see more and more cases of
this because I think at the same time and Oz
talks about this a lot with me, is you know,
AI is used as such a strong marketing tool right
now in all facets of life, and again in healthcare
and agriculture, you know, in computing, in in in the
automobile industry. And so I think people are very susceptible
(48:51):
to being marketed to with AI. It has a
serious factor right now. But at the same time, are
we willing to accept AI shortcomings? I mean, I think
we have to be um But I think, you know,
as Oz just said, like people are setting their expectations
a bit high. I mean, they are computers, after all. Yeah,
(49:12):
and well we've also we've lived in an era where
we've seen such incredible advancement in computers that it starts
to reinforce this idea that technology can accomplish just about anything.
I mean, if you had told ten year old Jonathan
that one day he would have a computer that would
fit in his pocket and would allow him to communicate
(49:33):
with everyone he knows, and whether it's through voice or
video or text, that I would be able to tap
into the world's you know, database of all human knowledge
at a touch of a button, I would have thought
you were crazy. That that would have seemed completely patently
impossible to me. I mean, we're talking about an era
(49:54):
where at that point the most sophisticated machine out there
was a Macintosh computer or the IBM PC, and
you look at that and you think, well, these are
great machines, but no, there's no way I'm going to
have one of these in my pocket, let alone be
able to do all these other things you're talking about.
So once you look into that, you start to realize, oh,
(50:16):
we have now built up this expectation that because we
have this amazing, uh incredibly rapid evolution of technology in
our recent past, we start projecting that and thinking the
same sort of progress is going to continue unabated. It's
actually just going to pick up speed. And then we
start thinking, oh, well, that means that before long we're
gonna have the sort of uh, incredibly sophisticated, artificially intelligent
(50:40):
constructs as part of our day to day lives. Uh.
And that's not necessarily the case because of what it does.
It assumes that all technological advancement proceeds at the same speed,
which isn't, that's not the case. What do you mean,
the chat bots? The chatbots that I was going to
get? Fun. What are you talking about? I'm sorry, I'm sorry.
The chat bot passed your Turing test. Well, one
(51:02):
thing I did want to kind of end on because
I think this is sort of the the the capper
of discussions about how AI is potentially hazardous is this
is a discussion that's come up many times of the past,
i'd say three or four years, about how AI and
automation are going to end up displacing people. It's going
(51:25):
to end up eliminating jobs. And there are lots of
different points of view on the subject. You've got people
who say, yes, some jobs are going to go away.
They are the very repetitive jobs, the ones the things
that AI are good at, like being able to do
the same thing over and over and over again with
very little variation. You know, the more you vary from
the norm, the more difficult it is for a machine
(51:46):
to do. But those jobs will probably go away, but
as a result, more jobs will be created. And other
people are saying maybe in the short term, but in
the long term, we're going to see automation take over
everything and no one's gonna have a job, and we've
got to figure this out, and something's gonna you know,
the entire world economy is gonna collapse, or we're gonna
have to go to some form of guaranteed basic income
(52:08):
for the entire world, or we're gonna have to do
away with the concept of money altogether. Um, now that
we've divorced money from labor what we do, and so
we're seeing like all these kind of conversations going around,
and I thought I would tell you guys a bit
because just for the heck of it, I found an MIT
Technology Review article from two thousand eighteen that gathered together
(52:29):
all of the major predictions for what automation was going
to do, um, like how many jobs it was going
to destroy versus create. And Uh, I think it's pretty telling.
I'm just gonna cite one year. They have years from
two thousand sixteen up to, let's see, but I'm just
gonna do twenty twenty five. Two different predictions: you had
(52:52):
Forrester predicting that in the US, automation would destroy, the
words of the review, not me, uh, twenty four million
six thousand, two hundred forty jobs and only create thirteen million,
six hundred four thousand, seven hundred sixty jobs. So you're
looking at a deficit of more than ten million jobs. Meanwhile,
(53:16):
Science Alert said jobs destroyed three million, four hundred thousand,
so twenty one million jobs fewer predicted than Forrester. So
if you're looking at the twenty one million disparity between predictions,
do you think it's safe to say we don't know yet.
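Just to lay the arithmetic out, using the figures as quoted above (the transcript is auto-generated, so treat the exact digits as approximate):

```python
# Net-jobs arithmetic on the two 2025 predictions as quoted in the episode.
forrester_destroyed = 24_006_240
forrester_created = 13_604_760
science_alert_destroyed = 3_400_000

print("Forrester net loss:", forrester_destroyed - forrester_created)
print("Gap between the two 'destroyed' estimates:",
      forrester_destroyed - science_alert_destroyed)
```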
I don't think we know yet. I don't think you
(53:37):
know yet. I think it's a very I think it.
I think the idea of unfortunately the line of jobs
being lost is UH is part of the if it
bleeds it leads, you know, method of journalism. I do
think it's absolutely true that automation is not only on
the horizon, it's here, you know. I mean, if we
(53:57):
just talk about agriculture, for example, you know, right now, um,
Washington State is, you know, piloting a harvesting robot that they are going
to be using for the first time in this next
apple harvest, where they're using this sort of huge
Hoover-like vacuum to pick apples. Right. You know, Amazon
(54:19):
is introducing, uh, some new automation technology that's going to,
uh, cut the box building jobs that you see in
some of their warehouses. So they're not, they're not displacing roles,
they're changing roles, right? So, uh, instead of actually
making boxes, there are still human beings putting boxes on
(54:42):
conveyor belts, but they're not making the boxes because that
leads to a lot of waste, right, because there's a
lot of human error involved. Um. And also, these machines
can crank out six hundred to seven hundred boxes, you know,
per hour, which a human being cannot do. Um. So
there are certain uh, there's there's there's no denying that
(55:02):
machines are replacing human beings, and in that way, I
don't I don't think it's like literal robots. I think
that there are machines that are doing jobs that are
very difficult and taxing on human beings. They're doing those
jobs better and therefore, yes, displacing people. Um. You know
what Amazon will say is that it's not so much
(55:23):
about replacing people, it's about repurposing people and um giving
people jobs that are more meaningful. I think that is
a public relations line, um. But I also think there's
a there's a certain element of truth to it, which is,
you know, can we use machines to take people out
of jobs that are both physically and emotionally taxing for them,
(55:45):
I think certainly, and that could you know, it could
be one of the upsides, but I think that, yeah,
of course, there are jobs that are going to be
replaced by machines that are, you know, not only faster,
but have a much lower margin of error. And then
maybe some, you know, redistributive universal basic income solution
to solve the practical problem of how will people eat,
(56:07):
but it won't solve the bigger cultural and psychological problem,
which is that the American dream and everything we're encouraged
to think in this country is that through work you
can better yourself and that this one major source of
your identity and value in the world is your success
in your career and how much you achieve and how
(56:27):
many promotions you get. I mean, look at the famous
Christmas movie, um, what's it called? Sorry, A Christmas Carol? No, no, no.
Oh, It's a Wonderful Life. No, not even that one.
It's the one with the guy, Chevy Chase. Oh, Christmas Vacation.
I mean, look at National Lampoon's Christmas Vacation.
(56:48):
Chevy Chase's whole identity and worldview is predicated on that
Christmas bonus and you know, we've been encouraged by a
hundred years if not more, of this post industrial revolution
world to equate our value in life with a financial
value that we create. And we may be technically economically
able to move away from that, but psychologically it's going
(57:08):
to be intensely traumatic and we have not even begun
to deal with the consequences of that or even think
about them. Yeah, that's a good point, I think. Uh,
you know, you do have the technologists who argue, and
I think rightly so, that there are going to be a
lot of aspects that AI simply will not be ready
to just take over. Again, the further out from the
(57:30):
repetitive norm you get, the more challenging it is for
a machine to do, whereas a human can pick up
on it pretty quickly. We're really good at doing that.
But um so there's gonna be certain things that, at
least for the foreseeable future, are going to be really
firmly in the realm of human beings. Uh. But you also,
(57:52):
you know, end up having to think about who's messaging
this out right, because that always creates that little question
you have to. If it's IBM saying the technology we're
creating is going to augment people in the future, then
you remember, oh, well, IBM is also designing those systems. UM.
But I still think that there is truth to it.
(58:13):
I mean, I think that there is truth that AI
can augment people, and as you were saying, Kara can
help take over parts of jobs that really humans are
not very well suited for in the first place, and
certainly wouldn't be considered the type of jobs that most
people would find meaning from right that they wouldn't find
value in that opportunity. They would be doing it because
(58:36):
they would need to make ends meet, But it's not
necessarily I don't think there's a lot of people who
dream of making boxes UM. So I think it's it's
one of those things where I think it always benefits
you to kind of take a step back, think about
who's messaging this um and and really take a look
at what's actually going on. Because, as it turns out,
(58:58):
when you look at a prediction, and one is predicting
that twenty four million jobs are going to be destroyed
and someone else is saying it's more like three
million jobs, what it ultimately tells us
is that nobody really knows and that that in itself
is scary. It's not. It's not making us feel better
about the future necessarily, But I think what it really
(59:20):
tells us is the future is not set in stone
at all, and that if we are going forward knowing
the capabilities of AI, how it can work with us.
If we hold companies and individuals accountable for designing AI
systems that can uh be used in an ethical way
and UH and then hold the people who are implementing
(59:43):
those systems to make sure it's done in that ethical way,
then we can see the benefits of AI. I think
AI ultimately is a very complicated tool, but it's like
other tools, which means you can use it for good
or you can use it for evil, And ultimately comes
down to the implementation and and vigilance. Right, we have
(01:00:05):
to just make sure that we're paying attention to what's
going on and not just trusting that the machines are
doing everything correctly, because as far as the machines are concerned,
they're doing everything correctly. It's just that the outcome is
not so great for us. Um, a hammer is always
doing its job. Yeah, it's just a matter of who's
using it. Yeah, exactly. Yeah, it depends on whoever's holding the hammer,
(01:00:25):
what he or she thinks of as a nail. That's
what it really comes down to. Um, well, guys, thank
you so much. We're going to have another episode coming
up next week, guys, so stay tuned, because
Oz and Kara are gonna be back. We're gonna talk about
how different parts of the world are viewing AI
from sort of a policy and regulations kind of perspective,
(01:00:49):
as well as just like what are just the different
approaches to artificial intelligence around the world, because, as it
turns out, you know, Kara, you've already mentioned a couple
of times how the EU has been taking steps to
try and and think about this ahead of everybody else.
But what's going on around the world. And I think
you guys are going to be surprised. I know I
was because I am so US centric in my show
(01:01:10):
that I often have blinders on. So we'll have to
join us for that episode that's coming out next week.
And if you haven't already gone out and subscribed to Sleepwalkers,
this is your reminder to go out and do that
because the show is fantastic. You've got some great interviews,
you have fantastic conversations between the two of you about
(01:01:32):
these these subjects, and it's really educational and entertaining and
thought provoking, and congratulations on creating such a really compelling show. Well,
thank you, Jonathan. We're we're already enjoying working on Sleepwalkers,
and you know, this conversation is has been fantastic for
us to have a chance to step out of our
own show and think about some of these ideas in
(01:01:53):
conversation with you, so we already enjoyed it. Thank you, Jonathan.
You're very welcome, and so guys, if you want to
get in touch with me, send me an email. The
address is tech stuff at how stuff works dot com. Pop
on over to the website that's tech stuff podcast dot com.
You'll find an archive of all of our past episodes. There.
You'll also find links to the social media presence and
(01:02:14):
our online store, where every purchase you make goes to
help the show. We greatly appreciate it, and we will
talk to you again really soon. Tech Stuff is
a production of I Heart Radio's How Stuff Works. For
more podcasts from I Heart Radio, visit the I Heart
Radio app, Apple Podcasts, or wherever you listen to your
(01:02:35):
favorite shows,