
July 25, 2018 52 mins

If you're like millions of other people, you hate making phone calls. "Why can't I just text," you might think, "and only call if there's an emergency?" Google may soon have a solution with Duplex, a new technology for conducting natural conversations to carry out “real world” tasks over the phone. For many this feels like a startling and innovative convenience -- no more awkward conversations making appointments or routine check-ins, right? Yet critics of this concept warn we may be approaching something much bigger, and much more dangerous, than a simple piece of helpful software. If computers begin effectively impersonating you, how will you be able to prove your own identity?

Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

They don't want you to read our book: https://static.macmillan.com/static/fib/stuff-you-should-read/

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
From UFOs to psychic powers and government conspiracies, history is
riddled with unexplained events. You can turn back now or
learn the stuff they don't want you to know. Hello,

(00:24):
and welcome back to the show. I'm Noel, standing in
for Matt with that line, because whenever Matt's not here,
Ben and I always look at each
other for a few minutes and realize that we're not
quite sure how to start the show. However, never fear,
Matt is off on a very special secret project that
we cannot wait to tell you about. We will have

(00:44):
to tell you sometime soon, so look forward to that.
No spoilers. In the meantime, they call me Ben. We
are joined with our super producer, Paul the Personal Digital
Assistant Decant. Yeah, PDA. But most importantly, you
are you. You are here, and that makes this stuff
they don't want you to know. We're, we're pretty excited

(01:05):
about today's episode. What did PDA stand for
when it was like a Palm Pilot? Was that Personal
Digital Assistant? Uh, yes, I believe so. Or was it, okay,
so there was also public display of affection. But they
used to call, like, BlackBerrys PDAs back before the advent
of the iPhone and the more tablet-like device. Personal

(01:26):
digital assistant. That's what it has to be. Isn't that
funny though, because that's kind of become a new thing,
because today we're talking about the kinds of personal digital
assistants that you can talk to and that can potentially talk back. Yes.
So let's look at some of the context. It's a,
it's a very common trope in science fiction: robots impersonating

(01:46):
human beings with increasing levels of fidelity, and we see
it in pop culture all the time. In some stories,
like in every episode of The Terminator, everything related
to the Terminator franchise, machine consciousness tries to mimic humanity
exclusively as a means of waging war, and in other
places, or other series, such as The Matrix, artificial intelligence

(02:08):
or AI attempts to surround us meat bags with an
impersonation of reality, complete with individual machine minds that can
pass for humans. And then in other cases, humans work
together to build technologies that can impersonate other human beings
in any number of ways, sometimes for sexy times reasons,
sometimes for just companionship. And we've seen that go awry.

(02:31):
I mean the Terminator, I think the whole point of
that series was that humans created Skynet or whatever
to serve their own purposes, and then Skynet said,
what up, humans, we're done with you, we're gonna do
our own thing. Or like in Westworld. And this goes,
like you said, in any number of ways, from increasingly
lifelike androids to entities that exist purely in the digital sphere,

(02:55):
able to hold genuine-seeming conversations and functioning at
or above what we would consider the average level of
human intelligence. As history has proven, science fiction is often prescient,
and it's not uncommon for authors to spin fantastic tales,
only for those tales to move, years or decades later, from

(03:18):
the realm of science fiction to the world of science fact.
And the quest for this humanlike technology in real life
is no different. Yeah, I mean, I actually saw a
YouTube video, a bunch of clips of times that science
fiction films have predicted technology that's totally a thing right now.
Like, um, Star Trek, for example, those little communicators,

(03:39):
they're basically iPhones. And remember that scene in Total Recall
where they're walking through a full body scanner at the
airport and you see like their skeletons and you can
see the weapons hiding and stuff. That's pretty much what
we do now at the airport. We gotta put our
hands over our heads and stand in those full body
scanners where hopefully they just draw a little cartoon of
us with the naughty bits scrubbed out. But in theory

(04:00):
they can see everything. So too is AI and the
kinds of things we're talking about today. Yeah, and the
concept of what we call artificial intelligence. Longtime listeners,
you may recall, as we use this phrase,
that some former guests of ours have objected to
the term artificial intelligence with a very good question: what

(04:21):
makes it artificial? What makes it, if we're talking about consciousness,
what makes it any less of a consciousness than our own? Right?
And we think that's a,
we think that's a very good and valid point. Uh, we tend to agree
with it. But for the sake of brevity, we're just
gonna go with AI as a non-pejorative thing. It's

(04:41):
just easier to say it that way. The concept of
AI has surprisingly old roots in our culture, especially if
we consider those ancient tales of non-human entities impersonating
human beings: the fairy stories of changelings switched out at birth,
or gods changing shape to breed or eat with animals
or people, or shape-shifters. In the twentieth century, the

(05:05):
concept of this artificial, inorganic thinking life form was popularized,
just like the point we made with science fiction through
culture and fiction, Like if you think of one of
the earliest artificial intelligences that blew up in the Western world,
it's the tin man in The Wizard of Oz. Yeah,
I never thought of him like that, but I guess
it's true. He's a good stand-in for that, because even

(05:26):
the heart that he gets at the end, spoiler alert
for The Wizard of Oz, is like a clockwork heart,
you know, it's like something to add to his mechanisms.
It's not actually a physical heart. So he's absolutely meant
to be like an automaton, which is kind of the
earliest form of robotics. Those go back to ancient times,
even, where you have these incredibly intricate creations that move

(05:48):
through a series of gears and pulleys and what have you,
but aren't necessarily imbued with any kind of, like, ability
to make choices. But there are some that can even,
like, play games. I think there's one that's like a writer.
The Mechanical Turk? Yeah, yeah, yeah. And they were lauded
for their ability to appear to do human-esque things, right,

(06:09):
but people generally did not think they had a soul,
for instance. And it kind of goes back to, you were
mentioning we had some folks who took
issue with the term artificial intelligence. We've also had
some folks write in taking issue with the
idea of machine learning, because, you know, it is a
matter of, we're still at a place where we have
to program machines to do what we want. We certainly

(06:32):
have yet to fully experience this, like, idea of
a singularity, where the machine takes what we've imbued it
with and develops its ability to go outside of that,
or, like, make decisions outside of the parameters that we've programmed.
Very rarely have we seen that, and when we do
see it, it typically gets shut down. Yeah, metacognition, right,

(06:52):
so, thinking about thinking, rather. So by the nineteen fifties,
scientists and mathematicians and philosophers were familiar with this
concept of quote unquote artificial intelligence, and our species began
to cognitively migrate from this world of fantasy toward a
world increasingly grounded in fact, and we can we can

(07:13):
hit some of the high points of artificial intelligence here.
In nineteen fifty, the famous code breaker Alan Turing made one
of the most significant early steps in real-life AI
when he and some of his colleagues created what we
now commonly call the Turing Test, named after him, of course.
He wrote this paper called Computing Machinery and Intelligence that

(07:35):
laid the groundwork both for the means of constructing AI
and for the ways in which we could measure the
intelligence of that AI or our success building it, which
might kind of be two different things. And sadly, well,
amazingly and sadly this line of thought was ahead of
the curve. Turing could not get right to work building

(07:58):
these human-like minds, or more specifically, he couldn't get
to work building minds that could fool humans into thinking
they were also human minds, because the technology at the
time had hard limits. Like, to the point you made,
Noel, about the Mechanical Turk. Up until nineteen forty-nine or so,
computers couldn't really store commands, we can say commands or

(08:22):
decisions here. They could only execute them and this meant
that the computers we could build at that time were
unable to satisfy a key prerequisite of intelligence. They couldn't
remember past events, past information, and therefore they could not
use this memory to inform present commands or decisions. And
in this way, computers began at that tabula rasa state,

(08:46):
that blank slate state that so many mystics, spiritualists, and
philosophers spend their lives attempting to attain. I think that's fascinating. Computers,
like people who practice hardcore meditation, existed mentally only
in the present. Yeah. And we'll get back to that
concept in a little bit in terms of um some

(09:07):
of the newer computers and how they are able to
quote unquote figure things out, some better than others. But
let's just go back to the fifties for a second,
when computers were insanely expensive. I think this is no secret.
To lease one, you couldn't even
own it, that wasn't a thing, ran you about two hundred
K a month, which I believe, Ben, in today's standards,

(09:29):
That would be about two million dollars for one month
of computer use, in two thousand eighteen dollars. A lot of Netflix subscriptions,
my friend. Um, and only, of course, the most prestigious
universities and huge technology corporations could even, you know, afford
the cost of entry. Um, so for Turing and his

(09:49):
ilk, um, to actually be able to succeed in building
something resembling anywhere close to artificial intelligence, they would have to
be part of some network of high-profile, very wealthy
and influential, um, funders of research. Yeah, ones that
they would be able to receive just an absolute boatload

(10:13):
of money from to get this kind of work done.
And imagine, you know, that's a hard sell. It seems
really interesting now, but back, yeah, yeah, but back then
it was literally walking up to people and saying, hey,
we're good at math, and do you remember the Tin
Man from The Wizard of Oz? We sort of want to
make that. Can we have all of the money? So
that's, that's tough, but they soldiered on. In nineteen fifty-five,

(10:35):
another groundbreaking event occurred. There was the premiere of a
program called the Logic Theorist. It was supposed to mimic
the problem-solving skills of a human being, and it
was funded by one of our favorite shady boogeymen, the
research and development corporation known today as RAND, the RAND Corporation.
That sounds like something out of a sci fi movie.

(10:56):
Doesn't it? And it's still around, um, doing
pretty interesting and secretive work, because I believe they ended
up having a relationship with the US. Oh yeah, the
government, big time. Absolutely, you're absolutely right. This Logic Theorist
is considered by many people to be, technically,
the first artificial intelligence program. And there was a big

(11:19):
conference in nineteen fifty six hosted by John McCarthy and
a guy named Marvin Minsky, and in this conference they
presented the Logic Theorist as a proof of concept. The
conference is called the Dartmouth Summer Research Project on Artificial Intelligence.
And you know, Noel, I thought of you with
this because I know how we both love acronyms.

(11:42):
So I feel like in this one the D would
be silent. I'm just gonna call it Surpai. That's great.
That sounds better, because the D, you know, you can't
really do a D-S sound. So the conference itself
fell short of the original very ambitious aims of
the organizers. They wanted to bring together the world's best

(12:03):
and brightest subject matter experts and, by God, make an
artificial mind in just a week or a weekend. Right, right,
they said it took God seven days. Let's, uh, what
do you say, we could do it in four? Or
something like that. But the problem was pretty much everybody
disagreed on how exactly you would make a human-like intelligence.
And at this point they're still thinking in terms of

(12:25):
artificial intelligence being like humans, which is a huge assumption.
But they unanimously agreed, for the very first time, on
a single crucial point: that it was possible to make
AI. And this set the stage for the next
two decades of research. So, from nineteen fifty-seven to
nineteen seventy-four, artificial intelligence, interest in it and research,

(12:49):
and it really flourished. Computers, you know,
they were improving by leaps and bounds. They could store
more information, which is crucial, and of course they became
faster and cheaper and more accessible. Um, machine learning
algorithms also began to improve, and people got better at
knowing which algorithm to use for their particular purposes, which

(13:09):
was also important because now, you know, there
was sort of an established, um, language of algorithms that
people could pick and choose from to suit their particular problems.
Early successes like the General Problem Solver. Um, Ben, tell
me a little bit more about the General Problem Solver.
It does what it says on the tin. You could
give it a variety of problems and, generally speaking, it

(13:30):
would attempt to solve them, not necessarily going to give
you the most creative solution. But yeah, and again, that's,
that's like an early-days thing. So at the time
it was very impressive, but of course it was just
the beginning. And you know, you had another example, I
think, of, uh, AI application in that time. Yeah, this

(13:52):
is when Alexa was first invented. Oh yes, yes, I'm kidding.
It's called ELIZA. Um, but this is like a spoken
language interpreter, which, I don't know, I wonder if Alexa
is a, is a nod to ELIZA. What do you
think, man? That would be pretty cool. Only a couple
of letters off. It was a spoken language interpreter, um,
that helped convince the government, like, okay, this is

(14:14):
something that we're into. And this is really important for
our story today. Yeah. It convinced the government, specifically DARPA
here in the United States, to start funding artificial intelligence. DARPA,
as you know, if you are a longtime listener of
this show, DARPA is the resident mad science department of

(14:36):
the United States government, and it stands for Defense Advanced
Research Projects Agency. They're the ones who do all the
X-Files-level or Fringe-level stuff you hear about
in the news, or often they're gonna be at the
forefront of it. And then there's also stuff like Matt's
favorite public-private partnership, Skunk Works. It's a good name,

(15:00):
I mean, if you love it for nothing other
than that. But they do all the secret Air Force
projects, and, you know, spy planes and all kinds of
stuff, the kind of technology you think about that is
probably years ahead of anything we've seen out there in
the world today. And it's mistaken for UFOs, right. Uh,
and this made people very, very optimistic, because that's

(15:23):
another very human thing that we haven't quite learned how
to program: optimism. And, uh, in nineteen seventy, Marvin Minsky,
that guy who co-hosted this conference, he told Life
magazine, from three to eight years, we will have a machine
with the general intelligence of an average human being. He
was wrong. But to your point about suppressed technology, Noel,

(15:46):
he's wrong so far as we know. It's interesting, though,
not to get too far ahead of ourselves, but
with everything that's going on with big public facing companies
like Google and Amazon and Apple and stuff, you start
to get a sense that maybe the secret government stuff
isn't quite as far ahead as it once was, right,
I mean, that's just my opinion. I don't know. I

(16:09):
think it's a it's a good opinion. Something we want
to hear from you about, ladies and gentlemen, because it's
proven that in terms of physical hardware, materiel of war,
weapons and aviation and stuff, it is proven that the
US and most other governments want to keep that stuff

(16:29):
under wraps absolutely. I guess what I'm saying is when
you're when you're looking at a company that's trying to
sell you something, and you see how far they advance
with each update every year, you kind of get a
sense that maybe this is about where they're actually at.
That's that's the point. So I bring up the aviation
stuff and the weapons of war stuff to contrast it here,
because what what we're seeing also is that the governments

(16:54):
of the world, this is true, the governments of the
world usually can't pay the best and brightest as much
as the private entities can. So they're grabbing some of
the best workers unless those people are severely ideological, and
then they're making most of the progress which they would
later sell to a government. So I think maybe there

(17:17):
is a little more transparency. I think so, which is
an interestingly positive thing. But let's
keep humming right along.
So the quest for humanlike artificial intelligence soldiered on, but
it still had tremendous obstacles. Although computers could now store information,
they couldn't store enough of it, and they could not

(17:37):
process it at a fast enough pace. So funding dwindled
until the eighties. And that's when AI experienced a renaissance,
which we'll get to after a word from our sponsor.
So in the eighties, new tools and methods gave

(18:00):
the field of AI that renaissance that Ben mentioned,
that kick in the digital pants. And a guy named
John Hopfield and another fellow named David Rumelhart popularized this
concept of deep learning, which is a technique that allowed
computers to learn using experience, using, um, paying attention to surroundings.

(18:24):
For example, the idea, the fact that our
phones are serving us ads because we're talking about stuff,
and it's taking that information in and using it to
do something and learning our, our habits, right. And this
is a really important part of things like these personal
digital assistants that we talked about at the top of
the show, right and on the other side of the

(18:47):
brain here too, or, I guess, in a parallel approach,
a guy named Edward Feigenbaum introduced expert systems that
mimicked the decision-making process of a human expert. So
what happens: the program will ask an expert in the
field, like it would ask super producer Paul or Richard

(19:07):
Feynman a question about production or physics, and then they
would say, how do you respond in this given situation?
And it would take this for every
situation it could soak up, and then non-experts could
later receive advice from that program about that information. It
sounds basic now. It sounds like how a search engine

(19:27):
can know, with increasingly accurate levels of fidelity, what question
you mean to ask when you ask it a question.
Remember when Google used to be super passive aggressive and
say did you mean? Well, now it just fills it
in for you, Google Instant, and the reason for that
is that it has access to every entry that anyone
has ever put into Google, and so it combines all

(19:49):
that information and makes the best guess as to what
it thinks you're probably searching for based on what everyone
else that has started searching for that kind of thing
has also done, which is that same deep learning stuff
that is coming into play, which makes predictive text so
funny. It can be, I guess, it depends on the
platform you're on, and we'll get to that a little bit too,
because sometimes it can be just really boneheaded. And haven't

(20:12):
you figured this out yet, Siri? Spoiler alert: no, he hasn't. Um,
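The "best guess" behavior described here, ranking completions by what everyone else has already searched, can be sketched in a few lines. This is a toy illustration of the idea, not Google's actual system; the query log and the `suggest` function are invented for the example:

```python
from collections import Counter

# Hypothetical query log -- a stand-in for the "every entry that
# anyone has ever put into Google" idea from the conversation.
QUERY_LOG = [
    "weather today", "weather tomorrow", "weather today",
    "westworld cast", "weather today", "westworld season 2",
]

def suggest(prefix: str, k: int = 3) -> list[str]:
    """Rank past queries starting with `prefix` by how often
    everyone else has searched them, most popular first."""
    counts = Counter(q for q in QUERY_LOG if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(suggest("we"))  # most frequently searched matches come first
```

Real systems layer personalization, recency, and language models on top, but frequency over a shared query log is the core of the "it combines all that information and makes the best guess" behavior.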
But this was all really put to the test in
the nineties and two thousands, when we really hit some
big landmark moments and high-water-mark moments in artificial intelligence,
like with Garry Kasparov, who I know that you
are fascinated with, although he is a problematic figure at times. Yeah, yeah, alleged

(20:35):
anti-semitism aside. Yeah. Chess grand masters, the human ones
at least, right, are, um, immensely, even fractally, compelling, because
there's always some other layer to their personality, and you
have to wonder about the purported correlation between mental instability

(20:57):
and high thresholds of intelligence, because well, that's a story
for another day. We should do that episode. Yeah, it might get
a little close to home for us, but we can do it. So, yeah,
as you were saying, he was defeated by Deep Blue, built
by IBM, a computer built just to play chess. This
was a John Henry moment for the human race. And

(21:20):
then a scientist named Cynthia Breazeal created Kismet, a program
capable of recognizing and displaying emotion. Now, for all our
techno-futurist philosophers in the crowd, does Kismet recognizing and displaying
emotion automatically mean that Kismet experiences emotion? Story for another day.

(21:43):
But these, uh, person-versus-program one-on-one matches
didn't stop with the game of chess. There was also Jeopardy, right?
And then there was one other one that had tremendous
implications for the world of programming. Yeah, and that was
much more recently, with Google's AlphaGo. Um, Go

(22:06):
as in the ancient Chinese strategy game of Go, and
it successfully defeated a Go champion, Ke Jie. And Go
is a notoriously complex game, very difficult to predict, and
you always have to be many, many moves ahead of
your opponent. So that's pretty cool. That's almost a step further.

(22:28):
Wouldn't you say, Ben, that Go would be more challenging
for a computer to beat a human opponent than chess?
Even like this is like a real serious leap forward.
I would absolutely agree. And a lot of people were
even more skeptical watching that game, people who understood Go. We

(22:49):
can also go out there and say, Paul, I don't
know if you're familiar with this, but are you, are
you a Go enthusiast? Do you play this game? No, not really, Noel,
I've never played it. I remember it from the movie Pi. Remember? Yeah,
the one, one of Darren Aronofsky's first movies, which people
kind of crap on. But I enjoyed it a lot

(23:10):
when I was young anyway. But that movie is about
a kind of a mad computer scientist who realizes
that the Kabbalah contains some kind of code involving
the number pi, and goes down a crazy
rabbit hole of insanity and paranoia. But Go is a
recurring game in that movie, implying that it's all about

(23:35):
high-level, very, very high-level thinking. Yeah, I think
that's, I think that's excellent. You know, Pi is a
fantastic but jarring film, you know. And so we see
a common trend in these, again, we'll call them John
Henry moments. And people generally know the story of John
Henry, right? Now, you'd think they do. I know he

(23:55):
was a steel-driving man. Give me some more. Well,
John Henry, for anyone who is outside of the US, and
I don't think we know for sure how common
this story is in the States anymore, he was, in folklore,
an African American steel-driving man, as Noel said. That's
absolutely true. And his job was to hammer a steel

(24:18):
drill into rock to make holes for explosives, right, and
they would blast rock constructing a railroad tunnel. And in
the legend he was pitted against a new fangled steam
powered hammer. It was a man against machine race. And
the historical aspect of whether this did or didn't happen

(24:42):
is something people go back and forth on. But
what we're experiencing now with AI is a series of
increasingly high-stakes John Henry moments. Ke Jie, as we said, Garry
Kasparov. Was it, was it Ken Jennings who played Deep Blue? Yeah,
it was, our own buddy, colleague of Omnibus fame, and

(25:04):
then other things. Yeah, he does that, yeah, yeah, yeah,
in addition to, you know, his big tentpole item, which
is hanging out with us at HowStuffWorks. Right, so
now we are on the cusp of a world that
is both brave, to quote, Watson? Watson. Watson. I said

(25:27):
Deep Blue, and that's the chess, with the chess one.
Watson was the quiz, the quiz bot, also by IBM
though, right? Okay, yes. So Ken Jennings versus Watson, Kasparov
versus Deep Blue. And now we are on the
cusp of a new world that is both brave, to
quote Aldous Huxley, and strange. The average citizen in a

(25:51):
developed country interacts with some form of artificial intelligence on
an increasingly frequent basis, even if it's indirect. And think
about it, when is the last time you called a
large company and didn't first go through an automated line,
a rough impersonation of a conversation, with a computer? I
just always start mashing zero right away just to bypass

(26:14):
them, because it's so annoying, because it's like, if
I thought that, uh, that a computer simulation could help
me solve the problem, I probably would have just done
this online. The fact that I'm calling the company in
the first place. Not to derail it with my
get-off-my-lawn moment. Please do, seriously. It's like
they're never that helpful. You always have to repeat yourself
over and over again, and you usually do not get

(26:36):
the result you're looking for. Um, which could change with
some of the new technology we were talking about today,
which I think we can get into right now. Possibly
maybe, maybe it will change. Because one of the big
tasks for artificial intelligence right now, what we use
these applications for now, is big data. We have built

(26:58):
so many ways to scoop up information that our poor
old primate brains, which are built for foraging and living
in small bands and forests, cannot analyze and process all
this stuff ourselves, so we built machines and codes, in
this case, codes could just be standing for lines of
thought, to decipher all this data for us. And the

(27:18):
application of artificial intelligence in this regard has already done
amazing things in industries like technology, banking, marketing, entertainment. It's
automation of, like, sweeping through giant sets of data so
that humans don't have to do it, and using particular,
um, what's the word, algorithms, I guess, is one way
of putting it, or just rules saying, look for this

(27:39):
thing in this set of data, parse out this file,
if-then, whatever. So that's kind of been the primary
use of that up to this point in all of
those things you said. So, um, what's next? Right, right.
Because right now it doesn't matter whether the algorithms
improve much, it doesn't matter whether there is a sea change.
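The "look for this thing in this set of data, if-then, whatever" rules described here are the simplest form of that automation: a program sweeps a data set a human couldn't read and keeps only what matches. A minimal sketch, with records and a rule invented purely for illustration:

```python
# Hypothetical records -- a tiny stand-in for a giant data set
# that no human would want to sweep through by hand.
records = [
    {"type": "transaction", "amount": 12000},
    {"type": "login", "amount": 0},
    {"type": "transaction", "amount": 50},
]

# An "if this, then that" rule: flag large transactions for review.
def rule(record: dict) -> bool:
    return record["type"] == "transaction" and record["amount"] > 10000

# Sweep the whole set and keep only what the rule matches.
flagged = [r for r in records if rule(r)]
print(flagged)
```

Scaled up to billions of records, this filter-and-flag pattern is the "vacuum cleaner" brute-force approach mentioned next, long before anything resembling learning enters the picture.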

(28:00):
The basic concept is there. And this massive computing approach,
this vacuum-cleaner approach, for instance, that the NSA
uses, just allows artificial intelligence to learn through brute force.
In the future, we're going to see more variation in AI.
We'll see autonomous vehicles, we'll see predictive ordering services, which

(28:20):
I know most of us will hate, and soon it
will seem strange not to have some sort of artificial
intelligence existing in some aspect of your everyday life. But
today's question, and a spooky one, is how close to
human can these programs actually get? And I think we'll
get to that, um, after just one more quick little

(28:43):
sponsor break. Here's where it gets crazy. Hi, um, I'd
like to reserve a table for Wednesday the seventh. For seven people? Um,

(29:05):
it's for four people. Four people, when? Um, next Wednesday
at six p.m. Oh, actually, we reserve for, like, five,
like, upwards of five people. For four people, you can come.
How long is the wait usually to be seated? When,

(29:26):
tomorrow or weekend? For next Wednesday, the seventh. Oh no,
it's not too busy. You can come for four people. Okay,
oh, I gotcha. Thanks, bye-bye. Now that,
that didn't sound like anything at all. That was
just a conversation between a young man and the proprietor

(29:48):
of a, of a Chinese restaurant, right? And who,
who ever has a great time making restaurant reservations?
You know, that's why we have OpenTable and all
of that, am I right, Ben? I, like many people
in our generation, hate talking on the phone. It's true. Um,
there's even a term for that. I think it was
telephonophobia? Yes, yes, that's a good one. But

(30:10):
now, yeah, when I found that one, I
was like, can you just
make anything a phobia? And you can. I think
it's very American English. Now I have microphonophobia,
which is weird considering that I sit in front of
one all the time. Right now, I'm in utter terror.
But no, I was being coy. That clip was not
between two humans. I think you can probably guess which

(30:31):
side of the conversation was the non human, because if
you listen back to it, I think you'll notice a
couple of tells um and that there was a few
There were a few kind of wrenches thrown into that
that exchange where the person was trying to call to
make a reservation, and there was some confusion from the

(30:51):
person on the other end about how many, about what
day they wanted the reservation to be, how many people
were in the party, and it ultimately ended up with, well,
you don't really need a reservation. So the person on
the other end of the phone was not equipped. It was
not something they would typically do. Reservations seemed like not
really a thing in this restaurant, and so the caller
kind of repeated himself, specific things like the specific time

(31:16):
and day, and then there was a momentary pause when
they said, we, we only do reservations for groups of
five or more. Gotcha. Which still sounded very human. It did.
It was not, though. It was a conversation between an
unsuspecting restaurant employee, as you mentioned, and a computer program.
This is an example of Google's Duplex system, a personal

(31:40):
assistant designed to make users lives easier by handling standard
phone calls, so like doctors, appointments, reservations, and so on,
breaking up with a loved one without troubling you or
possibly even letting the person on the other end of
the phone know that it's not actually you making the call. Yeah,
And we get into all kinds of ethical quandaries with

(32:02):
this that we'll get into. First and foremost, though,
I think it's interesting that the idea, it's, it's inherently tricksy,
the whole, the whole affair, right. The idea is to
not inconvenience either party. But you are, you are genuinely
kind of tricking someone into believing they're talking to a person. Yes,
and you're giving up a lot of stuff. So at

(32:22):
first this seems amazing because, like we established earlier, telephonophobia,
in addition to being really fun to say,
is a genuine thing. And I would argue increasingly people
are resorting to text. I mean, how often do you
talk on the phone if you're not talking to like
your mom or dad, like on a preappointed time. I mean,

(32:43):
if I'm doing interviews for a podcast, but that's a
very specific task, not for fun, never
for fun. And unless I just feel like I could
knock something out a lot quicker on the phone, real
quick. That's usually for business, though, for work stuff,
or conference calls, the dreaded conference call. That's
also a work thing that still happens. But yeah, for fun,

(33:04):
not very often, um, you know, but for other people
in the mix. There's totally a dark side to this
idea of the super convenient, amazing idea of my phone
being able to make calls for me and set up
my you know, my waxing appointment or whatever. Because the
implications are incredibly far reaching, aren't they, ben Yeah, they

(33:27):
are, Noel. It's not just an
automated phone line saying press one if you'd like to
make a payment, press two if you'd like to hear a mailing
address. It's not just impersonating a generic human now,
it's a program capable of impersonating specific humans, namely you

(33:50):
capital Y-O-U, you specifically, you listening to this.
And it's got a lot of
opponents already. Now, we're not saying that it mimics
your voice. That's not what this is implying. Not yet,
not yet. But the idea is that it
has access to all your information. It has access to
your date books and your phone numbers, and and knows

(34:13):
everything about you because it's tied into this little thing
you carry around with you that is basically your life
in a wallet form, you know. More specifically with Google,
uh, this initial fear increases, or the potential for
misuse increases. This journalist, who wrote an excellent article,
her name's Alex Cranz. To her, the dangers are threefold. First,

(34:36):
similar to what you're proposing. All of this is a
program from Google, from Alphabet, and Google already knows a
ton of personal information about you, whether or not you
use an Android phone. This means that Duplex doesn't just
know what time you want to meet your Tinder date
at that tapas place. It also potentially knows every other

(34:57):
single thing Google knows about your life. And Google is
so widespread that you do not have to have a
Google account for this to touch you. Second, a criminal,
a government stalker, a prankster, some internet troll could potentially trick
Duplex into giving up your information in a phone call.

(35:18):
Imagine Duplex somehow being fooled into thinking it's calling a
restaurant to make a delivery order, and then boom, somebody
has your credit card information, expiration date, and security code.
There are actually a whole bunch of great
comments in the comment section on this Gizmodo
article by Alex Cranz, and one of them was the

(35:39):
idea of, like how this technology could be used by
this term I love bad actors, you know, So for example,
like what if you had a pizza place that wanted
to use this kind of technology to flood a rival pizza
company with fake calls and keep their phones tied up
and, I don't know, that was a particular

(35:59):
and sort of far-fetched one, but the idea, the implications
being that you could use this for scamming easily, big time,
especially with older folks that maybe aren't going to key
into the fact that this is not a real person,
and you could possibly automate and scale scamming in a
way that would far surpass the way it's done now,

(36:21):
because even the least tech savvy people can tell that
you're hearing an automated voice, you know, spamming you to
try to, you know, sign you up for a cruise or
something like that. Absolutely. And to follow up on
Cranz, because you said it was threefold, so to
follow up on the third thing, which I found
perhaps the most disturbing here. No, forget perhaps, it is

(36:45):
the most disturbing thing. A person could hijack your Duplex
account and essentially function as you until such a point
as they're caught. So imagine learning your credit and savings
have been wiped out because your Duplex called in a
series of untraceable transfers to some Caribbean island, some offshore account.

(37:06):
There's nothing that anyone could do because legally that thing
would be functioning as you. Cranz and other critics of
this technology are also concerned that at the initial presentation,
Google billed this as another neato feature, and they didn't
say anything about privacy concerns. It was very much,
look what we can do, not, hey, should we? Just

(37:27):
like, it's like that line from Jurassic Park. Your
scientists were spending so much time thinking whether they could,
they never thought if they should. And crime finds a way.
It does. It does. But, you know, you
may think this is alarmist, but it's just like, we're
just a hop, skip, and a jump away from
this stuff. And obviously this technology is not ready for market. No,

(37:48):
no one's saying that this is just kind of like
a neato party trick they did at their I/O developer conference.
But yeah, you're right, because if it's literally
the voice inside your phone, then it's going to have
access to all the data that's in your phone. And
depending on how careful you are with this stuff, it
could have every bit of identifying information that a human

(38:09):
banker would ask you to give to confirm this crazy
wire transfer that you want to do. What do they
ask you for, Ben? They ask you for, like, the
last four digits of your Social Security number, your mailing address,
your name, maybe your mother's maiden name, or some secret
word questions. It's gonna be on your phone in some way, shape
or form, likely, and many people email themselves those answers

(38:31):
so that when they're online they can check back, including passwords.
So it's a very important point to make that
this Duplex stuff has not been assigned any kind
of rollout date. People are still speculating just how close
or how far Google is from making this technology viable
outside of testing situations and later available to the public.

(38:52):
But the problem is this is not the only game
in town. Sure, audio impersonation is spooky, but what about video?
Do you remember back during the hunt for Osama bin
Laden, when various people were arguing that the bin Laden
propaganda videos weren't the real guy, but someone else
impersonating him, or something like that? Or like a Snapchat filter. Right,

(39:13):
he's kidding, but I'm not because all of the crazy
tech blows my mind. How much research and development they
put into the Snapchat filters, And the more you see them,
the more realistic they get, and the more I could
see them turning into some pretty nefarious ways of mapping
people's faces and you know, making you look like someone
else entirely and making it not just a silly mustache
but a totally believable doppelgänger. Yeah. And we've seen, look,

(39:38):
visual manipulation is a tale as old as journalism. We've
seen doctored photos aplenty purporting to be evidence of UFOs,
giant skeletons, ghosts. And we as a species have known for
a long time that photos can be faked. Someone with
a handy knack for photoshop, Paul can really work wonders,

(39:59):
and nowadays we've gone beyond the touch-ups of
still images. Have you seen that doctored Stuff
They Don't Want You To Know logo? There's something
fishy going on with that hand. Suspicious indeed. But again,
no spoilers, right? So, uh, if you're wondering, fans of
Westworld, yes, that maze is for you. As The

(40:20):
Verge reported in July of last year, and we're recording
this in July, researchers at the University of Washington invented a
tool that takes audio files, converts them to realistic mouth movements,
and then grafts those onto existing video. The end result
of this is someone saying something or appearing to say
something that they never actually said. And the scary part

(40:42):
is, it's really convincing. It's not, like, meme-level GIF
funny. A lot of their earlier examples used footage of
former President Barack Obama. Did you see this stuff? Well, dude,
there, I mean, and I don't mean to keep harping on this,
but there was a Barack Obama Snapchat filter, and it
does have that uncanny valley look. But first of all,

(41:02):
problematic for a lot of reasons to put on the
visage of someone that's a different race than you and
play a character, and, you know, Snapchat users were
using it. It was literally a Snapchat filter. There's also
another one that was Bob Marley, you know. But
it's pretty damn realistic, other than things
they probably did on purpose to make it less realistic.

(41:25):
But that kind of mapping quality that you're talking about,
the implications there are nutty. Yeah. And luckily, for everybody who
just got spooked out, we have some good news here. First,
they didn't just choose Barack Obama because they had some
sort of ideological thing or they were like we like
this guy, we hate this guy, or anything. They did

(41:46):
it because a high profile individual like a celebrity or
president will have a ton more high quality video and
audio footage to pull from. And also, this is this
thing takes uh, it's a huge wish in process. They
had to have at least seventeen hours of footage just
to get started and they say, so there's another neural

(42:07):
net kind of learning. Exactly, learning stuff. So this is
a problem. They say that their goals are wholly good-hearted.
I've got a quote here. The team behind the work
says they hope it could be used to improve video
chat tools like Skype. Users could collect footage of themselves speaking,
use it to train the software, and then when

(42:27):
they need to talk to someone, video on their side
would be generated automatically, just using their voice. This would
help in situations where someone's Internet is shaky, or if
they're trying to save mobile data. But that's what Google
is saying about Duplex. It's just a nifty, little handy
dandy tool to help you book your hair appointment so
you don't have to talk to people on the phone.

(42:49):
They never talk about the, yeah, exactly.
And look, we can't ascribe motive. We
can't say these people are lying to you, but we
can say that this sort of technology is a Pandora's jar,
and once that lid is unscrewed, there's absolutely
no realistic way to prevent both the spread of these

(43:11):
faked segments and the spread of real segments being unjustly
accused of being fake. This poses some inherent dangers for journalism.
We already see how easily a completely fake story can
proliferate on Facebook, a bot-generated story at times, like
literally an AI-generated Twitter account, for example, that can

(43:32):
mimic the style of someone of a particular ideology or whatever.
Um, here's another aspect of it. There's an article from
The Information: Google's controversial voice assistant could talk
its way into call centers. Because, like we were saying earlier,
if you get an automated voice on the phone,
you know it's not going to be actually very helpful

(43:54):
at all. You're probably just gonna blast past it. And
they know that. And there are a lot of people
that have their jobs being that person that it gets
passed off to. So if there were a more successful
voice recognition and communication tool like this, it could put
a lot of people out of jobs. That's a very
good point. It could get us closer to not the

(44:14):
post-work economy, but the post-worker economy. And up
to now, we've talked about these two forms of impersonation
as discrete, different things, right? Audio on one side, video
on another. But what if they become combined? What if
a digital impersonation of a human being, even you, Noel,
or even you, Paul, or even me, could exist online

(44:37):
with no one but you and the people you meet
in person knowing the difference. I mean, let's think about it.
It would sound like us, it would look like us,
and if it pulled from our online data footprint, it
would also know a lot of stuff about us, including
the relationships we already have with other people and how
we interact with them. So it's possible, for instance, that uh,

(45:00):
fake version of Matt Frederick writes to us from Massachusetts,
foreshadowing there, and responds to the three of us the
way that the real Matt would, and we wouldn't know
the difference. We'd be sending weird memes and thumbs up
and doing our inside jokes, and it would already know
all of those. And let's be honest, I mean, a

(45:21):
lot of times text conversations and email conversations are already
kind of in shorthand, or a
little bit more terse, not mean, but just kind of
like we're trying to get it done. That's why we
depend on that method of communication over talking, because it's
a lot more boom boom boom, let's get it done
and move on. I would think it would be easy-ish
for an AI to mimic those kinds of little

(45:41):
quick exchanges and not raise a red flag for you
or me. That's a great point, because even if they
get something wrong, we would just think, oh, Matt must
be in a hurry, typo, or, you know, fat finger, whatever,
and/or autocorrect. Correct, exactly. Like, I've
sent texts and, like, seen how mangled they were, but
been like, oh, he'll know what I mean. Yes, like,

(46:02):
let me let it ride, yeah, yeah, yeah, because again,
it saves time. Now, it would be alarmist for
us to say this would plausibly happen to the average
person in the near future, but for a high value
target like a politician, a celebrity, a controversial businessperson, and
so on, it's completely within the realm of possibility. And
speaking of alarmists, let's open wide the doors of science

(46:23):
fiction and do a bit of speculation here, completely unfounded.
Imagine a world where the only communication you can trust
becomes face to face in person. Imagine a world where
you are accused of a crime you didn't commit, but
you don't have a rock solid alibi of your activities
at the time of the alleged crime, and surprise, surprise,

(46:44):
they have you on camera committing it and then confessing
to it, and you cannot prove it is not you.
Would this mean that eventually video evidence becomes inadmissible in court?
Does this technology pose an existential threat to the fabric
of digital reality? Is it really us talking to you
right now? Yeah? Man, I don't even know anymore. I

(47:05):
don't know. And to answer your hypothetical question earlier, yes,
I think this is absolutely a threat, and we rely
on things like video evidence. But I could see, far
enough removed from this particular technological time and place, that
could be something that is just a thing of the past, man,
you know what I mean? Can you picture it?
Like, I'm trying to think of an analog

(47:25):
of something that used to be infallible and now is
like completely up for grabs. Like, I don't know, polygraphs? Yeah,
polygraphs, exactly, a great example. I don't know. This is
a little bit more of an ephemeral example. But even
something like the press, or like the news. You know,
now, you used to be able to depend
on some level of rigor and truth in any kind
of news reporting you see. And now, as you say,

(47:47):
because of the Internet, we can see stuff that's completely
generated by artificial intelligence that tricks people all the time
into believing that it's, you know, God's truth, and it's insane.
So I wonder, you know, I have questions for you,
Noel. I don't want to put you on the spot,
but this is this is one of the only conversations

(48:08):
we can know is actually happening between us, right, I
mean we are sitting here and looking at each other
in the eyes, so we are definitely both humans because
as far as I know, we don't have human stand-ins
that are believable enough to trick either one
of us. It's like in Terminator, the first ones you
could tell the difference, right? So, do you

(48:29):
have any thoughts on the likelihood of this kind of
technology progressing or being used to disrupt the spread of
reliable information. Do you think it's a definite Do you
think it might happen. Do you think it's a little
bit sensationalistic? What do you think? I think it absolutely
could happen. I think it's another one of those things
that's kind of um dangled out there like a piece

(48:54):
of bait on a fishing hook for the American consumer
to bite onto and be, oh, this will change my
life for the better. This will make it so easy
for me to not have to make hair appointments or
schedule oil changes or what have you. And then, you
know, once the buy-in happens, it
opens that Pandora's jar that you keep talking about.
So yeah, no, there's no doubt about it. The more

(49:15):
access we give our devices and the more centralized our
information is. And I guess by centralized, I just mean
within a particular service where it has access to all
of it, um. And that's what it takes, right?
Why does Siri suck? Okay, listen, I just want to
get into this because I want to rag on Apple
for a minute. Siri is not a good personal assistant

(49:35):
because it doesn't learn, it doesn't keep track of what
you say. It's starting from scratch every time you ask
it a question, right? This is Siri, Apple's personal assistant,
which was one of the first ones to hit the
market but fizzled ridiculously because of Google's Assistant and Amazon's
Alexa and stuff, because those actually learn and are connected

(49:57):
to the Internet and learn your vocal patterns and learn
your preferences and have access to that bigger network, whereas
Siri is just kind of dumb. It doesn't really do that.
It can tell you stuff about Apple, it can tell
you a lot of stuff about the history of the company,
and it can do Google searches for you. But if
you said, hey, Siri, what time is this movie, it

(50:17):
will probably give you some tangentially related answers, but
not the exact answer you want. Time was invented shortly
after the Big Bang. Hey Siri, tell me a joke.
That's always fun. But no, Google Assistant will do
that because it learns your habits and figures out what
kinds of questions you're asking. So that's what I'm saying.
When we start getting into that where we're tapped into

(50:39):
this larger network, it does seem like a slippery slope.
But yeah, people, even the most paranoid of my friends,
some of them are all about these home assistants, and
it's this thing, this conversation we always have, where it's
like, I'll give it up if I can tell a little
box to turn on my lights for me. Uh, yeah,
I was given one of those things, and
I plug it in when I'm cooking and unplug it afterwards.

(51:01):
So it probably thinks that I am always cooking or
just don't have power at my house, which I'm fine with.
But we want to hear from you, assuming, again, per
the earlier question, that it really is us talking to you now.
What would you like to tell us through the robot
versions of us impersonating the biological versions of us? Do

(51:21):
you have a home assistant? Do you use this facial
recognition software that is so often marketed as a free
recreational pursuit. Do you believe that the future will be
fraught with things that make us question digital reality? Is
there a way to combat it? Should there be? Uh?

(51:43):
You can find us on Instagram, you can find us
on Facebook. You can find us on Twitter. You can
find some version of us. I feel like now we
have to say you can find some version of us
on there. You can also, of course, visit one
of our favorite places on the internet, our Facebook group,
Here's Where It Gets Crazy, where you can see Matt,
Noel, and I. Um, well, what do we do there?

(52:06):
On the internet, in Here's Where It Gets Crazy? All
kinds of stuff. We lurk sometimes, sometimes we respond and
hop around in the threads and post our own little
GIFs and memes, and we have a good time in there.
It's a lot of fun. And if none of those
are quite the badger for your bag, you are in luck.
You can contact us directly. We have an email address.

(52:29):
It is conspiracy at how stuff works dot com.

Hosts And Creators

Matt Frederick

Ben Bowlin

Noel Brown