Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Folks, tonight's classic episode is one that is near and
dear and frightening to all of us. It's a question:
will Google start impersonating you?
Speaker 2 (00:11):
Yes?
Speaker 3 (00:12):
Wow, Oh the days of July twenty eighteen, we were
so naive.
Speaker 2 (00:17):
Oh, the salad days of twenty eighteen. Well, we were so...
Speaker 1 (00:21):
Young, right. We had no idea what was going to happen,
because even DARPA can't fully predict the future yet. But
you know, I remember going back, because I lived
really close to our old office for a while. I
remember going back in during the pandemic, and there was
this scary moment where I saw one of our
(00:44):
coworker's calendars left on like March thirteenth, or like the
day everything shut down.
Speaker 2 (00:52):
That's like zombie movie stuff. That's eerie. I think.
Speaker 3 (00:55):
Was it on Robert's desk? I don't know. I just remember
seeing that too, Ben, like, oh wow.
Speaker 1 (01:01):
Close. It was our buddy Seth, who, yeah,
for a long time worked on Stuff to Blow Your Mind.
But yeah, it's dystopian. It's very The Last of Us.
Speaker 2 (01:13):
And you know, what we're talking about in this episode
is a tech that, I don't know if it made
it all the way through to fruition. It's certainly not
what we associate with this kind of stuff. But Google
was developing something called Duplex, which was a tech for
conducting natural conversations to carry out real-world tasks over
the phone. Now, of course, the name of the game
for that is ChatGPT and all of the many
(01:36):
worms that come in that bag, that worm bag, badger bag.
But this is kind of like, almost like a peek,
a sneak peek into, like, maybe what could be a
precursor to that, right, guys?
Speaker 1 (01:46):
One hundred percent, yeah. Because this idea of automating
one's day-to-day routine interactions feels cool. But
I am certain that in episodes like this, one of
us will inevitably reference that scary scene with
(02:07):
Mickey as a wizard in Fantasia. You know what I mean.
You get a mop and at first it's easy, it's
a lot of fun.
Speaker 2 (02:14):
Those brooms were psychotic.
Speaker 3 (02:17):
So come with us back to those happy days of
twenty eighteen, and let's discuss what AI might become.
Speaker 2 (02:24):
Knowing what it has become.
Speaker 1 (02:26):
From UFOs to psychic powers and government conspiracies, history is
riddled with unexplained events. You can turn back now or
learn this stuff they don't want you to know.
Speaker 2 (02:50):
Hello, and welcome back to the show. I'm Noel, standing
in for Matt with that line, because whenever Matt's not here,
Ben and I always look at each other for a
few minutes and realize that we're not quite sure how
to start the show.
Speaker 1 (03:03):
However, never fear: Matt is off on a very special
secret project that we cannot wait to tell you about.
We will have to tell you sometime soon, so look
forward to that. No spoilers. In the meantime, they call
me Ben. We are joined with our super producer Paul,
the Personal Digital Assistant, Decant. Yeah, PDA. But most importantly,
(03:24):
you are you. You are here, and that makes this
stuff they don't want you to know. We're pretty excited
about today's episode.
Speaker 2 (03:32):
What did PDA stand for when it was like a
Palm Pilot? Was it personal digital assistant?
Speaker 3 (03:37):
Uh?
Speaker 1 (03:39):
Yes, I believe. Or was it... okay, so there was
public display of affection?
Speaker 2 (03:43):
Well there's that, But they used to call like BlackBerrys
PDAs back before the advent of the iPhone and the
more tablet like device.
Speaker 1 (03:52):
Personal digital assistant. That's what it has to be.
Speaker 2 (03:54):
Isn't that funny, though? Because that's kind of become a
new thing, because today we're talking about the kinds of
personal digital assistants that you can talk to and that can
potentially talk back.
Speaker 1 (04:05):
Yes. So let's look at some of the context. It's
a very common trope in science fiction: robots impersonating human
beings with increasing levels of fidelity. We see it
in pop culture all the time. In some stories, like
everything related to the Terminator franchise,
machine consciousness tries to mimic humanity exclusively as
(04:28):
a means of waging war. And in other places, or
other series such as The Matrix, artificial intelligence, or AI,
attempts to surround us meat bags with an impersonation of reality,
complete with individual machine minds that can pass for humans.
And then in other cases, humans work together to build
technologies that can impersonate other human beings in any number
(04:51):
of ways.
Speaker 2 (04:51):
Yeah, sometimes for sexy-times reasons, sure, sometimes just for companionship,
and we've seen that go awry. I mean, the Terminator,
I think the whole point of that series was that
humans created Skynet or whatever to serve
their own purposes, and then Skynet said, what up, humans,
we're done with you, we're going to do our own thing.
Or like in Westworld. Mm hm.
Speaker 1 (05:12):
And this goes, like you said, in any number of ways,
from increasingly lifelike androids to entities that exist purely in
the digital sphere, able to hold genuine-seeming conversations and
functioning at or above what we would consider the average
level of human intelligence. As history has proven, science fiction
(05:34):
is often prescient, and it's not uncommon for authors to
spin fantastic tales, only for those tales to move, years
or decades later, from the realm of science fiction to the
world of science fact. And the quest for this human
like technology in real life is no different.
Speaker 2 (05:53):
Yeah, I mean, I actually saw a YouTube video, a
bunch of clips of times that science fiction films have
predicted technology that is totally a thing right now. Like
Star Trek, for example: those little communicators, they're basically iPhones.
And remember that scene in Total Recall where they're walking
through a full body scanner at the airport and you
see like their skeletons and you can see the weapons
(06:15):
hiding and stuff. That's pretty much what we do now
at the airport. We got to put our hands over
our heads and stand in those full body scanners where
hopefully they just draw a little cartoon of us with
the naughty bits scrubbed out. But in theory they can
see everything. So it is with AI and the kinds of
things we're talking about today.
Speaker 1 (06:32):
Yeah, and the concept of what we call artificial intelligence:
longtime listeners, as we use this phrase,
you may recall that some former guests of ours have
objected to the term artificial intelligence with a very good question.
What makes it artificial? If we're talking
about consciousness, what makes it any less of a consciousness
(06:53):
than our own?
Speaker 2 (06:54):
Right?
Speaker 1 (06:54):
And we think that's a very good and valid point.
We tend to agree with it.
But for the sake of brevity, we're just going to
go with AI as a non pejorative thing. It's just
easier to say it that way. The concept of AI
has surprisingly old roots in our culture, especially if we
consider those ancient tales of non human entities impersonating human beings,
(07:20):
the fairy stories of changelings switched out at birth, or
gods changing shape to breed with animals or people, or
shape shifters. In the twentieth century, the concept of this artificial,
inorganic thinking life form was popularized, just like the point
we made with science fiction, through culture and fiction. If
(07:41):
you think of one of the earliest artificial intelligences that
blew up in the Western world, it's the Tin Man
in The Wizard of Oz.
Speaker 3 (07:49):
Yeah.
Speaker 2 (07:49):
I never thought of him like that, but I guess
it's true. He's a good stand-in for that, because
even the heart that he gets at the end, spoiler
alert for The Wizard of Oz, sure, is like
a clockwork heart, you know. It's like something to add
to his machinations. It's not actually a physical heart. So
he's absolutely meant to be like an automaton, which is
kind of the earliest form of robotics. Those go back
to ancient times, even, where you have these incredibly intricate
(08:12):
creations that move through a series of gears and pulleys
and what have you, but aren't necessarily imbued with any
kind of like ability to make choices. But there are
some that can even like play games. I think there's
one that's like a writer.
Speaker 1 (08:25):
The Mechanical Turk. Well yeah, yeah, yeah. And they were
lauded for their ability to appear to do human-esque
things, right, but people generally did not think they had
a soul, for instance.
Speaker 2 (08:39):
And it kind of goes back to, you were mentioning
we'd had some folks take issue with the term
artificial intelligence. We've also had some
folks write in taking issue with the idea
of machine learning, because, you know, it is a matter of,
we're still at a place where we have to
program machines to do what we want. We certainly
have yet to fully experience this idea of
(09:01):
a singularity, where the machine takes what we've imbued it
with and develops its ability to go outside of that,
or, like, make decisions outside of the parameters that we've
preprogrammed. Very rarely have we seen that, and when
we do see it, it typically gets shut down.
Speaker 1 (09:16):
It's, yeah, metacognition, right, thinking about thinking. So,
By the nineteen fifties, scientists and mathematicians and philosophers were
familiar with this concept of quote unquote artificial intelligence, and
our species began to cognitively migrate from this world of
fantasy toward a world increasingly grounded in fact and we
(09:39):
can hit some of the high points of artificial intelligence here.
In nineteen fifty, the famous codebreaker Alan Turing made one
of the most significant early steps in real life AI
when he and some of his colleagues created what we
now commonly call the Turing Test. It's named after him,
of course. He wrote this paper called Computing Machinery and Intelligence
(10:01):
that laid the groundwork both for the means of constructing
AI and for the ways in which we could measure
the intelligence of that AI or our success building it,
which might kind of be two different things. And, well,
amazingly and sadly, this line of thought was ahead of
the curve. Turing could not get right to work building
(10:24):
these human-like minds, or, more specifically, he couldn't get
to work building minds that could fool humans into thinking
they were also human minds, because the technology at the
time had hard limits, like, to the point you made,
Noel, about the Mechanical Turk. Up until nineteen forty-nine
or so, computers couldn't really store commands (we can say
(10:47):
commands are decisions here). They could only execute them.
this meant that the computers we could build at that
time were unable to satisfy a key prerequisite of intelligence.
They couldn't remember past events, past information, and therefore they could
not use this memory to inform present commands or decisions.
(11:07):
And in this way, computers began at that tabula rasa state,
that blank-slate state that so many mystics, spiritualists, and
philosophers spend their lives attempting to attain. I think that's fascinating. Computers,
like people who practice hardcore meditation, existed mentally only in
(11:28):
the present.
Speaker 2 (11:29):
Yeah, And we'll get back to that concept in a
little bit in terms of some of the newer computers
and how they are able to quote unquote figure things
out some better than others. But let's just go back
to the fifties for a second, when computers were insanely expensive.
I think this is no secret. They ran, you would
lease one, you couldn't even own it, that wasn't
(11:50):
a thing, for about two hundred K a month, which,
I believe, Ben, in today's standards would be about
two million dollars for one month of computer use. Yeah,
that's a lot of eighteen-dollar Netflix subscriptions, right, my friend.
And only, of course, the most prestigious universities and huge
technology corporations could even you know, afford the cost of entry.
(12:14):
So for Turing and his ilk to actually be able
to succeed in building something resembling anything close to artificial intelligence,
they would have to be part of some network of
high-profile, very wealthy, and influential funders of research,
(12:35):
that they would be able to receive just an
absolute boatload of money from to get this kind of
work done.
Speaker 1 (12:41):
And imagine, you know, that's a hard sell. It seems really
interesting now, based on what we know. Yeah, yeah.
But back then it was literally walking up to people
and saying, hey, we're good at math, and do you
remember the Tin Man from The Wizard of Oz? We sort
of want to make that. Can we have all of
the money? So that's tough, but they soldiered on.
In nineteen fifty five, another groundbreaking event occurred. There was
(13:03):
the premiere of a program called The Logic Theorist. It
was supposed to mimic the problem solving skills of a
human being, and it was funded by one of our
favorite shady boogeymen, the Research and Development Corporation known today
as RAND.
Speaker 2 (13:19):
The RAND Corporation. That sounds like something out of a
sci-fi movie in and of itself, and it's still around,
doing pretty interesting and secretive work, because I believe
they ended up having a relationship with the US government.
Oh yeah, big time.
Speaker 1 (13:33):
You're absolutely right. This Logic Theorist is considered by many
people to be, technically, the first artificial intelligence program.
And there was a big conference in nineteen fifty six
hosted by John McCarthy and a guy named Marvin Minsky,
and in this conference they presented the Logic Theorist as
(13:55):
a proof of concept. The conference is called the Dartmouth
Summer Research Project on Artificial Intelligence. And you know, Noel,
I thought of you with this because I know how
how we both love acronyms.
Speaker 2 (14:08):
Well, I feel like in this one, the D would
be silent. So I'm just going to call it SURPI.
Speaker 1 (14:11):
That's great. That sounds better, because the D, you...
Speaker 2 (14:14):
Know, you can't really do a DS sound.
Speaker 1 (14:20):
So the conference itself fell short of the original, very
ambitious aims of the organizers. They wanted to bring together
the world's best and brightest subject-matter experts and, by god,
make an artificial mind over a weekend, right? Right. They
said it took God seven days; let's, uh, what do
you say we do it in four? Something like that.
(14:42):
But the problem was pretty much everybody disagreed on how
exactly you would make a human like intelligence. And at
this point they're still thinking in terms of artificial intelligence
being like humans, which is a huge assumption. But they
unanimously agreed for the very first time on a single
crucial point that it was possible to make AI, and
(15:04):
this set the stage for the next two decades of research.
Speaker 2 (15:08):
So, from nineteen fifty-seven to nineteen seventy-four, artificial
intelligence, interest in it and research in it, really flourished.
Computers were improving by leaps and bounds.
They could store more information, which is crucial,
and of course they became faster and cheaper and more accessible.
Machine learning algorithms also began to improve, and people got
(15:30):
better at knowing which algorithm to use for their particular purposes,
which was also important, because there was sort of an
established language of algorithms that people could pick and
choose from to suit their particular problems.
Early successes like the General Problem Solver. Ben, tell me
a little bit more about the General Problem Solver.
Speaker 1 (15:51):
It does what it says on the tin. You could give
it a variety of problems, okay, and generally speaking it
would attempt to solve them.
Speaker 2 (15:58):
Not necessarily going to give you the most creative solution,
but, yeah.
Speaker 1 (16:01):
And again, that's like an early-days thing. So
at the time it was very impressive, but of course
it was just the beginning. And you know, you had
another example, I think, of AI application in that time.
Speaker 2 (16:18):
Yeah, this is when Alexa was first invented.
Speaker 1 (16:21):
Oh yes, yes, I'm kidding.
Speaker 2 (16:22):
It's called ELIZA. But this is like a spoken language interpreter,
which, I don't know, I wonder if Alexa is
a nod to ELIZA. What do you think, Ben?
That would be pretty cool. Only a couple letters off.
It was a spoken language interpreter that helped convince the
government, like, okay, this is something that we're into,
and this was really important for our story today.
Speaker 1 (16:44):
Yeah, it convinced the government, specifically DARPA here in the
United States, to start funding artificial intelligence. DARPA, as you
know if you are a longtime listener of this show,
is the resident mad science department of the
United States government, and it stands for Defense Advanced Research
(17:08):
Projects Agency. They're the ones who do all the X-Files-level
or Fringe-level stuff you hear about in
the news, or, often, they're at the forefront
of it. And then there's also stuff like Matt's
favorite public-private partnership, Skunk Works.
Speaker 2 (17:25):
It's just a good name. I mean, you can
like it for nothing other than that. But they do
all the secret Air Force projects, and, you know, spy planes
and all kinds of stuff. It's the kind of technology,
when you think about it, that is probably years ahead of anything we've
seen out there in the world today.
Speaker 1 (17:42):
It's mistaken for UFOs, right, right. This made people very gassed,
very optimistic, because that's another very human thing that we
haven't quite learned how to program: optimism. In nineteen seventy,
Marvin Minsky, that guy who co-hosted this conference, he
told Life magazine: in from three to eight years, we will
(18:03):
have a machine with the general intelligence of an average
human being. He was wrong. But to your point about
suppressed technology, Noel, he's wrong so far as we know.
Speaker 2 (18:15):
It's interesting though too not to get too far ahead
of ourselves. But with everything that's going on with big
public facing companies like Google and Amazon and Apple and stuff,
you start to get a sense that maybe the secret
government stuff isn't quite as far ahead as it once was, right.
I mean, that's just my opinion. I don't know.
Speaker 1 (18:34):
I think it's a good opinion. It's something we want
to hear from you about, ladies and gentlemen, because it's
proven that, in terms of physical hardware, materiel of war,
weapons and aviation and stuff, the
US and most other governments want to keep that stuff
(18:55):
under wraps.
Speaker 2 (18:56):
Absolutely. I guess what I'm saying is when you're looking
at a company that's trying to sell you something and
you see how far they advance with each update every year, right,
you kind of get a sense that maybe this is
about where they're actually at.
Speaker 1 (19:08):
That's the point. So I bring up the aviation
stuff and the weapons-of-war stuff to contrast it here,
because what we're seeing also is that the governments of
the world, this is true, the governments of the world
usually can't pay the best and brightest as much as
(19:29):
the private entities can, so those entities are grabbing some of the
best workers, unless those people are severely ideological, and
they're making most of the progress, which they would later
sell to a government. So I think maybe there is
a little more transparency.
Speaker 2 (19:44):
I think so, which is an interestingly positive thing.
Speaker 1 (19:47):
But let's, let's keep...
Speaker 2 (19:48):
Yes, let's keep, let's keep humming right along.
Speaker 1 (19:50):
So the quest for human like artificial intelligence soldiered on,
but it still had tremendous obstacles. Although computers could now
store information, they couldn't store enough of it, and they
could not process it at a fast enough pace. So
funding dwindled until the eighties. And that's when AI experienced
(20:11):
a renaissance, which we'll get to after a word from
our sponsor.
Speaker 2 (20:21):
So in the nineteen eighties, new tools and methods gave AI,
the field of AI, that renaissance that Ben mentioned, that
kick in the digital pants. A guy named John
Hopfield and another fellow named David Rumelhart popularized this concept
of deep learning, which is a technique that allowed computers
(20:43):
to learn using experiences, paying attention to surroundings. For example,
the idea, the fact that our phones are serving
us ads because we're talking about stuff, and it's taking
that information in and using it to do something,
learning our habits, right. And this is a really important
(21:04):
part of things like these personal digital assistants that we
talked about at the top of
Speaker 1 (21:09):
the show. Right, mm. And on the other side of
the brain here too, or, I guess, in a parallel approach,
a guy named Edward Feigenbaum introduced expert systems that mimicked
the decision-making process of a human expert. So what
happens is, the program will ask an expert in the
(21:29):
field, like it would ask super producer Paul or Richard
Feynman a question about production or physics, and they
would say how they respond in this given situation.
And it would take this for every
situation it could soak up, and then non-experts could
later receive advice from that program about that information. It
(21:51):
sounds basic now. It sounds like how a search engine
can know, with increasingly accurate levels of fidelity, what question
you mean to ask when you ask a question. Remember when
Google used to be super passive-aggressive, yeah, and say,
did you mean?
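To make the expert-system idea concrete, here is a minimal sketch in Python. It is purely illustrative, not Feigenbaum's actual architecture: the domains, situations, and canned answers are all made up for the example.

```python
# Minimal expert-system sketch: advice captured from an "expert" is
# stored as if-then lookup rules, then replayed for non-experts.
# All entries here are invented for illustration.
rules = {
    ("production", "mic is peaking"): "Lower the gain and back off the mic.",
    ("physics", "why is the sky blue"): "Rayleigh scattering favors shorter wavelengths.",
}

def ask_expert(domain: str, situation: str) -> str:
    """Return the stored expert advice, or admit ignorance."""
    return rules.get((domain, situation), "No expert has answered that situation yet.")

# A non-expert consults the system later:
print(ask_expert("production", "mic is peaking"))
print(ask_expert("physics", "what is dark matter"))
```

Real expert systems of the era chained many such rules together, but the core idea, soak up an expert's responses once and serve them to anyone, is the same.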
Speaker 2 (22:05):
Well, now it just fills it in for you, yeah,
Google Instant. And the reason for that is that it
has access to every entry that anyone has ever put
into Google, and so it combines all that information and
makes the best guess as to what it thinks you're
probably searching for based on what everyone else that has
started searching for that kind of thing has also done,
which is that same deep learning stuff that is coming
(22:27):
into play.
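The autocomplete behavior Noel describes, guessing your query from what everyone else has typed, can be caricatured as frequency counting over past queries. A toy sketch, assuming nothing about Google's real ranking signals:

```python
from collections import Counter

# Toy autocomplete: count how often each past query was searched, then
# suggest the most common ones that start with what you've typed so far.
past_queries = [
    "how to tie a tie", "how to tie a tie", "how to train your dragon",
    "how to tie a bowline", "weather today",
]
frequency = Counter(past_queries)

def suggest(prefix: str, k: int = 3) -> list[str]:
    """Most frequent past queries beginning with the typed prefix."""
    matches = [(q, n) for q, n in frequency.items() if q.startswith(prefix)]
    matches.sort(key=lambda qn: -qn[1])
    return [q for q, _ in matches[:k]]

print(suggest("how to tie"))  # ['how to tie a tie', 'how to tie a bowline']
```

The "deep learning" the hosts mention adds far richer signals (your history, context, language models), but the aggregate-behavior idea is the same.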
Speaker 1 (22:28):
Which makes predictive text so funny.
Speaker 2 (22:30):
It can be. I guess it depends on the platform
you're on, and we'll get to that in a little bit too,
because sometimes it can be just really boneheaded. And,
haven't you figured this out yet, Siri? Spoiler alert: no, he hasn't.
But this was all really put to the test in
the nineties and two thousands when we really hit some
big landmark moments and high water mark moments in artificial intelligence,
(22:54):
like with Garry Kasparov, who I know that you are
fascinated with, although he is a problematic figure at times.
Speaker 1 (23:00):
Yeah, yeah, raging anti-Semitism aside. Yeah, chess grandmasters, the
human ones at least, right, are immensely, even fractally, compelling,
because there's always some other layer to their personality and
you have to wonder about the purported correlation between mental
(23:23):
instability and high thresholds of intelligence, because well, that's a
story for another day. We should do an episode. Yeah,
might get a little close to home for us, but
we can do it.
Speaker 3 (23:35):
So.
Speaker 1 (23:36):
Yeah, as you're saying, he was defeated by Deep Blue,
built by IBM, a computer just built to play chess.
This was a John Henry moment for the human race.
And then a scientist named Cynthia Breazeal created Kismet, a
program capable of recognizing and displaying emotion. Now, for all our
(23:56):
techno futurists philosophers in the crowd, does Kismet recognizing and
displaying emotion automatically mean that Kismet experiences emotion? Story for
another day. But these person versus program one on one
matches didn't stop with the game of chess. There was
(24:18):
also Jeopardy, right, And then there was one other one
that had tremendous implications for the world of programming.
Speaker 2 (24:28):
Yeah, and that was much more recently, with Google's AlphaGo.
That's Go, as in the ancient Chinese strategy game of Go,
and it successfully defeated a Go champion, Ke Jie. And Go
is a notoriously complex game, very difficult to predict, and you
(24:49):
always have to be many, many moves ahead of your opponent.
So that's pretty cool. That's almost a step further. Wouldn't
you say, Ben, that Go would be more challenging for
a computer, to beat a human opponent at, than chess,
even? Like, yes, this is like a real serious leap forward.
Speaker 1 (25:06):
I would absolutely agree. And a lot of people were
even more skeptical watching that game. People understood Go. We
can also go out there and say: Paul, I don't
know if you are familiar with this, but are you,
are you a Go enthusiast? Do you play this game?
Speaker 2 (25:25):
Not really, Noel, I've never played it. I remember it from
the movie Pi. Remember Pi? Yeah, one of Darren
Aronofsky's first movies, which people kind of crap on, but
I enjoyed it a lot when I was young anyway.
But that movie is about kind of a mad
computer scientist who realizes that the Kabbalah contains some kind
(25:48):
of code involving the number pi, and goes
down a crazy rabbit hole of insanity and paranoia. But
Go is a recurring game in that movie, implying that
it's all about high-level, very, very high-level thinking.
Speaker 1 (26:03):
Yeah, I think that's, I think that's excellent. You know,
Pi is a fantastic but jarring film. And
so we see a common trend in these, again, we'll
call them John Henry moments. And people generally know the
story of John Henry, right, Noel? You think they do?
Speaker 2 (26:22):
I know he was a steel driving man, right, give
me some more.
Speaker 1 (26:25):
Well, John Henry, for anyone who is outside of the US,
and I don't think we know for sure how common
this story is in the States anymore: in folklore,
he was an African American steel-driving man, as Noel said,
that's absolutely true. And his job was to hammer a
steel drill into rock to make holes for explosives, right?
(26:48):
And they would blast rock constructing a railroad tunnel. And
in the legend, he was pitted against a newfangled
steam-powered hammer. It was a man-against-machine race.
And the historical aspect, whether this did or didn't
happen, people go back and forth on it.
(27:11):
But what we're experiencing now with AI is a series
of increasingly high-stakes John Henry moments. Ke Jie, you said;
Garry Kasparov; was it, was it Ken Jennings who played
Deep Blue? Yeah, it was our own buddy.
Speaker 2 (27:27):
Colleague, Ken Jennings of Omnibus fame, and then other things. Yeah,
he does.
Speaker 3 (27:35):
Yeah.
Speaker 1 (27:35):
Yeah, in addition to, you know, his big tent item,
which is hanging out with us at HowStuffWorks.
Speaker 2 (27:42):
Right.
Speaker 1 (27:43):
Uh, so now we are on the cusp of a
world that is both brave, to quote Aldous... Watson,
Watson, Watson! I said Deep Blue, and that's the chess.
Speaker 2 (27:55):
On the chess one. Watson was the quiz bot,
also by IBM, though, right? That's correct.
Speaker 1 (28:00):
Okay, yes. So, Ken Jennings versus Watson, Kasparov versus
Deep Blue. And now we are on the cusp of
a new world that is both brave, to quote Aldous Huxley,
and strange. The average citizen in a developed country interacts
with some form of artificial intelligence on an increasingly frequent basis,
(28:25):
even if it's indirect. I mean, think about it. When
is the last time you called a large company and
didn't first go through an automated line, a rough impersonation
of a conversation with a computer.
Speaker 2 (28:36):
I just always start mashing zero right away, just to
get past it, because it's so annoying. Because it's like,
if I thought that a computer simulation could help me
solve the problem, I probably would have just done this
online, right? It's the fact that I'm calling the company
in the first place. Not to derail it with my,
sure, 'get off my lawn' moment. No, please do, seriously.
(28:57):
It's like they're never that helpful. You always have to
repeat yourself over and over again, and you usually do
not get the result you're looking for, which could change
with some of the new technology we're talking about today,
which I think we can get into right now.
Speaker 1 (29:12):
Possibly. Maybe, maybe it'll change, because one of the big
tasks for artificial intelligence right now, what we use these
applications for now, is big data. We have built so
many ways to scoop up information that our poor
old primate brains, which are built for foraging and living in
small bands and forests, cannot analyze and process all this
(29:35):
stuff ourselves. So we built machines and codes (in this
case, codes could just be standing in for lines of thought)
to decipher all this data for us. And the application of
artificial intelligence in this regard has already done amazing things
in industries like technology, banking, marketing, entertainment.
Speaker 2 (29:53):
Well, it's automation of, like, sweeping through giant sets of
data, sure, so that humans don't have to do it,
and using particular, what's the word, algorithms, I guess, is
one way of putting it, or just rules saying: look
for this thing in this set of data, parse out
this file, if-then whatever. So that's kind of been
the primary use of it up to this point, in
all of those things you said. Yeah, so what's next?
(30:17):
Right right?
Speaker 1 (30:18):
Because right now, it doesn't matter whether the algorithms
improve much; it doesn't matter whether there is a sea change.
The basic concept is there. And this massive computing approach,
this vacuum-cleaner approach, for instance, that the NSA uses,
just allows artificial intelligence to learn through brute force. In
(30:39):
the future, we're going to see more variation in AI.
We'll see autonomous vehicles, we'll see predictive ordering services, which
I know most of us will hate, and soon it
will seem strange not to have some sort of artificial
intelligence existing in some aspect of your everyday life. But
today's question, and a spooky one, is how close to
(31:02):
human can these programs actually get?
Speaker 2 (31:06):
And I think we'll get to that after just one
more quick little sponsor break.
Speaker 1 (31:17):
Here's where it gets crazy.
Speaker 2 (31:22):
Hi, how can I help you? Hi, I'd like to reserve
a table for Wednesday.
Speaker 1 (31:26):
The seventh.
Speaker 2 (31:29):
For seven people? It's for four people. Four people, when?
Wednesday at six pm. Oh, actually, we reserve for, like,
upwards of, like, five people. For four people, you can come.
How long is the wait usually to be seated
(31:53):
For when? Tomorrow, or a weekday? For next Wednesday, the seventh.
Speaker 3 (32:00):
Oh no, it's not too busy.
Speaker 2 (32:02):
You can come. Okay, oh, I gotcha. Thanks. But then,
that didn't sound like anything at all. That was
just a conversation between a young man and the proprietor
of a Chinese restaurant.
Speaker 1 (32:17):
Right. And who, who is, whoever has a great
time making restaurant reservations?
Speaker 2 (32:24):
Now you know. No, because then we have OpenTable
for that. Am I right, Ben?
Speaker 1 (32:28):
Right? I, like many people in our generation, hate talking
on the phone.
Speaker 2 (32:32):
It's true. There's even a term for that. I think
it was telephoneophobia. Yes, yes, that's a good one.
Speaker 1 (32:37):
But no, yeah, when I found that one,
I was like, how can you make, can you
just make anything a phobia? And if you can, I
think it's very American English.
Speaker 2 (32:45):
Now I have microphoneophobia, which is weird considering that I
sit in front of one all the time. Right now,
I'm in utter terror. But no, I was being coy.
That clip was not between two humans. I think you
can probably guess which side of the conversation was the
non-human. Yep. Because if you listen back to it,
I think you'll notice a couple of tells, in that
(33:07):
there were a few, there were a few kind of
wrenches thrown into that exchange where the person was trying
to call to make a reservation, and there was some
confusion from the person on the other end about how many,
about what day they wanted the reservation to be, how
many people were in the party, and it ultimately ended
up with, well, you don't really need a reservation. So
(33:29):
the person on the other end of the phone was not equipped.
It was not something they would typically do. Reservations seemed
like not really a thing in this restaurant, and so
the caller kind of repeated himself.
Speaker 1 (33:41):
Specific things, like the specific time and day, and then
there was a momentary pause when they said, we only
do reservations for groups of five or more. Ah, gotcha.
Which still sounded very human.
Speaker 2 (33:54):
It did.
Speaker 1 (33:55):
It was not, though. It was a conversation between an
unsuspecting restaurant employee, as you mentioned, and a computer program.
This is an example of Google's Duplex system, a personal
assistant designed to make users' lives easier by handling standard
phone calls, so like doctor's appointments, reservations, and so on,
(34:17):
breaking up with a loved one without troubling you or
possibly even letting the person on the other end of
the phone know that it's not actually you making the call.
Speaker 2 (34:27):
Yeah, and we get into all kinds of ethical quandaries
with this, which we'll get into. First and foremost, though,
I think it's interesting that, the idea, it's inherently tricksy,
the whole affair, right? The idea is to not inconvenience
either party, but you, you are genuinely kind of
tricking someone into believing they're talking to a person.
Speaker 1 (34:47):
Yes, and you're giving up a lot of stuff. So
at first this seems amazing because, like we established earlier, telephoneophobia,
in addition to being really fun to say, is a
genuine thing. I would argue, increasingly people are resorting to text.
Speaker 2 (35:03):
Well, I mean, how often do you talk on the
phone if you're not talking to, like, your mom or dad,
like at a preappointed time?
Speaker 1 (35:09):
I mean, if I'm doing interviews for a podcast.
Speaker 2 (35:12):
Well, that's true, but that's a very specific task. Not
for fun, never for fun. And, unless I just feel
like I could suss something out a lot quicker on
the phone, real quick, than I could otherwise, it's usually
for business, though, for work stuff sometimes, or conference
calls, the dreaded conference call. That's
Speaker 1 (35:28):
Also works, the thing that still happens.
Speaker 2 (35:30):
But yeah, for fun? Not very often, you know. But
for other people in the mix, there's totally a dark
side to this idea, this super convenient, amazing idea
of my phone being able to make calls for me
and set up my, you know, my waxing appointment or whatever,
(35:50):
because the implications are incredibly far-reaching, aren't they, Ben?
Speaker 1 (35:54):
Yeah, they are, Noel. It's not just, it's not
just an automated phone line saying press one if you'd
like to make a payment, press two if you'd like
to hear a mailing address. It's not just impersonating a
generic human now. It's a program capable of impersonating specific humans,
(36:16):
namely you, capital Y-O-U, you, specifically you listening to this.
And it's got, it's got a lot of opponents already.
Speaker 2 (36:24):
Now, we're not saying that it mimics your voice. That's
not what this is implying. Not yet. Not yet. But
the idea is that it has access to
all your information, right? It has access to your date
books and your phone numbers, and knows everything about you
because it's tied into this little thing you carry around
(36:44):
with you, that is basically your life in wallet
form, you
Speaker 1 (36:47):
Know more specifically with Google. This this initial fear increases,
or the potential for misuse increases. This journalist, who wrote
an excellent article in her name's Alex Kranz. To her,
the dangers are threefold. First, similar to what you're proposing,
this is a program from Google from Alphabet, and Google
(37:10):
already knows a ton of personal information about you, whether
or not you use an Android phone. This means that
Duplex doesn't just know what time you want to meet
your Tinder date at that tapas place. It also potentially
knows every other single thing Google knows about your life.
And Google is so widespread that you do not have
(37:30):
to have a Google account for this to touch you. Second,
a criminal, a government, a stalker, a prankster, some Internet
troll could potentially cheat Duplex out of giving up your
information in a phone call. Imagine Duplex somehow being fooled
into thinking it's calling a restaurant to make a delivery
(37:51):
order and then boom, somebody has your credit card information,
expiration date, and security code.
Speaker 2 (37:57):
There were actually a really great, a whole bunch of
great comments in the comment section on this Gizmodo article
by Alex Cranz, and one of them was the idea of, like,
how this technology could be used by, this term I
love, bad actors. You know, so, for example, like, what
if you had a pizza place that wanted to use
this kind of technology to flood a rival pizza company with
fake calls and keep their phones tied up? And I
don't know that was a particularly one in sort of
a far fetched one, but the idea, the implications being
that you could use this for scamming easily, big time,
especially with older folks that maybe aren't going to key
in to the fact that this is not a real person,
and you could possibly automate and scale scamming in a
(38:45):
way that would far surpass the way it's done now,
because even the least tech-savvy people can tell when
they're hearing an automated voice, you know, spamming you to
try to, you know, sign up for a cruise or
something like that.
Speaker 1 (38:59):
Absolutely. And to follow up on Cranz's piece, because we said
it was threefold, so to follow up on the third thing,
which I found perhaps the most disturbing here. No, forget
perhaps: it is the most disturbing thing. A person could
hijack your Duplex account and essentially function as you until
(39:22):
such a point as they're caught. So imagine learning your
credit and savings have been wiped out because your Duplex
called in a series of untraceable transfers to some Caribbean island,
some offshore account. There's nothing that anyone could do, because
legally that thing would be functioning as you. Cranz and
other critics of this technology are also concerned that at
(39:43):
the initial presentation, Google billed this as another neato feature,
and they didn't say anything about privacy or concerns. It
was very much 'look what we can do,' not 'hey,
should we?' Just
Speaker 2 (39:54):
Like it's like that line from Jurassic Park. Your scientists
were spending so much time thinking whether they could, They
never thought if they should.
Speaker 1 (40:02):
And crime finds a way.
Speaker 2 (40:04):
It does. It does. But, you know, and you may
think this is alarmist, but we're
just a hop, skip, and a jump away from this stuff.
And obviously this technology is not ready for market. No,
no one's saying that. This is just kind of like a
neato party trick they did at their I/O developers conference.
But yeah, you're right, because if it's literally the voice
inside your phone, then it's going to have access to
(40:28):
all the data that's in your phone. And depending on
how careful you are with this stuff, it could have
every bit of identifying information that a human banker would
ask you to give to confirm this crazy wire transfer
that you want to do. What do they ask you for,
Ben? They ask you for, like, the last four digits
of your social security number, your mailing address, your name,
(40:48):
maybe your mother's maiden name, or some secret word, some
security question. That's all going to be on your phone
in some way, shape, or form.
Speaker 1 (40:55):
Likely. And many people email themselves those answers so that
when they're online they can check back, including passwords. So
it's a very important point to make that this
Duplex stuff has not been assigned any kind of
rollout date. People are still speculating just how close
or how far Google is from making this technology viable
(41:15):
outside of testing situations and later available to the public.
But the problem is, this is not the only game
in town. Sure, audio impersonation is spooky, but what about video?
Do you remember, back during the hunt for Osama bin
Laden, when various people were arguing that the bin Laden
in propaganda videos wasn't the real guy, but someone else
(41:36):
impersonating him or.
Speaker 2 (41:37):
Something like that. Sure, or like a Snapchat filter. Right, Yes,
it's kidding, but I'm not because all of the crazy tech.
It blows my mind how much research and development they
put into the Snapchat filter. Oh yeah, and the more
you see them, the more realistic they get, and the
more I could see them turning into some pretty nefarious
ways of mapping people's faces and you know, making you
look like someone else entirely and making it not just
(41:58):
a silly mustache, but a totally believable doppelganger.
Speaker 1 (42:02):
Yeah, and we've seen, look, visual manipulation is a tale
as old as journalism. We've seen doctored photos aplenty
purporting to be evidence of UFOs, giant skeletons, ghosts,
and we as a species have known for a long time
that photos can be faked. Someone with a handy knack
for Photoshop, Paul, can really work wonders. And nowadays, we've
(42:27):
gone beyond the touch-ups of still images.
Speaker 2 (42:30):
Have you seen that doctored Stuff They Don't Want You
to Know logo? There's something fishy going on with
that hand.
Speaker 1 (42:37):
Suspicious indeed, but again, no spoilers, right? So, if you're
wondering, fans of Westworld: yes, that maze is for you.
As The Verge reported in July of last year, twenty seventeen
(we're recording this in July twenty eighteen), researchers at the University
of Washington invented a tool that takes audio files, converts
(42:58):
them to realistic mouth movements, and then grafts those onto
existing video. The end result of this is someone saying
something or appearing to say something that they never actually said.
And the scary part is, it's really convincing. It's not,
like, meme-level GIF funny. A lot of their earlier
examples used footage of former President Barack Obama. Did you
(43:18):
see this stuff?
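The pipeline described here (audio in, mouth movements out, grafted onto existing frames) can be caricatured in a few lines. This is a deliberately crude stand-in, with raw loudness substituting for the learned lip-sync model; it is not the University of Washington tool, and every name below is invented for illustration:

```python
import math

# Caricature of the audio-to-video pipeline: louder audio -> more open
# mouth. A real system learns this mapping from hours of footage; here
# a fake sine-wave "voice" stands in for the audio track.
audio = [abs(math.sin(t / 3.0)) for t in range(10)]   # per-frame loudness

def mouth_openness(loudness: float) -> float:
    """Stand-in for a learned model mapping sound to lip shape (0..1)."""
    return min(1.0, loudness * 1.2)

# "Graft" the synthesized mouth parameter onto existing video frames.
frames = [{"frame": t, "mouth": round(mouth_openness(a), 2)}
          for t, a in enumerate(audio)]
print(frames[:3])
```

The published work replaces that one-line mapping with a network trained on many hours of footage, which is why the results are convincing rather than cartoonish.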
Speaker 2 (43:19):
Well, dude, I mean, and I don't know, I
keep harping on this, but there was a Barack Obama
Snapchat filter, and it does have that uncanny valley look.
But first of all, problematic, you know, for a lot
of reasons, sure, to put on the visage of someone
that's a different race than you and play a character,
and, you know.
Speaker 1 (43:38):
Oh, just Snapchat users were using it.
Speaker 2 (43:40):
That's what I'm saying. It was literally a Snapchat filter.
There was also another one that was a Bob Marley one,
you know, with, like... but it's pretty damn realistic,
other than things they probably did on purpose to make it
less realistic. But that kind of mapping quality that you're
talking about, the implications there are nutty.
Speaker 1 (43:57):
Yeah. And luckily, for everybody I just spooked out,
we have some good news here. First, they didn't just
choose Barack Obama because they had some sort of ideological
thing, or they were like, we like this guy, or
we hate this guy, or anything. They did it because
a high profile individual, like a celebrity or a president
will have a ton more high quality video and audio
(44:20):
footage to pull from. And also, this thing takes,
it's a huge attrition process. They had to have
at least seventeen hours of footage just to get started.
And they say, so.
Speaker 2 (44:33):
There's another neural net kind of learning.
Speaker 1 (44:35):
Exactly yap learning stuffing. So this is a problem. They
say that their goals are wholly good-hearted. We've got
a quote here: the team behind the work say they
hope it could be used to improve video chat tools
like Skype. Users could collect footage of themselves speaking, use
it to train the software, and then when they need
(44:55):
to talk to someone, video on their side would be
generated automatically, just using their voice. This would help in
situations where someone's Internet is shaky, or if they're trying
to save mobile data.
Speaker 2 (45:07):
But that's what Google's saying about Duplex. It's just a
nifty little, handy-dandy tool to help you book your
hair appointment so you don't have to talk to people
on the phone. They never talk about the...
Speaker 1 (45:18):
Yeah, yeah, exactly. And, look, we can't ascribe motive.
We can't say these people are lying to you, but
we can say that this sort of technology is a
Pandora's jar, and once that lid is unscrewed,
there is absolutely no realistic way to prevent both the
spread of these faked segments, yeah, and the spread of
(45:41):
things being unjustly accused of being fake segments. This poses some
inherent dangers for journalism. We already see how easily a
completely fake story can proliferate on Facebook.
Speaker 2 (45:52):
A bot-generated story, yeah, or, like, literally an AI-generated
Twitter account, for example, that can mimic the style
of someone of a particular ideology or whatever. Here's
another aspect of it. There's an article from TheInformation.com:
Google's controversial voice assistant could talk its way
into call centers. Because, like we were saying earlier, if you
(46:17):
get an automated voice on the phone, you know it's
not going to be actually very helpful at all. You're
probably just going to blast past it. And they know that.
And there are a lot of people that have their
jobs being that person that it gets passed off to.
So if there were a more successful voice recognition and
communication tool like this, it could put a lot of
(46:37):
people out of jobs.
Speaker 1 (46:39):
That's a very good point. It could get us closer
to, not the post-work economy, but the post-worker economy.
And up to now, we've talked about these two forms
of impersonation as discrete and different things, right? Audio on
one side, video on another. But what if they become combined?
What if a digital impersonation of a human being, even
(47:00):
you, Noel, or even you, Paul, or even me, could
exist online with no one but you and the people
you meet in person knowing the difference. I mean, let's
think about it. It would sound like us, it would
look like us, and if it pulled from our online
data footprint, it would also know a lot of stuff
about us, including the relationships we already have with other
(47:22):
people and how we interact with them. So it's possible,
for instance, that a fake version of Matt Frederick writes
to us from Massachusetts, foreshadowing there, and responds to the
three of us the way that the real Matt would,
and we wouldn't know the difference. We'd be sending weird
(47:42):
memes and thumbs up and doing our inside jokes, and
it would already know all of those.
Speaker 2 (47:47):
Well, let's be honest. I mean, a lot of times,
text conversations and email conversations are already kind of in
shorthand, or in some kind of, a little bit more terse,
not mean, but just kind of like, we're trying to
get it done. That's why we depend on that method
of communication over talking, because it's a lot more boom,
boom, boom, let's get it done and move on. I
would think it would be easy-ish for an AI
to mimic those kinds of little quick exchanges and not
(48:10):
raise a red flag for you or I.
Speaker 1 (48:12):
That's a great point because even if they get something wrong,
we would just think, oh, Matt must.
Speaker 2 (48:15):
Be in a hurry typo or you know, fat finger whatever.
Speaker 1 (48:19):
Yeah, and auto correct.
Speaker 2 (48:21):
Autocorrect, exactly. Like, I've sent texts and, like, seen how
mangled they were, but be like, oh, he'll know what
I mean. Yes, like, let it ride.
Speaker 1 (48:30):
Yeah, yeah, yeah. Because, again, it goes to time. You know,
it would be alarmist for us to say this would
plausibly happen to the average person in the near future,
but for a high-value target like a politician, a celebrity,
a controversial business person, and so on, it's completely within
the realm of possibility. And speaking of alarmism, let's open
wide the doors of science fiction and do a bit
(48:51):
of speculation here, completely unfounded. Imagine a world where the
only communication you can trust becomes face to face in person.
Imagine a world where you are accused of a crime
you didn't commit, but you don't have a rock solid
alibi of your activities at the time of the alleged crime,
and surprise, surprise, they have you on camera committing it
(49:13):
and then confessing to it, and you cannot prove it
is not you would this mean that eventually video evidence
becomes inadmissible in court? Does this technology pose an existential
threat to the fabric of digital reality? Is it really
us talking to you right now?
Speaker 2 (49:30):
Yeah, man, I don't even know anymore.
Speaker 1 (49:32):
I don't know.
Speaker 2 (49:33):
And to answer your hypothetical question earlier: yes, I think
this is absolutely a threat. And we rely on things
like video evidence, but I could see, far enough removed
from this particular technological time and place, that it could be
something that is just a thing of the past, man.
You know what I mean?
Speaker 1 (49:49):
Yeah?
Speaker 2 (49:49):
Can you picture it? Like, I'm trying to think
of an analog, of something that used to be infallible
and now is completely up for grabs, like...
Speaker 1 (49:58):
I don't know, polygraphs.
Speaker 2 (49:59):
Yeah, exactly, a great example. I don't know, this is
a little bit more of an ephemeral example, but even
something like the press, or like the news. You know,
you used to be able to depend on
some level of rigor and truth in any kind of
news reporting you see. And now, as you say, because
of the Internet, we can see stuff that's completely generated
by artificial intelligence that tricks people all the time into
(50:22):
believing that it's, you know, God's truth.
Speaker 1 (50:24):
And it's insane. So, I wonder, you know, I have
questions for you too, Noel. I don't want to put
you on the spot, but this is, this is one
of the only conversations we can know is actually happening
between us, right?
Speaker 2 (50:37):
I mean, we are sitting here looking at each other
in the eyes. We are definitely both humans because as
far as I know, we don't have human stand ins
that are believable enough to trick either one of us.
Speaker 1 (50:50):
It's like in Terminator: the first ones, you could tell
the difference, right? So, do you have any
thoughts on the likelihood of this kind of technology progressing
or being used to disrupt the spread of reliable information?
Do you think it's a definite? Do you think it
might happen? Do you think it's a little bit sensationalistic?
(51:12):
What do you think?
Speaker 2 (51:12):
I think it absolutely could happen. I think it's another
one of those things that's kind of dangled out there,
like a piece of bait on a fishing hook, for
the American consumer to bite onto and be like, oh, this
will change my life for the better. This will make
it so easy for me to not have to make
hair appointments or schedule oil changes or what have you. Yeah,
(51:34):
and then, you know, once the buy-in happens, it starts.
It opens that Pandora's jar that you
keep talking about. So yeah, no, there's no doubt about it.
The more access we give our devices and the more
centralized our information is. And I guess by centralized, I
just mean within a particular service where it has access
to all of it. And that's what, that's what it takes, right?
(51:56):
Why does Siri suck? Okay, I just want to get
into this, because I want to rag on Apple for
a minute. Siri is not a good personal assistant, because
it doesn't learn. It doesn't keep track of what you say.
It's starting from scratch every time you ask it a question. Right,
This is Siri, Apple's personal assistant, which is one of
the first ones to hit the market but fizzled ridiculously
(52:17):
because of Google's Assistant and Amazon's Alexa and stuff. Because those
actually learn and are connected to the Internet and learn
your vocal patterns and learn your preferences and have access
to that bigger network, whereas Siri is just kind of dumb.
It doesn't really do that.
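Noel's complaint boils down to state: an assistant that accumulates what you ask versus one that starts from scratch every turn. A toy contrast, not how Siri or Google Assistant are actually implemented:

```python
from collections import defaultdict

# Toy contrast: a stateless assistant forgets everything between turns;
# a stateful one accumulates preferences it can use later. Illustrative
# only; no real assistant works exactly like this.
def stateless_answer(question: str) -> str:
    return f"Searching the web for: {question}"     # same reply every time

class StatefulAssistant:
    def __init__(self):
        self.preferences = defaultdict(int)         # remembered across turns

    def answer(self, question: str) -> str:
        for word in question.split():
            self.preferences[word] += 1             # learn the user's habits
        favorite = max(self.preferences, key=self.preferences.get)
        return f"Searching for: {question} (you ask a lot about '{favorite}')"

a = StatefulAssistant()
print(a.answer("movie times tonight"))
print(a.answer("movie reviews"))                    # now biased toward 'movie'
```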
Speaker 1 (52:34):
It can tell you stuff about Apple, it can.
Speaker 2 (52:36):
Tell you a lot of stuff about the history of
the company, and it can do Google searches for you.
But if you said, hey, Siri, what time is this movie?
It'll probably give you some tangentially related answers, but not
the exact answer you want.
Speaker 1 (52:49):
Time was invented shortly after The Big Bang.
Speaker 2 (52:52):
I say, Siri, tell me a joke. That's always fun.
But no, Google Assistant will do that, because it
learns your habits and figures out what kinds of
questions you're asking. So that's what I'm saying. When we
start getting into that where we're tapped into this larger network,
it does seem like a slippery slope. But yeah, people,
even the most paranoid of my friends, some of them,
(53:13):
are all about these home assistants. And it's the thing, this
conversation we always have where it's like, I'll give it
up if I can tell a little box to turn
on my lights for me.
Speaker 1 (53:22):
Ah. Yeah, I was given one of those things, and
I plug it in when I'm cooking and unplug it afterwards.
So it probably thinks that I am always cooking, or
just don't have power at my house, which I'm fine with,
But we want to hear from you, assuming, again, per
the earlier question, it's really us talking to you now.
(53:42):
What would you like to tell us? Are the robot
versions of us impersonating the biological versions of us? Do
you have a home assistant? Do you use this facial
recognition software that is so often marketed as a free
recreational pursuit? Do you believe that the future will be
fraught with things that make us question digital reality? Is
(54:07):
there a way to combat it? Should there be? You
can find us on Instagram. You can find us on Facebook.
You can find us on Twitter. You can find some
version of us. I feel like now we have to
say you can find some version of us on there.
You can also, of course, visit one of our favorite
places on the internet, our Facebook page. Here's where it
gets crazy, where you can see Matt, Noel, and I. Well, what
Speaker 2 (54:31):
Do we do there? Where on the internet?
Speaker 1 (54:34):
Here's where it gets great.
Speaker 2 (54:35):
Oh, all kinds of stuff. We lurk sometimes. Sometimes we
respond and hop around in the threads and post our
own little GIFs and memes, and we have a good
time in there. It's a lot of fun, and...
Speaker 3 (54:48):
That's the end of this classic episode. If you have
any thoughts or questions about this episode, you can get
into contact with us in a number of different ways.
One of the best is to give us a call.
Our number is one eight three three STD WYTK.
If you don't want to do that, you can send
us a good old fashioned email.
Speaker 1 (55:07):
We are conspiracy at iHeartRadio dot com.
Speaker 3 (55:12):
Stuff they Don't want you to Know is a production
of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.