Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to this week's classic episode. Let's play conspiracy Jeopardy.
The category is Google, the Pentagon, and AI.
Speaker 2 (00:08):
I don't even know what to say about this one, guys. Like, this is, again, this is twenty eighteen. This was a "we didn't even understand" episode.
Speaker 3 (00:19):
No, it's probably a really teachable moment for all of us.
We should go back and listen to the whole thing too,
and learn from our past selves. That's one of the
things that's weird about doing podcasts, you guys ever think about that? There's this weird, versioned history of us
being kind of varyingly behind and ahead of the eight
ball over the years, and sometimes it terrifies me to
(00:40):
go back and revisit it. So y'all out there, conspiracy realists, can do it for us.
Speaker 1 (00:44):
From UFOs to psychic powers and government conspiracies, history is
riddled with unexplained events. You can turn back now or
learn this stuff they don't want you to know.
Speaker 2 (01:08):
Welcome back to the show.
Speaker 4 (01:09):
My name is, my name is Noel.
Speaker 1 (01:11):
They call me Ben. We are joined with our super producer Paul "Big Apple" Decant, and most importantly, you are you.
You are here, and that makes this stuff they don't
want you to know. Today, we're going to talk about
a company called Google, commonly known as Google. That's its
street name. That's such a popular name, that's become a
(01:33):
verb in American English. They have a parent company, you could call it the real company, which is known as Alphabet. And we've talked about this a little bit in the past, but we want to, whenever we mention Google, establish that from the jump, at the top, because it's strange how
(01:54):
relatively obscure that fact remains. And now, while Google, Alphabet,
whatever you want to call it may not be the
largest company in the world in terms of capital or
territory or material possessions, it is undoubtedly one of the
most influential organizations on the planet, at least in terms
(02:15):
of its connection to human beings or the ways in
which it influences networks or organizations or webs of human beings.
Most people on this planet, literally most people, think about that, have heard of Google, even if they have somehow
not directly interacted with it, And arguably, on a slightly
(02:38):
more controversial note, most people have in some way to
some degree been affected by the actions of the US military,
whether that's something that happened generations ago in their own
country or an adjacent country, or to an ancestor, or
whether something is happening to them now as we record
this episode. And so today begins with a question. Spoiler alert:
(03:01):
It also ends with a question, and our beginning question is this: what happens when these groups, Google, Alphabet, whatever you want to call it, and the US military, or Uncle Sam, what happens when they join forces? Because they have. So, they have Voltroned up, and they have Voltroned
(03:24):
up in an, again, controversial way. It involves unmanned aerial systems,
also known as drones.
Speaker 4 (03:35):
Those are the ones that, like, deliver your Amazon packages, right?
Speaker 1 (03:37):
Yeah, and some variation of them. Yeah, they're commonly called drones,
of course, but unmanned aerial vehicles or UAVs have been
a long running trope in science fiction, and surprisingly for
many people, some version of this technology existed long before
the modern day. There wasn't a history rattling moment when
(03:59):
the first drone ever went airborne. And that's in large
part because it's very difficult for us to isolate exactly
when the early version of an invention ends and the
first modern versions begin. Matt, when you and I worked on the invention show Stuff of Genius, we ran into
this pretty often. Somebody comes up with a rough approximation
(04:23):
of an idea and then over a course of years, decades,
or even centuries in some cases, people continually make small
improvements until it becomes the thing we know. Most inventions
are not a Eureka moment in reality. I mean, they
may be a Eureka moment in terms of the idea
(04:45):
or the epiphany, but most inventions in reality, and when
we get to the hardware phases are going to be
created by people standing on the shoulders of giants.
Speaker 2 (04:57):
Yeah, it ends up being the person who makes it
marketable rather than just creates it.
Speaker 1 (05:02):
Oh good call, and a very sad and capitalist call. Right,
it's the person who popularizes it. So if we look
at the first modern version of a drone, an unmanned vehicle,
would it be one of those bombs in the eighteen
hundreds that was strapped to a balloon and then just
sort of put off like something bad will happen, We
(05:22):
can't steer it. There wasn't a person on that and
it was being used as a weapon. Or was it
the initial V one rockets that Germany deployed during World
War Two. In the early nineteen hundreds, military groups did
use drones. They had radio controlled versions that were just
for target practice. Engineers developed unmanned aircraft that again had
(05:43):
munitions of some sort. These weren't drones the way we
think of them in the current age. But there's you know,
there are other examples, like cruise missiles. These things were
the first cruise missiles, often called flying torpedoes. They were
not meant to return to base, and drones are meant
to function like very very smart, sometimes very very deadly boomerangs.
(06:08):
They go out, they come back, and somebody...
Speaker 2 (06:10):
Claps yeah, generally not the people being targeted.
Speaker 1 (06:15):
Generally, yes, yes, generally not. You are correct, my friend.
During the Cold War and the Vietnam War or the
series of conflicts collectively known as the Vietnam War here
in the US, Uncle Sam ramped up research on drones.
It was no longer just an interesting idea posed by
a few eccentric scientists and engineers. In fact, by the
(06:36):
time of the Vietnam War, unmanned drones flew thousands of
high risk reconnaissance missions, and they got shot down all
the time. But in that process, they saved the lives of human pilots who otherwise would have been aboard.
And I thought about I thought about you, Matt when
I was looking into this because another precedent for drones,
(06:58):
which we won't go into in this episode, another precedent for drones was weaponized bats. Did you ever hear about this?
Speaker 4 (07:06):
The ones meant to drop the little bombs?
Speaker 1 (07:07):
Yeah, well, yes, that's a version of it. Yeah, they
said bats can fly, we have access to them.
Speaker 2 (07:13):
Well yeah, I mean you could go back to carrier
pigeons as being a type of drone as well.
Speaker 1 (07:18):
Yeah, I mean there's not a person aboard.
Speaker 5 (07:21):
There's a really cool episode of the podcast The Memory Palace about the bats that's called Itty Bitty Bombs.
Speaker 4 (07:27):
It's quite good.
Speaker 1 (07:28):
Yeah, yeah, so check that one out for more information.
Around the same time, Vietnam War, it goes this far back. Around the same time of the Vietnam War, engineers started adding real time surveillance ability to drones, so cameras, right. Today
we look at I guess the closest thing we have
(07:49):
to a military version of the first drone would be
the MQ one Predator, which was first unveiled in nineteen ninety five,
which is weird when you think about it because that
was such a long time ago. And in two thousand
and two, the CIA used the MQ one Predator to
make its first successful kill of a quote unquote enemy
(08:09):
combatant in Afghanistan. Since then, since just two thousand and two,
this technology has grown by cartoonishly extreme leaps and bounds,
and it's currently on the bleeding edge of scientific advancement. Yeah.
Speaker 2 (08:22):
And if that MQ one Predator drone, that's the drone
that you have seen before, that's the one that is
in pictures all over the place, that is the I mean,
I guess being one of the first, it just became
iconic in that way. Just gonna want to throw that
in there.
Speaker 1 (08:34):
Yeah, it looks like a windowless commercial plane with an
upside down tail fin, right, two upside down tail fins.
Speaker 2 (08:43):
Yeah, it's kind of The shape is slightly slightly odd,
but yeah, it's it's pretty cool.
Speaker 1 (08:49):
And as these drones become more and more common in
theaters of war or in commercial industries like Amazon using them,
or recreational use, like you buy one when one is a gift, they're also doing the inorganic version of speciating. You know, speciating is when a relatively common
(09:09):
ancestor through generations and generations, produces different species of the
same template. Right, So these drones are becoming increasingly specialized.
They have varying ranges of abilities and odds. Are you've
seen one, right, We've all seen toy drones.
Speaker 4 (09:28):
The little quad copter guys.
Speaker 5 (09:30):
Yeah, and you gotta wonder too, like how does that
work patent wise? Like did someone come up with that
design and it was just generic enough to be copied?
Like I always wonder about things like that, like the
fidget spinner, Like did that guy just do a bad
job of patenting the original fidget spinner? Or is it
so generic that that doesn't even apply. I'm wondering that
that's the same thing about the design of those quad
(09:51):
toy drones.
Speaker 1 (09:52):
Entirely speculative, but it's probably at the very least a
series of patents. Yeah, they have to have multiple things in play,
I would imagine.
Speaker 5 (10:00):
Pro tip speaking of a play, if you're gonna get
one of those for your kids, get a good one
because the cheap ones you might think you got a deal,
they just don't fly, They suck.
Speaker 1 (10:08):
And they're battery vampires too.
Speaker 5 (10:12):
Oh, my god, that was the thing, Like we got one, like, oh,
it flies for about four minutes. Four minutes, tops. Like, that's not fun. Yeah, but what kind of batteries
do the big boys use? Are they fueled or are
they do you know that's classified?
Speaker 2 (10:28):
Yeah, sorry, we can't talk about it.
Speaker 4 (10:29):
You can't tell me.
Speaker 2 (10:30):
No, I thought we were pals. Google it all right.
Speaker 1 (10:33):
Oh yeah, so, on a serious note,
some of those power systems would vary, and by the
time we dive into one and this episode comes out,
it may have changed. Interesting, that's how quickly they're moving
on these. So we've seen these, right, and we've probably
(10:55):
seen photographs or videos of the larger versions of these tiny drones,
things like the Predator, like Matt just described. You can
find a photograph of it pretty easily. The differences are
stark and astonishing, and missiles are not the biggest difference.
The fact that they're weaponized. While that is incredibly dangerous
(11:17):
and possibly fatal, that is not the biggest difference. Some
of the bigger differences are going to be on the
inside of the drone. We're talking things like GPS, real
time satellite feeds, super complex microprocessors, and more, they're putting
brains on board of these things. And perhaps most disturbingly,
some drones are edging closer and closer and closer to
(11:39):
what's commonly known as artificial intelligence or technically known as
machine consciousness. And we're close to an old science fiction
question becoming a real question. Will the first true artificial intelligence, the first true machine consciousness, be a weapon of war?
Speaker 2 (12:00):
Yes, highly likely, definitely.
Speaker 1 (12:04):
Yeah, why is that, Matt?
Speaker 2 (12:06):
That's because innovation through the military is one of the
ways that innovations occur. The biggest innovations and inventions occur
because there's funding from a military somewhere, and it's not
like it's the military itself doing these things. It is
usually a private company or someone who's being given a grant,
(12:26):
a government grant to do it.
Speaker 5 (12:27):
We see this time and time again with many of
the stories that we've talked about, where the military is
also always like decades ahead of what consumers will ever
get their hands on, and it might be ten twenty
years before it trickles down into the public where maybe
we can have a you know, terabyte thumb drive, just
as a silly example, where they've had that technology probably
for a decade at this point.
Speaker 2 (12:48):
I would say the only way for it, not for
the first machine consciousness, true machine consciousness, to not be
a weapon, is for every private company on the planet
to say I'm not going to do.
Speaker 1 (12:58):
That, right, Yeah, for us all kumbayah and do some
hands across America on the tech sphere, right, or hands
across the world. So with this in mind, we're also
hitting on a larger phenomenon, this kind of kind of unfortunate.
We're a family show, but we also endeavor at our
(13:19):
best to be honest. The great leaps in technology are, not almost always, but the majority of the time, the great leaps in technology are driven by human vices.
A desire to win a war, a desire to have
more you know, social attention or more financial access than
(13:40):
a rival.
Speaker 2 (13:41):
Or how do we compress all of this?
Speaker 1 (13:43):
Porn. Right, that's the other one. That's the third one, a desire for, uh... because in a very very real and concrete and provable way, pornography is the reason that you or your parents had a VHS instead of a Betamax. It's the reason. Like we talked about this
I think previously. Yes, yeah, so often we all too
(14:06):
often we as a species are driven by non noble motivations,
but that doesn't mean that we're not smart. We are
a jar of very very smart, very belligerent cookies, and
we've been putting some of the smartest cookies of our
(14:26):
species into the search for artificial intelligence and machine learning,
a search that continues as we record this episode.
Speaker 2 (14:37):
So just a little while ago, in July of twenty seventeen,
there were these two researchers named Greg Allen and Taniel
Chan and they published a study which was called Artificial
Intelligence and National Security, and in this they put forward
goals for developing law, policies specifically, that would
(14:58):
put ideas forth on how do you deal with AI when it comes to national security, military spending? Like what should
we as a species do when it comes to these
two things colliding? And here are some of their findings.
They again said the most transformative innovations in the field
of AI are coming from the private sector and academic world,
not from the government and military. So currently, right now
(15:22):
AI is something that's happening outside of the realm of
the military, which in July twenty seventeen, yay, that's a
good thing, maybe because you know, there could be some
nefarious private sector humans as well, But that's point A.
The second one is that the current capabilities of AI
have quote significant potential for national security, especially in the
(15:44):
areas of cyber defense and satellite imagery analysis. And then
the last thing, I'm just going to read this whole
thing as a quote. Future progress in AI has the
potential to be a transformative national security technology on par
with nuclear weapons, aircraft, computers, and biotech, so some of
the most influential technologies in weaponry, in war. They're saying
(16:08):
AI is going to be the next big thing.
Speaker 1 (16:12):
And they're not just talking the talk, right.
Speaker 2 (16:14):
No, they are not just talking the talk at all.
According to the Wall Street Journal, the Defense Department spent
seven point four billion dollars on artificial intelligence related areas
in twenty seventeen. Seven point four billion dollars on AI.
Speaker 1 (16:31):
And you know, not to not to harp too much
on Eisenhower, but I believe he's the one who had
a quote that said every rocket the US builds represents
this many schools that we could have built but didn't.
And there are a couple of things, if we're being
entirely fair about that number, that we have to acknowledge.
(16:51):
First is that this is only the money we know of,
so seven point four billion, let's call that an at least, right,
And then on the other end artificial intelligence related areas.
That doesn't mean they're just building a silicon brain, right,
that there are many other things that we might not
(17:14):
associate directly with machine intelligence, that maybe function as support
or infrastructure. So this is all, This is the whole meal.
This isn't just the hamburger.
Speaker 2 (17:27):
Yeah, well, and we agreed. We should especially point out
that the Department of Defense budget for fiscal year twenty
seventeen was five hundred and forty billion dollars around that number.
Speaker 1 (17:39):
Still the biggest in the world. Yeah, and that's publicly acknowledged. Again,
I can't keep saying that enough. Publicly acknowledged.
Speaker 2 (17:49):
That's what they ask Congress for.
Speaker 1 (17:50):
Yes, it's true. Before I derail us, what do you
think should pause for a word from our sponsor?
Speaker 4 (17:56):
That seems wise.
Speaker 1 (18:03):
Here's where it gets crazy. We've painted a little bit
of the picture, right. We know this story now, fellow
conspiracy realist. It is a story of drones. It is
a story of private industry. It is a story of
government interaction, a story of war and a story of secrets.
(18:24):
But in this age of instantaneous information, we know more
than we would have known, say even two decades ago.
We know more about what's happening now. Similar to the
drones acquiring real time surveillance technology, we are able to
be more aware of surroundings in a distant land. And
(18:46):
that's how we know about something called... what's it called, Matt?
Speaker 2 (18:51):
It's the Algorithmic Warfare Cross Functional Team, also known as
Project Maven.
Speaker 5 (18:58):
And this was announced on April twenty six of twenty
seventeen in an internal memo from the Deputy Secretary of Defense Robert O. Work.
Speaker 2 (19:09):
Yeah, Bob O., he goes by Bob Work. But I just thought, yeah, Boba Work. So I guess we can just read parts
of this memo if you want to. It's kind of
hard to digest, that's the only thing. So maybe if we.
Speaker 1 (19:23):
Anyway, walk through parts, break it down, humanize it.
Speaker 2 (19:26):
Yes, let's let's do that. Let's do that. Here we go.
I'll be, I'll be the dry, I'll read the dry part here. Here we go. The AWCFT's first task is to
field technology to augment or automate processing, exploitation, and dissemination
for tactical unmanned aerial systems. Okay, okay, so good god,
(19:50):
Well let's keep going a little bit. So, so it's
gonna augment or automate stuff for UAVs or drones and
mid altitude full motion video in support of the Defeat
ISIS campaign, and this will help to reduce the human
factor's burden of full motion video analysis, increase actionable intelligence,
(20:14):
and enhance military decision making.
Speaker 1 (20:16):
So what that all means is the PED processing, exploitation,
and dissemination means that they're hoping again to get a
brain on board your local neighborhood predator or whatever drone
they end up using by the time you are listening
to this episode. And the processing part is going to
(20:37):
be the analysis, which often would be the human part
of the equation. The exploitation would be the ability to
take action on that analysis, right to use that knowledge,
and dissemination would be sharing the information, determining what information
is relevant, what is irrelevant, and where to send it. That's
(21:01):
PED. And the full motion video is similar to, the same thing, because it's, it's a little bit easier
to take a bunch of numbers that are sensed a
bunch of temperature variables, for instance, and tell a computer
or an automated system that whatever is within this range,
(21:23):
do action X. Whatever is outside of this range, do action Y.
Speaker 2 (21:26):
You know, even if the action is just alert
the human user, the person that's going to actually analyze
the thing.
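To make that "within this range, do action X" idea concrete, here is a minimal sketch in Python; the threshold values, sensor format, and function name are purely illustrative assumptions, not anything from the actual memo or the Maven software.

```python
def triage_readings(readings, low=18.0, high=42.0):
    """Flag sensor readings that fall outside the expected band.

    readings: iterable of (timestamp, value) pairs.
    Returns the out-of-range pairs so a human analyst can review them.
    """
    flagged = []
    for timestamp, value in readings:
        if value < low or value > high:
            flagged.append((timestamp, value))  # "action Y": alert a human
    return flagged


if __name__ == "__main__":
    sample = [(0, 21.5), (1, 55.2), (2, 40.0), (3, 12.1)]
    for timestamp, value in triage_readings(sample):
        print(f"t={timestamp}: reading {value} out of range, alert analyst")
```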
Speaker 1 (21:34):
Yeah, absolutely absolutely, And that's tougher with video, right because
there are many many more factors at play. Would you
say that sounds correct?
Speaker 2 (21:43):
Absolutely? It just depends on what kind of sensors they're
using on the drones. If it's you know, is it
a heat based imaging system, is it just a you know,
I mean, what are you looking at? Essentially? What is
the data that you're looking at? And that's really hard
to tell because it's going to change depending on
the model of drone and what they're you know, deploying.
(22:04):
One of the really interesting things here
in my mind is that, you know, we're talking about
a single drone and having an onboard system, but they're
also talking about having a system that can analyze the
footage from all the drones that they have deployed in
a single space and then have the machine be able
to take all that data in and tell you what's
(22:26):
going on, and then send that whatever the algorithm decides
needs to happen, disseminate all of that information out to
the entire team that's also there, so you really have that.
One of the things we talked about recently in another episode,
a full scope picture of the battlefield essentially, so everyone
at all times sees exactly what's happening on the battlefield.
Speaker 1 (22:47):
As close to an omniscient observer as possible. Yeah, as well,
we used to say humanly possible, but that's no longer
the case, right.
Speaker 2 (22:55):
Yeah, exactly, So let's keep going on here a little
bit with what the memo said. This program will initially
provide computer vision algorithms for object detection, classification and alerts
for full motion video PED. Further sprints will incorporate more
advanced computer vision technology. After successful sprints in support of
(23:16):
intelligence surveillance reconnaissance, this program will prioritize the integration of
similar technologies into other defense intelligence mission areas, so it's
not just for this single drone operation that they're gonna
deploy it to. It's going to be for bigger things.
It goes on to say, this program will also consolidate
(23:38):
existing algorithm based technology initiatives related to mission areas of
the Defense Intelligence Enterprise or DIE, including all initiatives that
develop employ or field artificial intelligence, automation, machine learning, deep learning,
and computer vision algorithms. So really they're talking about everything
(24:00):
in the future for military applications.
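The memo's "object detection, classification and alerts for full motion video PED" boils down to a loop like the one sketched below: run frames through a generic image model and queue anything confident enough for a human to look at. This uses the publicly available Keras MobileNetV2 classifier purely as a stand-in; the labels of interest and the threshold are invented for illustration, and this is not the actual Maven code.

```python
# Hedged sketch: flag video frames for human review using a stock,
# publicly available classifier (MobileNetV2 trained on ImageNet).
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")
LABELS_OF_INTEREST = {"pickup", "jeep", "tank"}  # illustrative only


def flag_frames(frames, threshold=0.6):
    """Yield (frame_index, label, score) for frames a human should review."""
    batch = preprocess_input(np.stack(frames).astype("float32"))
    preds = decode_predictions(model.predict(batch), top=1)
    for i, [(_, label, score)] in enumerate(preds):
        if label in LABELS_OF_INTEREST and score >= threshold:
            yield i, label, float(score)


# Three random 224x224 RGB arrays standing in for real video frames.
frames = [np.random.randint(0, 255, (224, 224, 3)) for _ in range(3)]
for idx, label, score in flag_frames(frames):
    print(f"frame {idx}: {label} ({score:.2f}) -> queue for analyst")
```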
Speaker 1 (24:03):
So beyond far beyond drones. Oh yes, these could also
be submarines for instance. These could be satellites, these could
be space shuttles.
Speaker 2 (24:11):
It could be surveillance, just video camera surveillance that's just
on the ground either, right, you know, is this.
Speaker 5 (24:18):
The kind of stuff where eventually you could program a
drone or some kind of killing machine with parameters that
are like legally sanctioned and you can say, okay, search
and destroy X target or this type of target.
Speaker 1 (24:34):
That's the fear. The potential is there, The potential is there. Absolutely,
So essentially, what they're saying here is that DARPA and
the Pentagon have already poured tons of money into this
automated collection of data. And they're not alone. Other members
of the Alphabet Soup Gang, which is a very endearing
name for them, like the Apple Dumpling Gang, right, the
(24:54):
Alphabet Soup Gang has other members, of course, NSA
and so on. They have also put forth tons of
money and time into automating the collection of data, and
the next step is, is discovering a means of
automating the analysis of data. And the step after that,
the especially spooky one, is enabling these unmanned machines to
(25:17):
immediately act in real time on the results of their
onboard analysis. So to the question Noel posed, the answer would
be yes, they want to have at least the capability
to put judge, jury, and executioner on an unmanned vehicle.
This doesn't mean they'll do it. A lot of tech
(25:38):
companies do have problems with it, or at least with
actualizing that potential.
Speaker 5 (25:45):
Yeah, you even see problems with the self driving cars.
I mean, as much, as much R and D as has gone into those, they're certainly not infallible.
Speaker 4 (25:55):
And they get into fender benders and make mistakes.
Speaker 2 (25:58):
Absolutely. Let's just have one more little quote here,
and this is from the chief in charge of this
Algorithmic Warfare Cross Function Team or Project MAVEN, and he's
speaking specifically to what they hope to achieve in the
near term with this project. And he says, quote people
(26:18):
and computers will work symbiotically to increase the ability of
weapons systems to detect objects. Eventually, we hope that one
analyst will be able to do twice as much work,
potentially three times as much as they're doing now. That's
our goal. So in the short term, somebody sat down
with him, had a meeting, and we want to increase
(26:39):
just basically what our analysts can get done, and we're
gonna save money on that end, We're gonna save time
and we're gonna be more efficient. That's what this whole
project is about, at least in his eyes or his
PR thing that he got to say. So what we're
talking about with this project so far is just what
it means to achieve what it wants to achieve. We
haven't discussed how it was going to achieve all of this,
(27:03):
and that is where we have our friend Google.
Speaker 5 (27:08):
Yes, Google, Google is nobody's friend. If Google was an
ice cream flavor, they'd be pralines and... Google.
Speaker 2 (27:17):
Yes.
Speaker 4 (27:18):
And I can't say that word on the podcast. Every
time you say Project Maven, I think it's Project Mayhem.
Speaker 2 (27:22):
Yeah, slightly different. So we know here that both the
Defense Department and Alphabet and Google are saying that this collaboration,
them working together in any way on this at all,
is all about AI assisted analysis of drone footage. That is,
that is what they're saying. That is the party line
on both sides. The PR that is put forth is everything's cool.
(27:44):
We're just working on looking at video guys.
Speaker 1 (27:46):
It's cool, right, right, specifically through something called TensorFlow AI.
Speaker 2 (27:56):
Yeah, Noel, you looked into this, right. What is TensorFlow?
Speaker 5 (28:00):
This is from the website of TensorFlow trademark. I'm gonna
do it like I'm doing an ad read. TensorFlow is
an open source software library for high performance numerical computation.
Its flexible architecture allows easy deployment of computation across a variety of platforms, CPUs, GPUs, TPUs, and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google's AI organization, it comes with strong support for machine learning and deep learning, and the flexible numerical computation core
learning and deep learning, and the flexible numerical computation core
is used across many other scientific domains.
Speaker 4 (28:34):
Yep, that one.
Speaker 2 (28:36):
Yeah, that's the what.
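For anyone wondering what that description looks like in practice, here is a toy example, assuming TensorFlow 2.x: fitting a straight line to noisy data with the library's automatic differentiation. It is a hedged, minimal sketch of the "numerical computation" idea and has nothing to do with Maven specifically.

```python
# Toy TensorFlow example: fit y = w*x + b to noisy data by gradient descent.
import tensorflow as tf

# Fake data drawn from y = 3x + 2, plus a little noise.
x = tf.random.uniform([100, 1])
y = 3.0 * x + 2.0 + tf.random.normal([100, 1], stddev=0.05)

w = tf.Variable(0.0)
b = tf.Variable(0.0)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for step in range(500):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * x + b - y))  # mean squared error
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))

print(f"learned w={w.numpy():.2f}, b={b.numpy():.2f}")  # should land near 3 and 2
```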
Speaker 5 (28:37):
No, seriously, TensorFlow can actually feel kind of like totally
future science fiction kind of stuff for people who are
not already familiar with how machine learning works. But there's
some pretty awesome introductions primers out there if you would
like to learn more, particularly a YouTube video.
Speaker 2 (28:55):
Yeah, it's about forty five minutes, essentially, on how
it functions. But it's good, it's interesting, it's over my head.
Speaker 1 (29:03):
And the cool thing when I found that one the
cool thing about that guy is he's If you watch
the video, he says, if you have any questions, you
can email him directly.
Speaker 2 (29:13):
Oh my gosh, I don't really.
Speaker 1 (29:14):
Yeah, I don't know if he knew that his email
address was going out on YouTube, but he seemed pretty approachable.
Speaker 2 (29:20):
And that video is on the Coding Tech YouTube page
and it's called TensorFlow one oh one. Really awesome intro
into TensorFlow.
Speaker 4 (29:28):
Super awesome.
Speaker 5 (29:29):
But yeah, but forty five minutes is probably about the
shallowest deep dive you're going to get into the subject.
And I mean that is a glowing endorsement of the
quality of this this walkthrough.
Speaker 1 (29:38):
Yeah, so what exactly has TensorFlow done so far?
Speaker 2 (29:42):
So in the research for this show, we came across
an open letter that was written from the International Committee
for Robot Arms Control, and they go into just some
of the details of exactly what is happening here the
collaboration between Google and the Department. So let's kind of
look at this. They were looking at an article from
(30:06):
Defense One in which they note that the Joint Special
Operations Forces they've conducted trials already using video footage from
this other smaller UAV called ScanEagle. It's a specific
surveillance drone, and the project is also slated to expand
to even quote larger medium altitude predator and reaper drones
(30:29):
by next summer. So these are the ones we spoke
about in the beginning. These are the ones that are
actively functioning in Syria and Afghanistan and other in Iraq
and other places like that, and eventually to this thing
called Gorgon Stare, which sounds amazing, right?
Speaker 4 (30:50):
Sounds scary, yeah, and it.
Speaker 2 (30:53):
Turns it might. So this is these are specific sensors
and it's a technology package that's been it's been used
on these MQ nine Reapers, which is just the newest iteration.
Yeah yeah, and man, it sounds really scary. But you
(31:13):
can learn more about this right now. If you search
for Gorgon, G-O-R-G-O-N, Stare, and maybe drone or Predator,
that's probably the best way to look into it. And
there's all kinds of information that's been officially released on
this program already. So okay, so we know that this
thing's been used in these smaller projects. Now it's going
(31:35):
to be moved on to these Predator and Reaper drones,
and they're thinking that it's going to get expanded even further.
And I don't even we don't even know what that
expansion could be at this point, because the drone technology
is growing. It's just a lot of it is classified.
Speaker 1 (31:54):
Right, Yeah, a lot of it is classified, and as
we've established earlier in the episode, that's really common.
In fact, a lot of it will probably be classified
for a number of years.
Speaker 2 (32:07):
Well yeah, if you think about it this way. So
the Gorgon Stare, it can allegedly, and I just say
this because I haven't actually seen it, but it uses
a high tech series of cameras that can quote view
entire towns.
Speaker 5 (32:20):
Like what like Google Earth style like yeah, that kind
of zoom, yeah, okay.
Speaker 2 (32:24):
Active running looking at an entire town. And then if
you apply this machine learning to it and you can
see every human being walking around, and then the technology
gets more and more powerful, then you could really just
have a complete surveillance state from above.
Speaker 5 (32:38):
Yeah, that's that's pretty scary, Like you can see them
in their homes. Does it have some kind of, like,
heat vision that works from afar?
Speaker 2 (32:46):
That's a great question that I don't know the answer to.
I certainly hope not. Maybe it would be good for
a military application, but that's terrifying.
Speaker 5 (32:54):
That's the problem, right, Like you can't put this stuff
back in the box. Like, even if it starts off
as a military application, it's going to eventually get in
the hands of Not to say that the military has
our best interests at all times, but it could make
its way into a more nefarious use.
Speaker 1 (33:12):
If we were to put a list of all the
technologies that expanded past their intended use on one side,
and a list of all the technologies that people agreed
not to pursue on the other side, one list would
be very very long. I mean, we for a species
that loses cities and even entire civilizations, we're pretty good
(33:34):
at finding and replicating technology, and we don't like to
lose it. We did an earlier episode on things like
so called Damascus steel or Greek fire. But looking into that, it was very difficult for us to find examples
of technology that was truly lost in the modern day.
Once Pandora's jar is unscrewed, then it's Katy bar
(33:58):
the door. Yeah, a simplistic statement, I argue.
Speaker 4 (34:01):
Those badgers get out of that bag.
Speaker 1 (34:04):
They will eat you alive, and you can't get them
back in.
Speaker 2 (34:07):
No, man, Well, here's the thing. The badgers with this
project badgers in the bag. They don't even know where
the other one is right now. They're so far gone
from each other. Because this is where you get into
the whole targeted killing thing that was really big in
what two thousand and seven, two thousand and eight as
we got on through the Obama administration, where drones were
(34:30):
being used more and more to target mostly males, and
males were considered if you're of a certain age, you
could kind of prove that this male was of a
certain age even from a drone, that person would be
considered an enemy combatant in certain aspects or in certain ways.
And if you're using these algorithms to just figure out
(34:50):
you know, if you can just figure out it's a
male in in you know, above a certain age. Then
that's what Google is helping them do is figure out
how to kill people from afar that are males of
a certain age. Right.
Speaker 1 (35:04):
And this goes into a quotation that you found, Matt,
that gave me some chills.
Speaker 2 (35:11):
Oh yeah. A quote from Eric Schmidt, who was who
up until last year or the end of last year,
he was the executive chairman at Google and Alphabet,
he says, there's a general concern in the tech community
of somehow the military industrial complex using their stuff to
kill people incorrectly.
Speaker 4 (35:32):
Hold on there, how do you kill people incorrectly?
Speaker 1 (35:34):
That's, that's the thing. Yeah, that's what gave
me chills, Matt, when you were originally talking about this,
the idea of killing people incorrectly. The implication there is
frightening because Schmidt is jumping straight across and completely sidelining
the recognition that maybe the tech community doesn't want to
(35:57):
kill people in general.
Speaker 4 (35:58):
Is that just linguistic jiu jitsu kind of it's a bit.
Speaker 1 (36:01):
Yeah, absolutely, yeah, it's a bit.
Speaker 2 (36:03):
And that just so you know, that was a quote
from a keynote address he was giving at the Center for a New American Security Artificial Intelligence and Global Security...
Speaker 1 (36:11):
Summit, right before the panel on how to correctly kill people.
Speaker 2 (36:15):
Yeah, it was in November of twenty seventeen. Yeah, it
was right before that panel.
Speaker 4 (36:19):
You shoot him in the face, that's how you do it, and make eye contact.
and make eye contacts.
Speaker 1 (36:24):
Huh yeah.
Speaker 4 (36:24):
Yeah.
Speaker 1 (36:25):
So the situation then seems on the cusp of something
a rapid evolution, perhaps an evolution that's already occurring. And
we've talked about the motivations of the military, we've talked
about the technical aspects, we've talked about what the suits
and the generals want. But what about the employees, What
(36:48):
about the people who are actually doing the work. As
we know, often those are the people who are the most.
Speaker 2 (36:53):
Ignored. A lot of them balked, right? Yes, and we'll
learn about that right after a quick break. Oo.
Speaker 1 (37:04):
So to bring the focus on the employees, the ones
who are actually out there writing the code that you know,
many times they probably won't get credit for, we need
to look at their own internal activism. See Google and
Alphabet employees were well aware of these moves far before
(37:24):
the official announcement, way before twenty seventeen. Of course, they
know this stuff is happening because they're working on things
like this. Behind the scenes.
Speaker 2 (37:32):
Might be compartmentalized, but they're working on things. Sure they
can you know, employees talk.
Speaker 1 (37:37):
Sure. For instance, do you remember that terrible sci fi
series The Cube, guys? We...
Speaker 2 (37:45):
Do the movie?
Speaker 1 (37:47):
I think there are three?
Speaker 4 (37:48):
There was one of them was Hypercube. I remember that.
Speaker 5 (37:51):
I like the first one though, I quite enjoyed it.
But yeah, it's it's it's shlock, but it's fun.
Speaker 1 (37:55):
But the thing about it is it's a bit of
a cautionary tale because the premise of The Cube spoiler
alert is that a bunch of different experts were paid
to research and construct specific components of what would later
become this death trap, and they didn't know that what
(38:16):
they were building would be used in this certain way.
They thought they were just making heat sensors, right, they
thought they were just making hatches. They thought they were
just making lasers, Yeah, laser grids.
Speaker 5 (38:27):
The first scene the guy actually gets cubed by a
laser grid in a cube in a cube. That's pretty mad.
And the thing too about that is like, is it
just for funzies? Like you don't really know, Like why
why are they in there? Is this just some like
rich a hole. It's just trying to like, you know,
you don't find out.
Speaker 2 (38:48):
That, yeah, you will, really, you gotta keep going.
Speaker 4 (38:50):
We got a cue. How far? How far? How deep
does this rabbit hole, this rabbit cube go?
Speaker 2 (38:54):
My friend, Well, somebody's going to make another one, and
we're gonna find out.
Speaker 4 (38:56):
You think some of the definitive one.
Speaker 1 (38:58):
Sure, it'll probably be a prequel or something that's cool.
You can also, you know, I'm sure read spoilers on
the wiki or online in a forum somewhere.
But luckily, at least for their own ethical well being
or their ability to sleep at night, many of
the Google and Alphabet employees were able to communicate with
(39:20):
each other. They were aware of the implications of what
would happen if all this stuff was assembled together like
the Avengers or Captain Planet and the Planeteers, so they
took action. First, they gathered to write an internal petition
requesting the organization pull away from this agreement with Uncle Sam. Petitions,
(39:42):
it should be said, are not inherently unusual at Google,
but this one was particularly important and particularly crucial to
the future of the company because first, it was created
with the knowledge that this program, this cooperation with Project
Maven is only a very small baby step in what
management hopes will be a long, elaborate staircase, a much
(40:07):
larger series of cooperations. You see, Maven is not a
one off thing. It's not its own it's not its
own centerpiece. It is a pilot program. It is taking
a concept and running it around the block to see
how it drives. Secondly, Google employees believe that this project
is symptomatic of what they see as a larger and
(40:28):
growing problem in Google. They say the company is becoming
less and less transparent across all facets, public facing,
government facing, employee facing, user facing, and they've even shed
their old famous motto don't be evil.
Speaker 2 (40:49):
Yeah, but now if you go to let's see the
Google dot com slash about slash our dash company,
it says this is what their mission statement is now
quote organize the world's information and make it universally accessible
and useful, and maybe be a little evil if needed.
I don't see that written anywhere.
Speaker 5 (41:10):
I think it's implied. I think getting rid of that
original mission statement and getting into these kind of deals
implies such things.
Speaker 1 (41:19):
They do have, on their Code of Conduct, they
made some changes that have a wonky timeline about them too,
Because there's an article on Gizmodo by Kate Conger wherein
the author finds the section of the old Code of
(41:40):
Conduct that was archived by the Wayback Machine on, and listen to the date carefully, April twenty first, twenty eighteen, that still has all that don't be evil mentioned. Yeah, right,
And then the updated version that was first archived on
May fourth, twenty eighteen took out all but one of
(42:01):
those mentions of don't be evil, and the Code of
Conduct itself says it has not been updated since April
fifth of twenty eighteen, So the timeline doesn't match up.
User error. Possibly hmm, possibly.
Speaker 2 (42:14):
Wayback Machine, what are you doing over there?
Speaker 1 (42:16):
Yeah? Did they get to you, wayback machine? Regardless of
you know, whether you feel that this is just a
series of unconnected dots being forced into a larger perceived pattern,
whether you think people are practicing confirmation bias, or whether
there really is something rotten there in Silicon Valley, the
(42:37):
fact of the matter is you can read the full
text of the petition online and find out what the
Google employees themselves thought. We have a few highlights here.
We cannot, they say in the petition outsource the moral
responsibility of our technologies to third parties. Google stated values
make this clear. Every one of our users is trusting us,
(42:58):
never jeopardize that. Ever, this contract referring to MAVEN puts
Google's reputation at risk and stands in direct opposition to
our core values. Building this technology to assist the US
government in military surveillance and potentially lethal outcomes is not acceptable.
Recognizing Google's moral and ethical responsibility and the threat to
(43:21):
Google's reputation, we request that you cancel this project immediately, draft, publicize,
and enforce a clear policy stating that neither Google nor
its contractors will ever build warfare technology.
Speaker 5 (43:35):
And a writer for Engadget actually reached out to
Google and they were provided with the following statement.
Speaker 4 (43:42):
Quote.
Speaker 5 (43:42):
An important part of our culture is having employees who
are actively engaged in the work that we do. We
know that there are many open questions involved in the
use of new technologies, so these conversations with employees and
outside experts are hugely important and beneficial. MAVEN is a
well publicized DoD project, and Google is working on one
part of it, specifically scoped to be for non offensive
(44:03):
purposes and using open source object recognition software available to
any Google Cloud customer. The models are based on unclassified
data only. The technology is used to flag images for
human review and is intended to save lives and save
people from having to do highly tedious work.
Speaker 4 (44:20):
I'm just gonna finish. I think it's worth it. Any
military use of machine learning naturally raises valid concerns. We're
actively engaged across the company in a comprehensive discussion of
this important topic and also with outside experts as we
continue to develop our policies around the development and use
of our machine learning technologies.
Speaker 2 (44:39):
So basically, we're gonna keep making this project. Sorry, guys.
Speaker 4 (44:43):
Yeah, it's cool, but it's okay.
Speaker 1 (44:45):
Because we're going to have a quote unquote healthy conversation
about it. Here's what happens next. As you said, Matt, Google refuses to back away from this project. They're still going to go through. Noel, as, as we can glean from
the statement you read, they say it is going to
be nonviolent and there's no spooky stuff going on. It's
all unclassified. Over three thousand engineers signed this petition and
(45:11):
a dozen resigned. More may resign in the future. This
resignation wave is happening as we record this, and it's
due to a coldly fascinating moral quandary. It's this, how
responsible are these engineers for the applications of their inventions?
Is for example, the creator of the Winchester repeating rifle
(45:33):
guilty for the deaths caused by that weapon. His wife
thought so, and that's why she built a crazy mansion
out west, which we still, believe it or not, have
never visited.
Speaker 4 (45:42):
Oh man, I really want to go.
Speaker 1 (45:43):
Yeah, I think we should absolutely go. Paul, would you
go with us if we If we go to the
Winchester mansion, we got we got an ardent thumbs up
from him, they both.
Speaker 4 (45:51):
Paul's got some some beefy thumbs Yeah.
Speaker 2 (45:54):
You go into all the rooms first, Paul.
Speaker 4 (45:55):
Yeah, it's just a test for traps.
Speaker 2 (45:58):
Yeah. No.
Speaker 5 (45:59):
But it's also like Oppenheimer, right? He famously had
serious qualms about the way his technology was used in
terms of being weapons of mass destruction.
Speaker 1 (46:10):
And Einstein also, tortured by the... He didn't build a
physical thing, but he was tortured by the idea that
his own realizations had led to this. So with this
in mind, if an engineer builds software for the purpose of, say,
more efficiently delivering payloads to locations in near Earth orbit,
(46:31):
and that same software is later used to deliver payloads
of bombs to people that the drone's owners don't care for, is that engineer then responsible for the ensuing damage and
or death. It's a quandary that a lot of people
are having a hard time answering. To go back, Noel, to something you mentioned earlier about the dilemmas of
(46:55):
autonomous vehicles. One of the current nigh unanswerable questions, or at least one that hasn't been answered yet, is what
happens if there is an imminent accident that's going to
occur and you're in an autonomous vehicle. Does the vehicle
swerve and hit a pedestrian? Does it prioritize the person
(47:19):
in the vehicle their life over the life of a
person who deserves to live just as much but happens
not to be in the vehicle, And if so, who
is responsible? Is it the person who is in the
vehicle for not taking control if they could. Is it
the company that manufactured the vehicle? Is it the engineer
who made the software that made the vehicle make that choice.
(47:41):
The trolley problem of the old thought exercise becomes increasingly concrete,
increasingly dangerous, and increasingly complicated, and frankly, we're not equipped
to answer it at this point.
Speaker 5 (47:52):
No, it's it's it's it's an utter conundrum because what
you know to be that programmer who gets to make
that decision, you're essentially playing god, aren't you.
Speaker 4 (48:02):
It's like what life is more valuable? You know? Is
it our customer or is it the pedestrian.
Speaker 5 (48:08):
There's really no way of properly answering that without making
a serious judgment, call a hard and fast choice.
Speaker 1 (48:14):
Right, Yeah, you could bake in something where it's just
an absolute calculation of the number of lives. So, for instance,
in that case, if there is a crowd of five
people who are gathered for their Paul Decant fan club meeting,
which inexplicably takes place on the sidewalk by a busy road.
(48:35):
If those five people are gathered for that meeting and
there's only one person in the car, does the car say, Okay,
we'll just pop the airbags and hope this one person survives,
but we're not going to kill five people for one
And then what if there are two people in that
car and one of them is pregnant. And what if
the five people on the sidewalk celebrating with the Paul
(48:55):
Decant fan club, what if they're all in their seventies
right, and past reproductive age, generally speaking? You know
what I mean? This this is the kind of stuff
that we have not yet, as far as we know,
learned to program for.
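A deliberately naive sketch of the "absolute calculation of the number of lives" Ben describes; every input and name here is hypothetical, and the point is only how quickly such a rule demands value judgments (occupants versus bystanders, age, and so on) that nobody has agreed on how to encode.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    people_at_risk: int


def choose_outcome(options):
    """Naive rule: pick whichever option puts the fewest people at risk."""
    return min(options, key=lambda o: o.people_at_risk)


options = [
    Outcome("stay the course toward the crowd on the sidewalk", 5),
    Outcome("swerve and risk only the vehicle's occupant", 1),
]
print(choose_outcome(options).description)
# The rule says nothing about age, pregnancy, or who "deserves" priority,
# which is exactly the gap the hosts are pointing at.
```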
Speaker 5 (49:10):
Then before you know it, you've got you know, power
hungry cars mowing down old ladies because they've you know,
they've they've lived their life.
Speaker 4 (49:18):
They're expendable.
Speaker 1 (49:19):
But like, what what about a school bus? Does a
school bus, based on the number of kids it has,
does that have just immunity to always prioritize the children
on the bus. It's very politically difficult to argue against.
Speaker 2 (49:31):
That, oh man man.
Speaker 1 (49:35):
Matt leaned back for that.
Speaker 2 (49:36):
Oof, Yeah, because of what happens when a public bus
and then a school bus are both operating and there's
a crash imminent.
Speaker 1 (49:44):
I don't know, because, on MARTA, most of the buses
are empty, Oh shade, It's true, It's true.
Speaker 5 (49:50):
What about the streetcars? The streetcar's even emptier. Empties.
Speaker 1 (49:57):
There's there's not even a driver.
Speaker 2 (49:59):
Yeah. At this point, the Marta buses are just hopping
on the trolleys.
Speaker 4 (50:02):
This is some serious Atlanta inside baseball.
Speaker 1 (50:05):
It is true. Learn about our public infrastructure debacles. We're
working on it. We're doing our best things.
Speaker 4 (50:11):
It's gonna be called it's gonna be rebranded to the atl.
Speaker 2 (50:15):
WHOA, that's cool.
Speaker 1 (50:16):
Yeah, nothing weird about that, nothing forced about that. Right.
So Google is unfazed. They're not only going to continue
this project, which we should also say is not super
super big. It's publicly described as being worth a minimum
of nine million dollars. That's a lot of money to
normal people. That's not a lot of money to Google.
(50:38):
But they are vying for additional projects within this military space,
and this is not unprecedented. IBM has a long standing
relationship with the US military on a number of fronts
and also cough cough, Nazi Germany cough cough.
Speaker 4 (50:54):
Yes, did you guys know that Hugo Boss designed the
Nazi uniform?
Speaker 1 (50:57):
Yes?
Speaker 4 (50:57):
I did not know that until yesterday. Shock.
Speaker 5 (51:01):
What a great pr team they must have to still
be around after that.
Speaker 1 (51:05):
Seriously, well, I mean there were a lot of German
companies that.
Speaker 4 (51:09):
Were for sure Volkswagen, even I think was interesting.
Speaker 1 (51:13):
On an additional note, IBM itself may be drawing an
ethical line. They have signaled that they will not create
intelligent drones for warfare, and in a blog post they had
recently IBM said they're committed to the ethical and responsible
advancement of AI technology and that AI, or machine consciousness,
should be used to augment human decision making rather than
(51:35):
replace it. So that's similar to the argument that automated
vehicle systems for consumers should be doing a lot of
assisted driving, but there should always be a human behind
the wheel.
Speaker 2 (51:48):
That's probably a good call International Business Machines.
Speaker 1 (51:52):
That's true. Oh, that's true. And of course shout out
to everybody who listened to our other episodes on the
evil of machine consciousness, And personally I prefer that term
to AI because what why is it artificial?
Speaker 2 (52:09):
Right?
Speaker 1 (52:09):
Why are we so special?
Speaker 2 (52:11):
Yeah, And at this point I'm just going to urge
everybody to go through and read that open letter that
was written by the International Committee for Robot Arms Control.
There are some pretty terrifying things that they go into there,
specifically about the slippery slope that Project Maven represents. So
I would go and read that. You can find it
(52:31):
Robot Arms Control Open Letter.
Speaker 1 (52:34):
I when you first showed this to me, Matt, I
was cartoonishly tickled because I originally read the title as
the International Committee for Robot Arms.
Speaker 2 (52:44):
Yeah.
Speaker 1 (52:44):
Yeah, and I didn't read the control part. At the end, Man,
it was a long day. So what do they say, though,
what's one thing that really stood out to you in
this letter?
Speaker 2 (52:57):
Well, here's the quote. We are just a short step
away from authorizing autonomous drones to kill automatically without human
supervision or meaningful human control. Right? That alone. Just you know,
if you arm these weapons, these flying autonomous weapons, with
the ability to make those decisions, yeah, that's a target
(53:18):
that's not a target, that is definitely a target kill.
Speaker 1 (53:20):
What happens if they get hacked as well.
Speaker 2 (53:23):
Or if they just are just operating someone loses control
somehow just however.
Speaker 1 (53:30):
Or if they're operating independently somewhere and they have some
sort of cognitive leap, and they say, you know, the
best way to achieve the objective is to get rid
of everyone. That would almost never happen. We're talking the
odds of winning the lottery eight times in a row.
Speaker 4 (53:46):
We're talking about like a terminator scenario, right.
Speaker 1 (53:48):
Some Skynet stuff for now that is still relegated to
the realm of dystopian science fiction.
Speaker 2 (53:56):
But they do make another really great point. I just
want to say here. They are deeply concerned about the
possible integration of Google's data, the data that they have
on everyone on you, probably massive data set, integrating that
with this military surveillance data and combining that and applying
it to this targeted killing notion.
Speaker 4 (54:17):
So you can cross reference.
Speaker 2 (54:19):
Yeah, and this is a you know, these aren't just
some Joe Schmoes getting together and saying, hey, we need
to write this letter, you guys. These are some serious
engineers and academics and people who are working in these
fields saying like raising a flag and saying no, no, no, no, no,
we got to watch this.
Speaker 1 (54:36):
We know where this could go. Yeah, and now we
draw this episode to a close, and we don't have
a solid answer or prediction. We're right here with you, folks,
unless you are working directly for Project Maven, right, we
(54:58):
don't know. No one knows where exactly we are going, to paraphrase Gene Wilder and Willy Wonka.
Speaker 5 (55:05):
Lest we forget to mention that Google is also obviously
doing some super innovative, fascinating things that are quite good
for humanity. Potentially, they're using machine learning to develop testing
software that can actually find different complications related to diabetes
and also early signs of breast cancer, and the FDA,
(55:29):
according to this article from Wired, is already in early
stages of approving AI software that can help doctors make
very important life or death medical decisions. And this is
from an article called Google's new AI head is so
smart he doesn't need AI. About this guy, Jeff Dean,
(55:49):
who is a big part of a lot of Google's
AI innovations.
Speaker 1 (55:53):
That's a great point because we can't we cannot forget
how large and varied Google and Alphabet's organizations are. There's
Google dot org as well, where you can learn about
how they're using data to uncover racial injustice or building
open source platforms to translate books for disadvantaged kids. It's
(56:14):
absolutely right. I really appreciate bringing it up because they're
not just this solely evil thing, right, And sometimes those
aims of these these large departments within Google may even
contradict one another.
Speaker 5 (56:27):
And you got to think, too, like an argument, maybe
that a big high muckety-muck at Google might make about
getting involved in something like this is like, well, if
it's not us, you know, and we're obviously going to
handle it with thoughtfulness and care. If it's not us,
it's going to be somebody else who might do a
less good job, you know, and not think about the ramifications.
(56:49):
So you know, even that statement that we read earlier,
it was pretty measured.
Speaker 4 (56:53):
I thought it was actually not bad.
Speaker 1 (56:55):
Yeah, just kill people correctly, right.
Speaker 5 (56:57):
Well, okay, you got me there, Ben, I guess what
I mean is though it did sound like pure pr
it was like, we get it.
Speaker 4 (57:03):
It's a thing.
Speaker 5 (57:04):
We understand that there is there are always consequences associated
with this kind of stuff, but we're aware of them
and we feel that we're equipped to help mitigate some
of that. But it's again the badgers are out of the
bag scenario, right, It's not up to you anymore at
that point.
Speaker 1 (57:20):
And if you would like to learn more about Project
Maven specifically, please check out TechStuff. They have a podcast episode that just came out on this program. It's hosted by our longtime friend, sometimes nemesis, and complaint department Jonathan Strickland.
Available twenty four to seven for any issues or criticisms
(57:40):
you have of stuff they don't want you to know,
you can reach him, he is Jonathan dot Strickland at
HowStuffWorks dot com.
Speaker 5 (57:44):
He just launched a live chat kind of situation too,
so we can suss out your issues in real time.
Speaker 2 (57:50):
Yeah, yeah, yeah, like a drone, exactly like a drone.
It'll especially help you out on Twitch if he's ever
on there.
Speaker 1 (57:57):
Now, let's say Google does somehow create a policy banning any and all military partnerships. Hey, let's go a little
bit further and say that all US tech companies follow
their lead and do the same thing, because there are
industry wide calls for everyone not to play this particular game.
I mean, let's even go further and say that all
(58:17):
tech companies in the world refuse to help the US
build self aware weapons of war. You know who will
continue this research? All caps, every single country that can
afford it, every single private company that does not have
the same moral quandaries about hypothetical scenarios, every single individual
(58:39):
that does not have those kind of quandaries. It is
again going to happen, and it.
Speaker 4 (58:45):
Gets worse the AI race.
Speaker 1 (58:47):
Yeah, it is. That's a good way to put it
and to paraphrase Billy Mays, but wait, it gets worse.
We'd like to end today's episode on an ethical quandary
question that you could feel free to suss out yourself.
Have a couple of couple of beers over, if you drink, meditate,
(59:07):
if you do that, whatever you do to get yourself
in a thoughtful headspace, and let us know what are
the implications here. What sort of machine consciousness are we creating?
Are we making something that will be inherently belligerent? And
if so, If the world's first machine consciousness is built
(59:28):
to kill, how will this influence its later, increasingly self directed actions, its thoughts, its realizations, its emotions, or whatever
those equivalents are. Imagine, whether they're... imagine, if you would,
that there were two different versions of machine consciousness. One
(59:50):
maybe is built to optimize agricultural projects in a transforming ecosystem,
and the other is built to find people and annihilate
them or buildings. What happens next? Where do they go?
Do these things like the early drones? Do they increasingly
(01:00:13):
speciate and become more attenuated. There's the argument that we
have with biological entities, with human minds, and that is that, yes,
people change, but they don't change in the way you think.
Over time, we tend to become more concentrated versions of
who we were in the beginning. Now one could say
(01:00:37):
that this is not something to be too disturbed by,
because a human engineer could drop in and maybe incept
this machine mind with a different, different line of code
that would alter its behavior. And maybe now it says
I'm tired of killing right, or I have new programming.
(01:01:00):
But we have to consider that if we are as
a species creating minds that may one day have their
own agency as much as as a human being, maybe
more eventually, maybe in our lifetimes, they will have more
agency and potential because they won't have the same biological,
hardwired limits that human beings have. What kind of minds
(01:01:22):
are we building? And why?
Speaker 2 (01:01:25):
You can find us on Twitter and Facebook, where we're Conspiracy Stuff, and Conspiracy Stuff Show on, on, uh, Instagram, that's where that one is. You can find our podcast and everything else at stuff they don't want you to know dot com. Uh, tell us what you think of, like, what kind of minds are we creating? Like Ben was speaking to. What's happened... heady stuff, heady minds. Yes, this
(01:01:47):
is uh terrifying and I need to use the restroom
now because the pee has been scared right out of me.
Speaker 4 (01:01:54):
We better, we better take care of that right now.
Speaker 5 (01:01:57):
Oh we need As we are not AI, we still
have to, you know, take care of our bodily needs.
Speaker 4 (01:02:02):
But in the meantime, if you.
Speaker 2 (01:02:03):
Would like it, and that's the end of this classic episode.
If you have any thoughts or questions about this episode,
you can get into contact with us in a number
of different ways. One of the best is to give
us a call. Our number is one eight three three
st d WYTK. If you don't want to do that,
you can send us a good old fashioned email.
Speaker 1 (01:02:23):
We are conspiracy at iHeartRadio dot com.
Speaker 2 (01:02:27):
Stuff they Don't want you to Know is a production
of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.