
July 29, 2016 55 mins

Every year, programmers write more than a trillion lines of code. How could automated cybersecurity systems help our technology stay safe from malicious hackers?



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to
Forward Thinking. Hey there, and welcome to Forward Thinking, the
podcast that looks at the future and says, if you
should get an email with the subject "stinky cheese,"

(00:21):
you'd be better off protecting your chances, under no circumstances should you
open it. I'm Jonathan Strickland. And I'm Joe McCormick, and
I'm not gonna ask about that. Let's move on to
the topic today, which is cybersecurity. Yeah, we wanted to
talk about a recent story about the idea of automating cybersecurity.
But before we even get into that, let's talk about why.

(00:43):
You know, I'm just gonna set the groundwork of what
cybersecurity is all about and why it's such a huge
challenge in today's world. So, really, when you think about it,
cybersecurity experts and hackers have always played kind of a
tick-tock game. Really, the hackers, you could argue... Yeah, exactly,
it's very much a reactionary kind of relationship.

(01:07):
So it starts whenever a developer releases some software into
the world. Operating system upgrades would be a major example.
So let's say that you know, Microsoft releases a Windows
update or Google an Android update, that kind of thing.
Hackers then start to look at the operating system and
start to explore it, probe it for vulnerabilities, things that

(01:28):
could be exploited. And the reason for that is
the way a computer works, the way it executes code.
If you're able to exploit vulnerabilities, you can get access
sometimes to very important elements of a computer, like to
the point where you could potentially take it over. Sure,
you can either bend it to your will, use it

(01:49):
to do some untoward processing, or get data
from that person's computer to use in some kind
of criminal way. Oh yeah, absolutely. It may be
something as "simple," in quotes, as stealing files.
It may be something more complicated, like installing a key
logger that's going to copy everything someone types in so

(02:11):
you start to get all their info, like their
logins and passwords, that kind of stuff. It may even
be backdoor access, where you can control
that processing power for specific nefarious purposes without the person
necessarily knowing what's up, except for the fact that their
computer seems to be really kind of sludgy and slow.
Or it could just be for mischief. Yeah, it could
just be that, you know, as they say. So. Meanwhile,

(02:35):
you've got the cybersecurity professionals who are looking for vulnerabilities too,
but they're not looking to exploit them. They're looking to
patch them, to address those vulnerabilities and tweak
them so that they're no longer an opening for hackers
to exploit. They also try to nullify malware that hackers
are developing. So they're trying to make sure that

(02:58):
the various worms and viruses and other types of
malicious software that hackers unleash upon the world are nullified
in some way. It's super tricky to do. Because you
remember those old days of the huge downloads of the
Norton Antivirus updates? Yeah, yeah, well, I remember
the days of, like, when you're shutting down your computer

(03:18):
and it says, hang on, I need to download, you know,
half a gig's worth of data before you can leave,
and then you think, well, I'll go get another
cup of coffee because I'm gonna be here for a while.
At any rate, this is very much the relationship. Right.
You've got the release of software hackers trying to exploit it.

(03:40):
Cybersecurity professionals trying to address the vulnerabilities and nullify the malware.
Hackers go back to trying to find other vulnerabilities to exploit. Also,
this is not necessarily all just happening back to back,
because there's a lot of overlap. I mean, just because a
cybersecurity professional identifies and even patches a vulnerable opening in software,

(04:02):
that doesn't magically propagate out to everybody who's ever downloaded
that software. So this is your responsibility message here:
update your browser. Update your browser, update your operating system,
update your security settings. Make sure that you keep those
as close to up to date as possible, because while

(04:23):
it can be irritating to take up time to do
that sort of thing, it often addresses these vulnerabilities and
makes you less liable to experience issues created by evil,
nasty hackers out there. This also means that cybersecurity folks
are typically a step behind hackers, right, because often,
while they're trying to identify those vulnerabilities on their own, they may

(04:45):
not be looking in the same places that hackers
start looking at, and they have to respond to the
malware that hackers are creating. So you get this tick-tock,
where the reactionary response of the cybersecurity side is to counter
the move that the hackers have made, but it doesn't
magically counter the next move the hackers make. You have
to do it over and over and over again. So

(05:08):
wouldn't it be cool if we could get an actionary
system rather than a reactionary one, or one that is reactionary
all on its own and doesn't require any human interaction
at all? Because that's where the bottleneck is. Well, especially
now, and I'll get into more of the reason
why now it's particularly a problem a little bit later.

(05:30):
But as you say, it can be actionary too, Lauren.
You could have software that is actively looking for vulnerabilities
before any human, apart from the developer, has even laid
eyes on the code, and then in that case you
close off the system before a hacker is even able
to exploit it. But in other cases, where there may

(05:50):
already be a vulnerability known to hackers, the systems
could be patching that vulnerability as well as trying to
counteract any malware that hackers have created. If you were
able to do this and take humans out of that picture,
it would be amazing, because it would be way faster
and more efficient than a human. But it's
a tall order. It's really not an easy thing to

(06:14):
ask for. Who would ask for such a thing? DARPA?
You're right, they did. DARPA asked for that thing.
Wait, wait. First of all, though, let's remind everybody who
is DARPA. It's the R and D division within the
Department of Defense, with the awkwardest name ever, being the
Defense Advanced Research Projects Agency. It used to be... Or yeah,

(06:38):
it used to be ARPA. I like the
way you delivered that. It made me think of,
like, Defense Against the Dark Arts, which in a way
is kind of what we're talking about today. Yeah, I
actually had never heard it put this way before. This
might have always been part of its charter,
but I didn't come across this description until some stuff
we were reading for this episode today. But they described

(07:00):
it as preventing strategic surprise. I watched a video about that,
and my favorite quote was, surprises are hard to predict.
I know exactly what you're talking about. I watched that
same video. I'm like, you don't say. I think you
said surprises can be hard to predict, Yes, which I

(07:22):
thought was amazing. It reminds me of the giant
souvenir shop in Las Vegas, where one of the
signs says, if it's in stock, we have it. And, like, yeah,
I guess that's... I mean, I would have argued that
that was pretty much obvious, but I'm glad that you
put it out there. So, yeah, DARPA's

(07:44):
got a long history obviously with technology. I mean, the Internet,
you could argue, is really a product of work that
was done when DARPA was called ARPA. There was a
predecessor to the Internet called ARPANET, and during the
development of ARPANET, many of the protocols that
underlie the way the Internet works were developed. So very

(08:06):
much a part of the world of software and hardware.
Not just, hey, can we develop a new thing that
flies faster and is harder to detect than
anything else we've ever created, but things that have benefits
far beyond just a basic military application. Now, to be fair, DARPA,

(08:27):
being part of the Department of Defense, is primarily concerned
with matters of protecting national security and making
certain that the United States' technological capability remains at the
very front of the entire world as much as possible.
You know, you've got to keep that in mind. But
beyond that, the developments of various DARPA initiatives have

(08:52):
much greater consequence than improving our defensive capabilities. Oh sure, well,
I started to say, in a way that, just
like NASA getting people into space has a lot of
further-reaching implications in terms of technology and design. And
we've in fact talked about a few of those DARPA
projects here on the show before. Right. So first we
should mention DARPA really is more of an administrative organization. Right.

(09:17):
It's not so much that you go to DARPA and
you enter into a world of shiny labs with lots
of beakers and beeping computers and stuff, and
technicians and scientists running everywhere with, like, crazy alien-looking
devices. That's not really what DARPA is. What DARPA
really is is the organization that provides funding to other

(09:40):
research organizations. What DARPA does is identify a need.
They say, we need for this type of technology to exist.
Who out there thinks they can do it? And then
we've got some money for you, exactly, And then you've
got different organizations that respond and they'll say I think
we can do it, and then DARPA reviews the various proposals,

(10:03):
decides which organizations are the most likely to succeed based
upon those proposals, and funds them in order to try
and develop that technology. And sometimes what DARPA does
is they do this in the form of a competition.
It's not just, hey, we need this one type
of technology, who thinks they can do it? It's, hey,

(10:24):
we've got this idea for a crazy thing we want
people to be able to do with with technology. We're
gonna pit you against other groups that also want to
do this thing, and whoever wins gets a big old
fat prize, which may or may not be equal to
the amount of money that these organizations have to pour
into their research and development to create the technology in

(10:46):
the first place. But they still own that technology and
can use it to make money. Absolutely. So, we
have talked about them multiple times. The first time
that I was able to find, using the handy-dandy
Control-F feature on our RSS feed, was from November eighth,
two thousand thirteen, when we published "Robot, You Can Drive

(11:09):
My Car"? How many times have we made that joke?
I'm pretty convinced I made up that title.
I could be wrong. I could be wrong, but yeah.
So it was one of our earliest episodes about autonomous vehicles,
and in that episode we talked about how DARPA played
a key role in getting the development of driverless cars

(11:31):
rolling. Because they have wheels. But yeah, the
original Grand Challenge that DARPA issued was about autonomous vehicles.
On May eighth, two thousand fifteen, we published an episode
titled "What's Up with DARPA," which really was more of
an episode to explain what DARPA really is all about

(11:52):
and some of the projects that DARPA was overseeing at
that time, mostly leading up to the Grand
Robotics Challenge, which of course we already kind of covered
in the previous episode. And then, on June seventeenth, two
thousand fifteen, so not long after that one, we released
our episode about the DARPA Robotics Challenge. That's
not the Grand Challenge, which was autonomous cars, but

(12:14):
rather the challenge of building a humanoid robot, or at
least a robot capable of falling over a lot. The
robots excelled at that part of the challenge, which really
wasn't part of the challenge. You didn't want your robot
to fall over, but to be able to do a
series of steps, a series of tasks that would

(12:36):
potentially be part of a rescue operation or an emergency
response operation in the wake of a disaster, similar to
the Fukushima plant when the tsunami hit. So, like,
one of the tasks was walk to a door and
open it. Man, that was a tough one. Yeah,
it turned out to be a lot harder. Like, robots
weren't able to

(12:59):
perceive where the doorknob was accurately, and so they kept
trying to grab at places the doorknob definitely was not.
Stepping through the doorway was bad, yeah, which is a
very difficult thing to mechanize. Then they had to pick
up a drill and, like, use it. Yep, they had
to climb stairs, which also turned out to
be super tricky. Yeah, there are a lot of things
that we humans typically find fairly easy. I mean, even

(13:23):
if we couldn't do all of the different tasks, we
might find at least, some of the tasks pretty natural
for us to complete, but for robots, that's not the case.
There is no natural for a robot. And it
turned out that a lot of those what we thought
of as simple tasks were really complicated. So those are
the episodes where we specifically focused on DARPA. But today

(13:45):
we want to talk about the more recent Cyber Grand Challenge,
which ties into the concept we're talking about here, about
creating an automated system capable of recognizing vulnerabilities and patching
them in real time. That's the basic idea, right? Like,
DARPA's like, hey, this is what we want, you teams,

(14:07):
you know, start forming teams out there to do competitions,
and we'll have a big challenge, and whoever wins gets
a prize. That's the basic idea. But it's interesting
because it's using an approach that humans have been following
for a while. It's not like they've invented

(14:30):
a game that the robots have to compete in. The
game already existed and in fact has a bit
of an interesting name. It's a Capture the Flag game. Yeah,
which is funny because it doesn't actually resemble capture the flag. Yeah,
so what is Capture the Flag? They have these tournaments, right,

(14:52):
so a Capture the Flag tournament might be a thing a
bunch of hackers show up to, and it's a model
for testing people's skills at cybersecurity challenges. So at a
CTF tournament, you might see something like this: a bunch
of cybersecurity pros all show up and get a piece
of target software, and then they go to work trying
to be the first to discover security flaws in the

(15:14):
software and then release a secure patch to fix the problems.
So they've got to sort of, they have to, like,
reverse engineer the software, figure out what its vulnerabilities are,
address those vulnerabilities, and get them fixed. And so
that sounds like a good test of your skill as
a cybersecurity professional. But what if you removed the human

(15:39):
hackers from the competition? Right. Like, so in this case,
the people competing aren't competing directly in searching that software,
finding the vulnerabilities, and patching them, but rather developing software
that can do that on their behalf automatically. Yeah, so
that they aren't the ones guiding the software. Very much

(15:59):
like in the driverless car scenario. You were not
allowed to have any kind of remote control of the vehicle, right,
The vehicle had to be able to do everything on
its own, and if it failed, it failed. Same sort
of thing with this kind of software. The idea being
that you don't get to guide the software. It has
to be able to analyze that target software and

(16:23):
identify the vulnerabilities and patch them all on its own.
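(The episode doesn't describe how any of the competing systems actually work, but the loop they automate, scan for a dangerous pattern, flag it, rewrite it, can be sketched as a toy. Everything below is a made-up illustration: real cyber-reasoning systems analyze compiled binaries with fuzzing and symbolic execution, not source-code regexes.)

```python
import re

# Toy "automated patcher": spot a classically unsafe C call in source
# text and rewrite it to a bounded equivalent. gets() has no length
# limit, so it's a textbook buffer-overflow vulnerability.
DANGEROUS = re.compile(r"\bgets\((\w+)\)")

def find_vulnerabilities(source: str) -> list[str]:
    """Return the buffer names passed to the unsafe call."""
    return DANGEROUS.findall(source)

def patch(source: str) -> str:
    """Replace gets(buf) with fgets(buf, sizeof(buf), stdin)."""
    return DANGEROUS.sub(r"fgets(\1, sizeof(\1), stdin)", source)

c_code = "char name[32];\ngets(name);\n"
assert find_vulnerabilities(c_code) == ["name"]       # flaw found...
assert find_vulnerabilities(patch(c_code)) == []      # ...and closed
```

The point isn't the regex; it's that the whole find-and-fix cycle runs with no human in the loop, which is exactly what the Cyber Grand Challenge asks for at vastly greater sophistication.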
So why even bother doing this? Well, it's because computers
are everywhere now, They're pervasive, they're integrated into our daily experience,
and they're on all scales. Right, the stakes have never
been higher as far as cybersecurity goes, and they're only

(16:45):
getting higher, exactly, right. It used to be, you know, the early days
of viruses, way back in the day, when
things weren't even necessarily networked yet and people were spreading
viruses via physical disks, and, oh yeah, you know,
the worst that was going to happen is you might
cause a lot of damage to your computer. Yeah, and

(17:07):
that's not good. I mean, that could have, you know,
potentially disastrous consequences for somebody's personal, I don't know, personal
projects or whatever. But you wouldn't have extremely dangerous, widespread,
society-wide consequences. It's not a catastrophic event. It's
something that's not going to kill anybody. Right. On an

(17:29):
individual basis, it could range from inconvenient to financially difficult,
depending like let's say you know, my dad, for example,
is an author, and if he had had a computer
virus affect our old computer upon which he was writing
his novels, that would have had a very profound impact
on his ability to do his work as an author.

(17:50):
But it's not like that would suddenly also affect all
the other computers in the world. It's happening on a
very individual machine. Right. But then, okay, so once you
start networking computers all over the place, suddenly you can
spread viruses and malware and vulnerability knowledge much more easily,
and you can exploit sensitive information, and you can

(18:15):
have much more far-reaching consequences.
You can cause economic disaster. Now, imagine expanding this
to the next level beyond just networked devices. Our devices
are no longer just information devices. They're, you know,
standard infrastructure devices, devices that control the world around us,
power plants and our own HVAC systems and all

(18:37):
sorts of things. Yeah, elevators, everything, right? Yeah,
we started with things like your desktop computer. But then
we're also like, okay, well, we also have laptop computers.
And now we've also got cell phones. Oh, and
smartphones came a little bit later, tablets, other computer systems,
and traffic lights too. Not just within an entire

(18:59):
system of traffic lights, but within an individual traffic light,
or appliances or televisions or sensors. I mean, as
we approach the Internet of Things, and you could argue
that era is already upon us, we are in the
Internet of Things era now, we have more and
more devices that are partially or fully

(19:23):
dependent upon computerized systems, which may, and in fact you
might as well say do, have vulnerabilities in them. They
may not all be identified, but there's almost certainly a
vulnerability in every system that human beings have produced, because
we can't necessarily predict the vulnerabilities while we're making them,

(19:43):
Like, they really shouldn't have released that smart oven that
has the vulnerability where somebody can get it to throw
you inside and turn on self-cleaning mode. The Hansel
and Gretel 5000. I told them not to do it.
But, you know, they just might not have been able
to predict that that vulnerability was there. Yes, when you're
in the middle of developing, it can be difficult to
see that, right because your number one goal is to

(20:07):
get the thing you want to happen to happen. Yeah,
you want whatever it is, whatever the
end goal is of the code you are writing.
You want to achieve that goal. So let's say that
you're just writing a program. It's a really simple program.
It's like, let's say it's a word processing program. So
you're just trying to create a word processing program, and

(20:27):
you wanted to have all the basic elements of word
processing involved in it. You're concerned with writing code that
creates a working word processor. You may not notice that
in the way that you develop this code, you have
created a vulnerability that would allow a hacker to access,
let's say, some administrative-level commands on an operating system

(20:52):
through the code you've made, because that wasn't what you
were thinking about when you were building your word processor.
You weren't even thinking that that
was a possibility. You were just trying to make a
working word processor. Yeah. And for all of
these computers, people are writing so much
code every day, a trillion lines of code a year. That

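(To make that word-processor scenario concrete, here's a hypothetical sketch, invented for illustration and not from the episode: suppose the program's "print" feature hands the document name to a shell. The developer was only thinking about printing, but building the command as a string quietly creates a command-injection hole.)

```python
# Hypothetical word-processor "print" feature. Building a shell string
# from user-controlled input is the accidental vulnerability; passing
# an argument list instead never invokes shell parsing at all.
def print_command_naive(filename: str) -> str:
    # Intended result: "lpr report.txt". The developer never considered
    # that the filename itself might contain shell metacharacters.
    return f"lpr {filename}"

def print_command_safe(filename: str) -> list[str]:
    return ["lpr", filename]  # one literal argument, no shell involved

hostile_name = "report.txt; rm -rf ~"
# The naive command now smuggles in a second, attacker-chosen command:
assert "; rm -rf ~" in print_command_naive(hostile_name)
# The safe version keeps the whole string as a single harmless argument:
assert print_command_safe(hostile_name) == ["lpr", "report.txt; rm -rf ~"]
```

Nothing about writing a word processor suggests this failure mode, which is exactly why it slips through: the bug is invisible from the perspective of the feature being built.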
(21:13):
was in the early two thousands, so we're talking about
way more. And you think about
the number of platforms that have increased since the early
two thousands. That's before the smartphone was really a thing.
You know, think about two thousand seven. That's when the
iPhone gets introduced. That's when the smartphone, at least in
the United States, really becomes a consumer product. Before that time,

(21:34):
it was something that you might see some executives having
as part of the way they interact with their businesses,
But the average consumer in the United States didn't have
a smartphone until after two thousand seven. At that point,
you've got so many different devices now that people are
developing code for, and some of them aren't even you know,
your traditional computers or smartphones or tablets. They might be

(21:56):
hardware that the consumer is never interacting with in a way
where they're even aware there's software involved. Sure, like
every single Intel chip that's sold has stuff hard-coded
into it. And it's not just Intel, it's every chip,
basically. Many chips. And your basic firmware. Right,
the idea that you've got programming that is physically codified

(22:20):
into a system. It's not like some ephemeral
software that exists for a moment and then is gone.
It's actually part of the device itself. So we've
got all this code, huge amounts. Now, imagine it's your
job to go through code and find vulnerabilities, and you're thinking,
there's more code generated every day than I could possibly

(22:41):
get through in a week. It's kind of like the
issue with YouTube where you talk about how many hours
per minute get uploaded to YouTube. There's no way to
watch all the content because it's physically impossible. Yeah, you
could say, well, how could we monitor all that content
to make sure people aren't uploading copyrighted movies and stuff

(23:02):
like that? But you know what you could do? You
could design a program to look through all of that
stuff and compare it against a database of copyrighted material. It's
a lot easier than hiring Bob to try to watch an
impossible number of YouTube videos every day. Yeah, and
you've seen people trying
to get around this, I bet, right, on YouTube, where

(23:23):
they, like, upload copyrighted material, but it's, like, obscured
and flipped and zooming in and out in weird ways
to try to prevent auto-detection, right? Have you seen this? Yeah.
Or people just have, like, a
section of whatever the view would be. Yeah, those are
the worst, right, where you're like, oh, it's the lower

(23:45):
two thirds of a television screen so that this doesn't
get picked up, in awful, awful sound quality. What a
wonderful experience I'm having right now. I'm going to go
buy the movie, man. Really, when you think about it, it's
more of a great tool to convince people to purchase
this stuff legitimately in order to have a decent experience.
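(The automated matching we're describing can be sketched as a toy fingerprinting scheme. This is our own invention, not YouTube's actual system, which uses perceptual audio and video hashes that survive re-encoding; the exact-hash version below is defeated by the flip-and-zoom tricks just mentioned, which is precisely why uploaders use them.)

```python
import hashlib

# Toy content matcher: fingerprint fixed-size chunks of a byte stream
# and check uploads against a database of known fingerprints.
CHUNK = 16

def fingerprints(data: bytes) -> set[str]:
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data) - CHUNK + 1, CHUNK)}

def looks_like_known_content(upload: bytes, db: set[str]) -> bool:
    # Flag the upload if most of its chunks appear in the database.
    fps = fingerprints(upload)
    return bool(fps) and len(fps & db) / len(fps) > 0.5

movie = bytes(range(256)) * 8          # stand-in for a copyrighted work
db = fingerprints(movie)
assert looks_like_known_content(movie[:64], db)      # verbatim clip flagged
assert not looks_like_known_content(b"x" * 64, db)   # unrelated bytes pass
```

Because a cryptographic hash changes completely under any modification, even cropping two-thirds of the frame breaks the match here; robust systems trade exactness for perceptual similarity to close that loophole.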
But at any rate, one of the things that

(24:06):
was interesting when I was researching this was, you know,
you've got those trillion lines of code, what does that
mean in terms of vulnerabilities? And the estimate I was
hearing, they were talking about a billion vulnerabilities
existing out there. A billion. So you've got a trillion lines
of code and a billion vulnerabilities. Finding a billion vulnerabilities

(24:27):
within a trillion lines of code, that is such a
monumental task. Most experts are saying, like, yeah, I can't
even wrap my head around that. Hold on, am I doing
the math right? That would be one vulnerability for
every thousand lines of code? Yeah. Yeah. So then you
sit there and you think about it. Like,
let's say that you've got a job request saying, yeah,
so it turns out there's a billion vulnerabilities out there, I'm

(24:50):
gonna need you to clear those up before the end
of the year. You'd be just like, I'm going into
a new line of work. And you can't do it. Yes,
I'm leaving this job, and also possibly my sanity. Goodbye.
And these vulnerabilities do shake out into actual issues.
I read a report where a security firm called Gemalto

(25:11):
said that approximately a billion personal records were compromised
worldwide in a single year, so, like, one in seven-point-something. Yeah,
so you've got... well, I mean, assuming that a
human person only creates one record per year,
and I think it's more than that. But at
any rate, like, a billion is a

(25:31):
nice round number. Like, it's large, it's not small, it's
impossible for me to even, you know, have a
concept of how much that is. And when you think,
like we said about how they this this code is
integrated not just in software but in hardware across all
sorts of different devices and infrastructures, you realize this is

(25:53):
legitimately a problem. It is. It is really a threat, right?
It's not just something like,
oh, that's inconvenient. It's not like, oh man, my cell phone's
gonna get hacked. It's like, oh man, our hydropower
plant is going to get hacked. The US Director
of National Intelligence, one James Clapper, listed cyber attacks

(26:14):
as the most serious global threat, above terrorism, above weapons
of mass destruction, above sid ducks? I don't know. Yeah, Godzilla? Yeah,
more so than that? Now that's saying something. So yeah, you've
got the perfect situation for disaster here, right?
You've got a target-rich environment, and cybersecurity experts might

(26:37):
know to really seek out the stuff that's going
to have wide propagation first, because obviously, if you're a hacker,
you want to try and hit as many targets as
you possibly can. So from a cybersecurity point of view,
you'd say, let's make sure we cover the stuff that's
going to get the widest circulation first and then worry
about the smaller programs kind of like you know, think

(26:58):
of it like concentric circles. We want to hit
the middle of that target first, so, like, operating system
updates or major product upgrades, that kind of stuff, stuff
that lots and lots and lots of people are going
to get. But that also means the hackers can say, well,
I won't hit as many people if I aim for
these other more niche oriented software packages, but I'm also

(27:22):
less likely to encounter resistance. I'm more likely to find
a vulnerability that people haven't identified yet, and therefore will
be able to hit a greater percentage of that niche
than I would if I aimed for operating systems. Who's really
shoring up the defenses of this Oregon Trail clone that
I downloaded? Right. Which, by the way, is a great game.

(27:44):
But I do want to put in
here that there are certainly hackers for good, like
white hat hackers, cybersecurity-type experts that we've been talking about,
who are actively working every day to plug up
these kinds of vulnerabilities. You know, not looking to
cause harm or mischief, but to seek these things out
and to change them. There are conventions around the world

(28:06):
where hackers and other interested parties gather to strategize
and to learn and to present research and to disclose
security problems that they've found. You know, either providing it
outright or selling the information to the parties at hand
that would be able to enact changes. One
of those, Black Hat, is actually happening this very weekend,

(28:27):
July thirtieth through August fourth, in Las Vegas. Right. The other big
one I hear about all the time is DEF CON, right?
That also happens in Las Vegas. I urge you, if
you ever decide to attend one of these conventions,
bring a burner phone and leave your normal one at home. Yeah,
I'm being totally serious. That's... I agree. These

(28:48):
hackathons often have a wall of shame, and if you
have not secured your technology properly, they will put your
name up there, because they will have found how to
access your stuff, and say, look, this is serious.
You need to be aware of this. Yeah. Now, these
quote-unquote white hat hackers certainly

(29:09):
are not outside of the realm of the mischief type
of concept. I mean... But it's mischief for good.
And yes, exactly. To be fair, one of the issues
that hackers who are doing this work run into is
a lack of cooperation on the side of the companies
that are producing the software. Right. And that's starting to change,
I think. I think that a lot of companies

(29:31):
have come to the realization that it is less expensive
to hire this kind of security expert than it is
to allow someone to exploit a vulnerability in the programming.
And many companies do these
days have hackers on their teams looking for these kinds
of vulnerabilities. And there's been tons of successful projects to

(29:52):
come out of freelancers and contractors and full-time hackers,
you know, work bolstering everything from the security of
hospital patient records and pacemakers and insulin pumps. Insulin pumps
have computers in them now. That's terrifying, and wonderful.
But, to making ATMs
and network routers more secure, to making prison and

(30:14):
office doors unhackable. Important stuff, for real. Right, right, all
of these things. So, you know, it's not that
humans are totally falling down on the job.
It's just that we are only human, right? Which is
why we need the machines to take over. Right. Yes,
because the idea being that, you know, if you
have a computer program... Exactly. If you

(30:35):
have a computer program that's properly orchestrated, properly designed and
coded so that it can look for vulnerabilities and patch
them autonomously, then it's going to be able to work
much faster, more efficiently than any human could. It never
gets tired. It's never going to miss a vulnerability because it's

(30:57):
been staring at this code for, like, six hours and,
you know, you get that blindness. Six hours?
Yeah, well, most of my programmer friends are, like,
fourteen hours in with, like, seventeen cups of
coffee and an increasingly exciting eye twitch. Yeah.
The issue there, of course, is that you're more likely
to miss something, yeah, you know, but a computer program

(31:19):
doesn't, because it just keeps on trucking. Now, obviously, the
dependability of the program is only as good as the
developers are, right? But ideally they would catch problems before
bad guys could ever identify there was a problem there
in the first place, and everything gets patched, maybe even
before the release of the software, so that there's never

(31:39):
the opportunity for a hacker to take advantage of and exploit
a vulnerability. So this all leads up to this Cyber Grand
Challenge, which is happening August fourth, and that's where we get
this automated Capture the Flag tournament. This is also in
Las Vegas, kind of on the tail end of
Black Hat. I'm not sure if it's officially... I doubt

(32:00):
it's officially affiliated, right? Yeah. I love the idea of
DARPA showing up to Black Hat, and everyone's like, hey, buddy,
hacked your system last week. Looking good. So this
is actually the final round of competition. It's
a competition that's been going on for a while now.

(32:20):
It's not something that is, you know, they had made
it just for this one day, um. And in fact,
in earlier rounds of competition, which began back in 2014,
there were more than thirty teams that registered for this
UM and they could register as either an open track
competitor which covered self funded teams, or a proposal track competitor.

(32:41):
These were teams that were invited to participate by DARPA
itself and partially supported by the agency to develop the
tech necessary to compete. This is not unusual. The same
thing happened in their driverless car challenges, where they had
teams that were specifically invited to participate versus those that
volunteered, that essentially stepped forward to enter the competition.

(33:03):
So, so there were more than thirty back in two
thousand fourteen; we're down to seven finalists now. Each
finalist team received an award of seven hundred fifty thousand dollars
to prepare for the Grand Challenge after completing these preliminary rounds.
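One concrete, widely used form of the autonomous vulnerability hunting described above is mutation fuzzing: take a valid input, randomly corrupt it over and over, and record which variants crash the target. Here's a toy sketch with a deliberately planted bug as the target; all names here are illustrative, not drawn from any actual competitor's system:

```python
import random

def parse_header(data: bytes) -> int:
    """Toy target with a planted bug: it reads a length field and
    trusts it without bounds-checking the buffer."""
    if len(data) < 2:
        raise ValueError("too short")  # graceful rejection, not a bug
    length = data[0]
    # Bug: indexes past the end of the buffer when the length byte lies.
    return sum(data[1 + i] for i in range(length))

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Corrupt one random byte of the seed input."""
    buf = bytearray(seed)
    buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Hammer the target with mutated inputs; collect the crashers."""
    rng = random.Random(42)  # fixed seed so runs are repeatable
    crashers = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except IndexError:       # the "crash" we are hunting
            crashers.append(case)
        except ValueError:
            pass                 # expected rejection of bad input
    return crashers

crashers = fuzz(parse_header, seed=b"\x03abc")
print(f"found {len(crashers)} crashing inputs")
```

A real system, like the Cyber Grand Challenge entries, goes much further than this, triaging the crashes and generating patches, but the hunt-by-tireless-hammering core is the same idea.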
So here are the seven finalists in the Cyber Grand Challenge.

(33:25):
There are some researchers from Moscow, Idaho, which I did
not know was a place, uh, with the Center for
Secure and Dependable Systems, or CSDS. Now, this was a
group that formed when the Idaho State
Board of Education called for it to come into
being, uh, specifically at the University of Idaho, to advance

(30:14):
computer security education and research. According to their profile
on DARPA, they represent the only system that was entirely
built from scratch. Every other system that is in the
finals had existed in some previous form before these
preliminary tests began. Next, you have Deep Red, which is

(34:11):
a team from Raytheon. They took their name by combining
Deep Blue, which was IBM's system that took on chess
grandmasters, and the color of Raytheon's logo, which
is red. Deep Red. So this is not the
Dario Argento movie. No. Now, next you have Disekt, which

(34:32):
is spelled D-I-S-E-K-T. And they
hail from Athens, Georgia. I went to college in Athens, Georgia.
They are the one team in the challenge that has
managed to post scores in five other CTF events hosted
by various other universities and organizations. So they've, they've
got a record, yeah, of doing well in other competitions. Next,

(34:55):
you've got ForAllSecure, that's all one word. They're
out of Pittsburgh, Pennsylvania. That team started with researchers
who worked with Carnegie Mellon University. You have Shellphish, with a P-H. Yeah,
they're out of Santa Barbara, California. That group grew out
of a hacking team at the University of California, Santa Barbara,

(35:16):
and it includes, according to their profile, the youngest program
analysis expert in the competition. I love the little
facts that you get under each team as you go
through them. Uh. Next you have TECHx, which is
based out of Ithaca, New York, and Charlottesville, Virginia. Team
members come from GrammaTech, Incorporated, and the University of Virginia.

(35:38):
They developed a program they call PEASOUP, which is
an acronym that stands for Preventing Exploits Against Software Of
Uncertain Provenance. I don't know, do I love or do
I hate those contrived acronyms? You might want to
ask yourself, deep inside, where does the A
in PEASOUP come from? In that, preventing exploits... against what? Yeah,

(36:05):
where's the A come from? From "against," one supposes; otherwise it
should be PESOUP. Uh. Then you have,
I love this name, CodeJitsu, which is a
team that is based in, uh, well, three different places, actually.
They have researchers who are in Berkeley, California; Lausanne, Switzerland;

(36:25):
and Syracuse, New York. So it's a collaboration of scientists
from Berkeley, Cyberhaven, and Syracuse. And they're all competing
for prizes that collectively amount to just under four
million dollars, um. And so there's some, there's some big
money and obviously some great bragging rights if you're the

(36:48):
group that creates the automated system that wins. I mean,
that's some nice accolades to have. Sure. I mean,
also, see above, re: being able to sell it to
a company, or rent it out, at any rate. Right.
But, you know, DARPA was very quick to
mention that this competition is really more about identifying the

(37:12):
most effective approaches. It's not necessarily, we have identified the
working strategy, this one product here is clearly the
way we're gonna go, everyone else, thank you for showing up.
It's not Willy Wonka, right, you know, it's not everyone
else has to go home. It's rather saying, the elements
that you had in your approach, these particular ones

(37:34):
we've identified, were really effective, but this other team had
these that were very effective in a different way. How
can we start to look at the things that worked
best and create best practices. So they're going to create
a Frankenstein cybersecurity robot. Well, really two, maybe,
who knows. But really, I mean, it's about identifying

(37:54):
what strategies work the best in order to move forward
with the next step. Um. So you might wonder, all right, well,
how is this all gonna play out based upon that? Well,
you know, it hasn't happened yet. Uh. And while we're
all about talking about the future and speculating, we are
incapable of telling you who won yet. I can't. After

(38:19):
it happens, we can do it, but right now, not
so much. Um. It may turn out that none of
those programs perform better than the human experts they'll be pitted against.
That is a possibility that certainly wouldn't surprise me today,
right Uh. And it may still be that even if
that happens, we're able to at least identify the reasons

(38:40):
why programs didn't measure up to humans or things that
got close but didn't quite get there. Well, I mean,
one thing that strikes me is maybe maybe they've found
a way to uh to circumvent this problem. But it
seems like you couldn't really compare the advantages of an
automated system versus the advantages of a human operator

(39:02):
in a competition setting, because in a competition setting, there's
a limited scope of problem solving area, if you know
what I mean. So there's like a limited problem solving space,
and what the automated system would seem to have at
its advantage is sort of just limitless time and

(39:22):
speed to search the problem space for problems to identify,
and, like, multiprocessor tracks to accomplish that
on multiple systems, in multiple programs at the same time. Um,
but, but I'm sure that you can. I mean, the
hard evidence that you're going to get here is, like,
if a computer program can crack all the vulnerabilities

(39:46):
and patch them in a fifth of the time that
a human does it, or vice versa, then you've
got a pretty solid idea of how fast each
system is working. And over time that kind of
time difference will be affected by how much it
can get done. Well, and also we need to remember,
DARPA challenges sometimes teach us a lot, even when everyone fails,

(40:10):
all the competitors fail. If that was the case, this failure
won't be nearly as funny as watching those robots trying
to open that door. Yeah, right. Well, I was mostly
thinking of the Driverless Car Challenge. That's funny too. Well,
the Driverless Car Challenge, the first time they held that
back in two thousand four, they didn't award a winner.
No one team was able to complete the objectives, uh,

(40:33):
in the time allotted. Most of them had pretty remarkable
failures where the vehicle at some point got off track
or failed to respond anymore, or just, you know,
for whatever reason, was not able to complete
the course. But they decided to go ahead and hold
the challenge again the following year, which gave the teams
chances to go back and reevaluate their work, make changes

(40:57):
and improvements so that they were better able to compete
the following year. And that's when things started to really
move forward literally in that case. And uh, when you
look at it that way, you could say, well, if
they had just, in two thousand four, said, well,
this isn't gonna work, we're walking away from this,
then we wouldn't be on the cusp of the autonomous

(41:20):
car revolution, which we appear to be right now. Right.
We have companies right now saying it
will be a matter of years, not a lot
of years, a few years, and then we'll start seeing
autonomous cars in earnest start to hit the roads beyond
just the limited use we're seeing, where it might be
like an office park automated bus, or something that navigates

(41:42):
through a relatively closed system like an airport, that
kind of thing, going around to the various terminals. Um.
So it may be the same with this Cyber Grand Challenge,
where that first year of competition we don't see a
clear winner, but that doesn't necessarily mean this is the
end of the line um. Although it's also possible that

(42:06):
one of the automated systems will just totally smoke all
the other competitors, both computer and human, the most likely
outcome will be that, through this challenge, we'll learn which
of these techniques are the most promising and which ones
seemed to be less effective, and thus people can direct
their attention to the avenues that appear to have the
best chance of success, with the goal ultimately of

(42:30):
creating these automated systems that could be rolled out on
a rather large scale to, you know, probe for
vulnerabilities in software across multiple platforms, in some cases before
they can actually be encoded into hardware. I mean, anything
that's been encoded into hardware, that's tough. Like, you

(42:51):
can do firmware updates, but that's really a software layer.
It's not like you're physically changing the chip that's already
been produced. You're just trying to compensate for
a vulnerability that's been hard coded into a device. That's
a little trickier, but um, moving forward, you could at
least mitigate that somewhat and limit the number of vulnerabilities
that get put into hardware. So the biggest outcome of

(43:16):
this would be that we'd have a safer approach to
this wondrous future we've talked about, this Internet of things
future where reality is responding to our wishes and desires
before we can even voice them, and refrigerators can eat us.
I don't want to see that happen, Joe. But I
also don't want to see a future in which this

(43:36):
wondrous world I'm walking around in is also uh enabling
a hacker to track my every movement, or get tons
of personal information about me, or exploit me in some
other way that I may or may not be aware
of, or, you know... Yeah, the concept is, we
want to make the world better with this, not scarier. Yeah,

(44:00):
you don't want to open the door and have a
guy sitting there like. So here's the thing. I've been
tracking your every movement for the last two years, and
unless you pay me this exorbitant amount of money, everyone's
gonna know about how often you go to Taco Bell.
your terrible, terrible puppy kicking excursions throughout the neighborhood when
you think everyone's asleep or whatever it may be. Uh,

(44:25):
The Taco Bell thing, while it does strike deep into
my heart, I think I could reconcile myself
with over time. Over time and a couple of Doritos
Locos Tacos, I could probably do it. But yeah. Oh my god,
are those the ones that are in Doritos? Yes, that's
the thing, right. Yeah, a taco that's inside Doritos. The

(44:45):
shell itself is made out of, essentially, a Doritos chip. Yeah,
and then you've got, um, like, melted cheese on
top of it. It looks pretty insane. I have never
actually tried one of these things, and I just happen
to know what the name is, um. But yeah, if
we want this kind of future, we want it to be
a safe one. And obviously, again, if we're talking about

(45:06):
trillions of lines of code, expecting it to be a
safe future without the use of computer assistance seems to
be implausible at best. You know, one potentially
frightening implication of this whole scenario, it's not
what DARPA is doing with this competition, necessarily, but just

(45:28):
the fact that we're in a position to be having
this kind of competition is that we could as humans
lose sight of what's happening in the sort of back
and forth between people who want to compromise our information
security and take over our devices, and the measures

(45:50):
that are put in place to protect them. It's kind
of like that frightening scenario in automated trading, where you
have computers making lightning fast buy and sell decisions on
the stock market or commodities markets, and even the people
who design these systems don't understand what they're doing in
real time. Right, right. Or, you know, small glitches,

(46:12):
like that time that, I don't know, like
something on Twitter made the stock market waver for
a moment, and everyone was like, like, taking their
hands slowly off of the computer wheel. It is terrifying
when you get to a system that's so sophisticated that
even the people who design the system cannot be fully
certain what caused it to make a specific decision at

(46:35):
a specific time. Right. And so I think that there
is, generally, we might want to be concerned about allowing
a state of affairs where humans just sort of get
cut out of the loop of understanding the software that
governs our day to day lives. Like, so you imagine
this future scenario: hackers have massively powerful automated vulnerability

(47:00):
seeking software that tests all networked systems it can find
for weak points. You know, it's essentially like living in
a world where people could insert lockpick guns into ten
thousand different houses' front doors every eight seconds to see
what could be picked. So this is kind of like
the black hat version of the software we were just talking about, right?

(47:21):
But I'm getting there. So if you have that, there'd
be no way for human security agents to keep up.
So you need this kind of automated vulnerability seeking and
containment software, right, But imagine so you've got these two
systems working in tandem, both automated, sort of
in an automated security arms race back and forth. Um,

(47:44):
will humans lose track of what their own software does
and how? I don't suspect so, because humans are still
the ones creating the software with a specific purpose in mind.
And what we're looking at is the patching or exploiting
of vulnerabilities within that software, not fundamentally changing how that

(48:05):
software behaves or what it is supposed to do. But Jonathan,
what if that software decides that the real
vulnerability in the system is us? It's a little different,
but I like where you're going. Yeah, no, no, no,
I mean, I don't mean that. But I do, I know,
I know, you know, but I do mean, like, okay,
so let's say this program detects a vulnerability and

(48:30):
then patches it. But as we all know, sometimes maybe
a security patch can destabilize the system in another way.
You've just caused a new problem that needs to be addressed.
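That patch-then-verify-then-roll-back loop can be sketched in a few lines. This is a toy model with hypothetical names, not how any actual Cyber Grand Challenge system works:

```python
def apply_and_verify(system, patch, regression_tests):
    """Apply a candidate patch; keep it only if every regression
    test still passes afterwards, otherwise roll back."""
    snapshot = dict(system)              # cheap rollback point
    patch(system)                        # mutate the system in place
    if all(check(system) for check in regression_tests):
        return True                      # patched and still stable
    system.clear()
    system.update(snapshot)              # destabilized: undo the patch
    return False

# Toy example: two candidate patches close an open debug port,
# but one of them also knocks out the service it was protecting.
system = {"debug_port_open": True, "service_up": True}

good_patch = lambda s: s.update(debug_port_open=False)
bad_patch = lambda s: s.update(debug_port_open=False, service_up=False)

tests = [lambda s: s["service_up"]]      # the stability we must preserve

kept_bad = apply_and_verify(system, bad_patch, tests)    # rolled back
kept_good = apply_and_verify(system, good_patch, tests)  # accepted
```

The snapshot is exactly the answer to the worry here: a fix that fails its regression tests gets undone rather than left in place to cascade into the next problem.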
And so what if they say, ah, you know, every
time it patches something, it's not sophisticated enough that it
can do that without compromising something else in the system.
So we've got to we've got to let it fix

(48:51):
the compromised part also. So it's got to patch security,
and it's got to fix the destabilized system, and then
that causes another problem. Boom. I can imagine scenarios
where there are cascading effects requiring us to create self modifying software,
and I don't know. Self modifying software always makes me

(49:11):
feel icky. Well, the problem is we need it now anyway.
We have so many, so many lines of code that
there's a need. Like, what's the other option? We
don't develop the automated software, we train about another billion
people in cybersecurity? Well, or we don't have an

(49:34):
Internet of Things, I guess, is an option, which is
not... that's not happening unless there's just a catastrophic change
in our technology, you know, one of those sunspots
or solar flares, right, barring some enormous electromagnetic pulse device
that goes over an entire, or large enough, section of

(49:54):
the world, or a scenario where our priorities shift to guzzoline, right.
I think, I think, I mean, we've kind
of committed, right? We're committed to a pathway
which requires us to do this, and so it's almost
a moot question at this point of, should we do
this. Now it's, we have to do this, or we

(50:15):
have to at least attempt to do this to see
if it will work, because we've created a problem that
isn't going away on its own unless we make a
fundamental change in the way we are moving forward, which
doesn't seem likely, uh, at least not within the foreseeable future.

(50:36):
It would amaze me to see a real move to
put the brakes on the Internet of Things. Yeah, I mean, so,
I understand where you're headed.
But I think, first of all, we already have those problems.
Right If you detect a software vulnerability and you patch

(50:56):
it and that destabilizes things, we already have to fix that.
It's just right now we're the ones who have to
do it. But yeah, no, no, I I see, I
certainly see your point, Jonathan, And I think that that's
that's it's absolutely ludicrous to think that we're just going
to stop networking all of our increasingly valuable electronics to

(51:16):
the Internet, um. But I definitely, you know,
think that caution, or at least a kind of concept
of science fiction horror, be kept close to our
hearts and considered. Uh, yeah, I mean, I don't
really have an alternative to suggest. I understand that I
think we probably need something like automated cybersecurity, but it

(51:39):
just, I don't know. I guess I just thought we should
be aware of this fact that, you know, there's always
something a little bit strange about the idea of, to
any extent, cutting humans out of the loop of the
architecture of the software that runs our lives. I do think
also that, like, and I say this with absolute respect
and love for programmers, and specifically cybersecurity programmers.

(52:03):
Um uh, I don't think we have to worry about
those nice humans being not paranoid. I think I think
that they've got that covered, and I think that they
will take that into consideration when they're doing their work.
I think it's part of the gig. Yeah,
I agree, I think. Uh. I mean, obviously, any time
you're talking about developing any sort of technology, particularly automated technology,

(52:27):
you have to be cognizant of the consequences if stuff
were to go wrong, trying to anticipate as many of
those possible outcomes as you possibly can, and to plan
for them and account for them so that they don't happen. Uh.
For one thing, it's it's always going to be impossible
to do that to perfection. Um. And at some point

(52:51):
you just have to say, well, we've got
to move forward and hope that we have, uh, accounted
for all the most risky outcomes, um. But yeah,
it just becomes a matter of practicality eventually. And
unless we reverse gears and back off from this
approach of Internet of Things, which at this point I

(53:13):
think there's so many companies that have so much money
in Internet of Things that that's unrealistic, uh, it's
something we have to move forward with. I'm very curious
to see how this turns out. I really look forward
to reading up on the competition once it's finished and
seeing how the various teams did and uh, you know,
did any of the teams, or did multiple

(53:34):
teams, significantly outperform the human participants? I can't wait
to learn more about it. So we'll probably at some
point do a follow up of some sort, um, either a
Forward Thinking video or maybe a future podcast, where we
talk about these concepts and how the machines did.

(53:54):
Did we see a marked improvement in performance
over humans, or is that something that humans are just
better at doing than machines are right now, because sometimes
we run into that stuff. It seems to be fewer
and farther between these days, but it does still happen.
For instance, we're still better at opening up a door
and walking through it. All right, Well, that wraps up

(54:15):
this episode. Yeah, I love it. It's
like the classic escape from Daleks is
just run up some stairs. Until the... yeah, the reboot
ruined all that, but back in the day, the good
old stairs would protect you from the Daleks. All right, so,
if you guys have any suggestions for future episodes of

(54:36):
Forward Thinking, or you've got any questions or comments, anything
like that, you can send us a message; otherwise we
won't hear you. Our email address is FWThinking at
HowStuffWorks dot com. If you search Facebook for FW Thinking,
our profile will pop right up. You can leave us
a message there. We are FWThinking on Twitter. You
can always tweet at us. We're happy to hear

(54:57):
from you, and we will talk to you again really soon.
For more on this topic and the future of technology,
visit forward thinking dot com. Brought to you by

(55:21):
Toyota. Let's go places.

Hosts And Creators

Jonathan Strickland

Joe McCormick

Lauren Vogelbaum
