Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind from HowStuffWorks dot com.
Hey, welcome to Stuff to Blow Your Mind.
My name is Robert Lamb. And I'm Joe McCormick, and Robert,
I've got a question for you. Have you ever
wondered if it's possible to create a piece of digital information,
(00:26):
like a computer file, a bit of computer code, a
computer virus that could literally kill or injure somebody. Oh,
of course I have. I mean, having watched and enjoyed
such films as The Ring and Videodrome, like, just
the idea of there being some sort of, in these
cases, media. But we can easily extrapolate that to
(00:48):
digital media or just digital information. You can't help but think,
is there something like that that could exist
that would have a devastating or even lethal effect on
anyone who interacted with it? Yeah, a video file, audio file,
computer program, something that comes out of the digital interface
and actually harms you. Well, it's not hard to see
(01:09):
how you could harm somebody indirectly with something like that.
One example would be a computer virus that takes down
a lot of systems or causes widespread economic damage. That's
been happening since the eighties, as we'll discuss. Yeah, certainly so.
Widespread economic damage means people lose their jobs, and statistically
we know that that will indirectly lead to some number
(01:30):
of deaths above the mean mortality rate. But I mean
something more direct, obviously. You know, I'm talking about the
cyborg ninja kind of stuff. But take away the cyborg ninja,
I'm not talking about robot assassins. Or leaks of personal
data, that's been another big one too, right. I've
seen accounts where people have said this individual is potentially
suicidal over the leakage of the images, video, or personal information. Sure,
(01:55):
that's the devastating effect of digital gossip. But couldn't a malicious
hacker injure or assassinate somebody just with a digital file
directly, a piece of computer code, a video? Yeah, I mean,
this is of course an increasingly important consideration, you know,
because we just look at all the things around us
(02:15):
that are becoming connected to the Internet that you know,
years ago, I would have thought, why would I need my,
uh let's say, my thermostat to be connected to the internet.
It seems crazy, and yet here I am in the future.
Especially during the cold months, I enjoy waking up, grabbing
my phone and adjusting the thermostat, warming up the house.
And at the same time, I'm thinking, is this a
(02:36):
little crazy that this electric, you know, gas-powered fire
in my home is now controlled by a device that
is connected to the internet and all the horrors of
the Internet. I end up just having to, like, you know,
push that out of my brain and just focus on
the fact that, oh, before I get out of bed,
I can make it a
(02:57):
little warmer. Now, fortunately, there are limits to what your
thermostat can do. Right, you're not worried about some crazy
kid on 4chan deciding that he wants to cook
you alive and turning your house thermostat up to five
hundred degrees. But the more we think about a smart house,
like there was some horrible sci-fi movie that came
out years and years ago, and it had a smart
(03:17):
house where the robot, you know, goes completely HAL
on everybody, and it had like a Terminator arm that
hangs from the ceiling and travels around the house. I
keep thinking back to that, the more interconnected
our homes become. And, you know, the whole
idea of your house becoming self-aware and killing
you is one thing. But yeah, just the idea that
(03:38):
all these things are connected, at least in a small way,
to everyone else in the world, it can be a
little much. This was explored to great effect in the
wonderful Stephen King movie Maximum Overdrive. I'm just kidding, not
such a great movie, but the premise is all our
machines turn against us, right, our consumer technology, from trucks
(04:00):
to household appliances, start trying to kill us. I think
in the movie it's aliens, right? I can't remember. In
the book, I mean the short story rather,
it was delightfully vague. And then of course Maximum
Overdrive the film is its own experience. But I guarantee
you there's got to be a script out there where
someone has taken Maximum Overdrive, or at least Trucks, the
(04:22):
original story, and upgraded it to the, you know,
so-called Internet of Things. Yeah, and the most obvious
analogy from the movie is going to be autonomous vehicles.
Autonomous vehicles, if they have the wrong security exploits,
if people can manipulate them in the wrong ways, it's
not hard at all to see how they can be deadly.
But I want to get even more insidious about devices
(04:44):
that we personally hold in our hands and use to
mediate our relationship with regular information like text and video
and, you know, ideas. I've got an archived Wired magazine
article entitled Hackers Assault Epilepsy Patients via Computer, and
this is from March two thousand eight. And what happened
(05:08):
in this incident is that somebody attacked an epilepsy
support message board hosted by a group called the Epilepsy Foundation.
And just to read a quote about what happened quote,
the incident, possibly the first computer attack to inflict physical
harm on the victims, began Saturday, March twenty two, when
attackers used a script to post hundreds of messages embedded
(05:32):
with flashing animated GIFs. The attackers turned to a more
effective tactic on Sunday, injecting JavaScript into some posts that
redirected users' browsers to a page with a more complex
image designed to trigger seizures in both photosensitive and pattern
sensitive epileptics. And then later in the article they note
(05:52):
and this is worth noting, epilepsy affects about fifty million
people worldwide, but only about three percent of those people
are photosensitive. Meaning, you've often heard, you know, the old
Pokemon story, that flashing lights or flashing images can cause
seizures in people with epilepsy. That is true for some
people with epilepsy, but not all. So the risk here
(06:13):
is not necessarily like a wide attack, where you just
end up hitting that small percentage of people who are affected.
But what if you targeted it at a specific individual?
And this has apparently happened. Now we have this story
from twenty sixteen where there's an American journalist named Kurt
Eichenwald, who was known publicly to have photosensitive epilepsy. And
(06:34):
during the election, so he's a political journalist and of
course being a political journalist you make enemies, and somebody
who did not like his political coverage sent him a
series of tweets with strobing light images. And allegedly
this caused a seizure. And so he is now a
witness in a criminal prosecution against these digital attackers who
(06:56):
attacked his physical body, and were able to cause a
physical injury with just information through an interface. It's interesting
that it took place on Twitter too, because I mean,
Twitter is known to be this place, like a
lot of the Internet, where people feel like they
can be just as nasty and awful as they possibly
(07:17):
can without any repercussions. And here we see a
situation where it ends up transcending merely the hurting of
feelings or psychological damage, becoming an actual physical attack. Yeah.
But while almost anybody can be psychologically harmed by information
received through an interface, it's really difficult in general to
(07:41):
physically harm somebody with information received through a standard you know,
digital media interface. It's really rare. Like, there is this
one specific exploit in the brains of three percent of
people or so who have epilepsy. That means that certain
types of light images projected on a screen can cause
physical injury, or it can trigger a seizure. Not everybody's epileptic, and
(08:04):
not all people with epilepsy have this condition, so
it's pretty rare. But this is one neurological vulnerability to
information-based weapons built right into some of our brains.
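To make that one built-in vulnerability concrete, here's a minimal defensive sketch in Python. It's purely illustrative, not anything from the Wired article: it assumes you already have per-frame average luminance values for a video, and it applies the commonly cited accessibility rule of thumb that more than about three luminance flashes per second is unsafe for photosensitive viewers.

```python
# Illustrative screen for seizure-risk strobing in uploaded video.
# Assumes frame_luminances is a list of per-frame average luminance
# values (0-255); both the input format and the 3-flashes-per-second
# threshold are assumptions for the sake of the sketch.

def count_flashes_per_second(frame_luminances, fps, threshold=32):
    """Count large luminance reversals (flash edges) per second."""
    flashes = 0
    direction = 0  # +1 brightening, -1 darkening, 0 no trend yet
    for prev, curr in zip(frame_luminances, frame_luminances[1:]):
        delta = curr - prev
        if abs(delta) >= threshold:
            new_direction = 1 if delta > 0 else -1
            if new_direction != direction:
                flashes += 1  # a reversal counts as one flash edge
                direction = new_direction
    seconds = len(frame_luminances) / fps
    return flashes / seconds if seconds else 0.0

def is_seizure_safe(frame_luminances, fps):
    # Rule of thumb from web accessibility guidance: keep it under ~3/sec.
    return count_flashes_per_second(frame_luminances, fps) <= 3.0
```

A real screening pipeline would also weigh the flashing area and red saturation, but the core idea is just counting large luminance reversals per second.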
Most of the time, for most people, the brain is
very secure, right? It's hard to cause direct injury
to somebody's body, or steal their innermost secrets, or do
(08:25):
anything like that with information interfaces alone. But today, we
want to talk about how that state of affairs is
very likely changing, and it may be changing very soon,
because we want to talk about the coming age of
neurosecurity. And the crazy thing here is that we're
not talking about something that may come to pass. We're
(08:46):
talking about, as you'll see as we discuss this further,
this is something that is definitely going to happen, that
needs to happen, an inevitable next step. Yeah, unless
basically life or technological progress on Earth stops right now.
This is not a singularity issue. This
is a very near-future concern, yes, and very,
(09:06):
very plausible based on things that we already have today.
So there are several different things that you could call
neurosecurity. One thing would be using neuroscience principles in
the general field of security. Right, protecting your borders with
fMRI brain scanners during border-stop interrogations
or something. Right, picking up, say, for instance, if you
could use this technology to pick up on, like, extreme
(09:29):
levels of nervousness that might need to be inspected with
additional questions, or if it was even possible to tell
that there was some sort of malicious intent. Or stocking
the ranks of your TSA agents with scanners.
I mean, like from the
movie Scanners. Psychic TSA. Okay, yes, it's
basically Scanner Cop, the sequel. Oh my God, you're right,
(09:52):
Scanner Cop. Can you imagine the faces they make while
you're standing in line? But would that make flying better,
because it makes it funnier, or worse, because it'd be even creepier?
Probably creepier, I'm guessing. Sorry. Well, that's an interesting subject,
but a subject for a different day. Today, we're talking
about the security of our biological information systems, essentially applying
(10:17):
computer cybersecurity principles to your brain and your nervous system. Now,
you might be asking, that sounds ridiculous. Why would you
ever talk about that? I mean, that's just such
a weird sci-fi scenario. Nothing like that's ever gonna happen,
right? Right. I mean, to bring up Scanners again, it
just makes me think of the first Scanners movie that
(10:40):
at the time I thought, ridiculous moment where the scanners
are interfacing with the computer with their brain, and that
threw me completely out of the movie. Because
I'm like, all right, I'm on board with
brain-to-brain psychic connections, but you're throwing me
off when I'm trying to imagine a brain-to-machine
connection that's just purely based on psychic power. It does
(11:02):
seem to violate the magic of the film, right? It
gets the mythology out of whack, because there's
a scene in the movie Scanners where one of the
scanners gets on a telephone and he calls into
a computer system and he reads the mind of the
computer system. Yeah, and he wasn't even making fax machine
noises with his mouth. That I would have been
on board with, but yeah, not the way Cronenberg decided
(11:24):
to display it. Michael Ironside could have sold those
fax machine noises with his mouth, but not the guy
they had playing the hero. So we're going to
talk about a particular study that I'll get to a
couple different times in this episode. But actually it might
be wrong to call it a study because it's really
more an attempt at definitions, right, trying to lay out
what the concept of neurosecurity would be and what
(11:48):
are some things we need to watch out for. And
so this was published in two thousand nine in the
journal Neurosurgical Focus. It's called Neurosecurity: Security and Privacy
for Neural Devices, from Tamara Denning, Yoky Matsuoka, and
Tadayoshi Kohno. So the authors of this paper note that
there are three primary goals in computer security. You've got confidentiality, integrity,
(12:15):
and availability. So confidentiality means what you think it does.
It means an attacker of your computer system should not
be able to exploit the device to learn or expose
private information. Standard example would be hacker steals your bank
account info, or your private emails or your private photos. Yeah,
(12:35):
these are essentially externalizations of my private thoughts, and I
don't want anyone to have access to either. Exactly. Now,
the next one was integrity. Integrity means that an attacker
should not be able to quote change device settings or
initiate unauthorized operations. In other words, the attacker should not
be able to use this device, whatever it is, computer,
(12:57):
cell phone, anything like that, should not be able to
use it for their own purposes or change what the
device does for the primary user. An example here might
be that a hacker could take over your computer to
turn it into a bot that's part of a botnet,
to launch a DDoS attack against
some website. Maybe they had a bad meal at Olive Garden.
(13:20):
They want to take down the Olive Garden homepage, so
they hijack your computer and make your computer one of
many computers that bombard Olive Garden with requests to
load the page. Okay, and obviously I would not want
that to happen in my brain, either to change the
settings on my brain and ultimately change my behavior, change
my motivations, like, even if it's done in a very
(13:42):
slapdash, awkward way. Like, you know, hands out of
my brain. Exactly. So the last one is availability. Availability
means that the attackers should not be able to destroy
or disable the device or do anything that would render
it unavailable to the user. Classic example, a hacker deletes all
your files or alters the computer's boot procedures so that
(14:04):
it won't load your operating system on startup and it
just becomes useless. Likewise, I don't want anyone to strategically
remove memories from my brain, to wipe my memories from
my brain, or to even temporarily deactivate certain cognitive centers
or networks in my brain. Yeah. Now, in these examples,
you're talking about sort of whole-brain functionality,
(14:27):
but there could be dire consequences for much lower-stakes questions.
Somebody might not necessarily be able to disable your entire brain.
But in a minute, we're going to talk about some
particular types of neurotechnology. And in many of these cases,
for example, just disabling your neuro technology could have devastating
consequences for you. They wouldn't have to be able to
(14:49):
turn off your brain. They might just be able to
turn off your neural implant at a time and place
that would make you very vulnerable or could hurt you.
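To tie the three goals together, here's a toy Python sketch, our own illustration rather than anything from the Denning paper, of how a neural device's command handler might enforce confidentiality, integrity, and availability. Every class name, command, and key-handling detail here is hypothetical.

```python
# Toy model of the three security goals applied to a hypothetical implant.
# Nothing here is a real device API; it only illustrates the checklist.
import hmac, hashlib

class NeuralImplantGateway:
    def __init__(self, clinician_key: bytes):
        self.clinician_key = clinician_key  # provisioned at the clinic

    def _authentic(self, command: bytes, tag: bytes) -> bool:
        # INTEGRITY: only commands signed with the clinician's key may
        # change settings or initiate operations.
        expected = hmac.new(self.clinician_key, command, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    def read_telemetry(self, command: bytes, tag: bytes) -> bytes:
        if not self._authentic(command, tag):
            raise PermissionError("unauthenticated read rejected")
        # CONFIDENTIALITY: neural data would be encrypted before it ever
        # leaves the device (stubbed out here).
        return b"<encrypted telemetry>"

    def emergency_mode(self) -> str:
        # AVAILABILITY: a local fail-safe keeps core function running even
        # if every wireless channel is jammed or hostile.
        return "running offline with safe default settings"
```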
I experienced something like this the other weekend. I had
to drive to a major phone service provider's brick
and mortar store that I'd never been to before, and
I had to do it without a functional phone. So
(15:11):
I ended up printing out the wrong
directions, MapQuest or, you know, whatever map program
I used. It's amazing how dependent we become. Yeah, so
in a way, it was like a part
of my brain was not functioning because the phone was
not functioning. You have offloaded part of your traditional capability.
Something maybe ten or fifteen years ago, you would have
(15:32):
probably had better internal mechanisms for locating a store you
needed to get to. And now you've said, well,
I don't have to worry about that anymore. That's in
this peripheral that I use to supplement my brain. But
if the peripheral breaks, you're messed up. Now. Yeah,
and that is technology that exists, you know, quite literally,
arm's distance away from the brain. But the
(15:55):
thing is we're seeing the technology creep increasingly closer to
the brain. And then what happens when that stuff goes
offline or becomes compromised. So to quote the authors of
the study I mentioned, they say, quote, we define neurosecurity
as the protection of confidentiality, integrity, and availability of
neural devices from malicious parties, with the goal of preserving
(16:17):
the safety of a person's neural mechanisms, neural computation, and
free will. Now we're going to look directly at some
of the neural technologies that might be vulnerable to security
concerns like this. But before we do that, I think
we should look more broadly at the idea of information security,
because if you're not all that familiar with the history
(16:40):
of the Internet, it might be kind of puzzling to you, like,
why is the Internet so horrible in terms of security?
We've got this global thing, what would you even
call it? Would you call it a technology? We've got
this global information civilization that is just terrible, just
terrible in terms of security. There is not
(17:03):
an overarching strategy to keep everything safe. I keep thinking
of it like the cat's cradle that you create with
a length of string and your fingers, and, you
know, you interweave your fingers and you create a
pattern. So the Internet, in this case, it's not
the string, it's not the fingers. It couldn't exist
without the string and the fingers. It's ultimately that shape.
(17:23):
You know. That's kind of a loose way I think
about it. Well, maybe we should take a quick break
and when we get back, we can look at how
the Internet ended up becoming so vulnerable to security concerns.
All right, we're back, So, yes, how did the Internet
end up becoming so vulnerable? Now, this is going
(17:47):
to take me on a trip down fear-and-paranoia memory
lane, and I'll try and keep this fun too. I
guess if you're a fan of, like, Halt and Catch
Fire, it's kind of fun to think of it in
those terms. But yeah, I mean,
on one hand, it's an easy question to ask because
we live in a day where we have ransomware attacks,
identity theft, doxxing, invasion of privacy. But on the other hand,
(18:10):
so many of us were born into this system of
the internet, or, you know, if you're like me,
you entered into it during college, and it's easy to
just assume that the systems that run the world and
the organizations that run the world have it figured out
to some extent. You know, you expect the bank to
be secure. You expect some security to be in place.
(18:33):
There are some rather significant security measures in place,
so you would assume that the virtual bank would
be much the same. Right, if you were to just
find out that there are human-sized holes in the
bank vault that keeps your money, that would be rather surprising.
It's not surprising at all to find out that there
are hacker-sized holes in the digital systems that protect
(18:55):
your private information. I mean, how many times now have
we heard the story about some online retailer, or maybe
even a physical retailer, that you did business with and they
have your credit card number? They got hacked, and now
your information belongs to somebody out there, and you have
to get a new credit card or something like that. Yeah,
and I think we can all relate to that kind
(19:17):
of anxiety. Yeah. Now, I mentioned earlier that so
many of us, a number of our listeners here,
were born into the Internet age. So I'm going to
try and put this, like the origins of
the Internet, in terms of something that maybe more of
us could understand. Maybe not the younger people,
but for a number of you out there, I bet
you can remember the day that your mom joined Facebook.
(19:39):
So you remember the realization that, oh crap, this
internet thing, this really is for everyone. It's that
horrible moment where something that you and your friends do
online that you think of as not real life comes
crashing into real life and you realize like, ohhh, this
is connected to the same universe where I live. Yeah, yeah,
(20:02):
I remember. There was a sharp contrast, certainly. Remember
what it felt like to be on MySpace? The MySpace
age was totally different than the early stages of Facebook,
which was completely different from what Facebook would become.
In terms of, you know, just earlier models, it felt
like, oh, I'm just surrounded by like-minded people who
share my same sort of attitude
(20:24):
on life and values about what's important. And granted,
there was already a certain amount of bubble construction there,
but yeah, we just had expectations about what this
technology was and who this technology was for. And the
same situation really applies to the Internet as a whole.
Its architects did not build it
(20:46):
from the ground up as a global system for the masses,
a cat's cradle woven around the fingers of so many computers. No,
they designed it as an online community for a few
dozen researchers. It was a research project. It was like, Hey,
we've got these computers at different universities and institutions. What
if we could link them together so they could trade information?
(21:07):
Would that be weird? I don't think, well, maybe some
people did, but it doesn't seem like, generally, they predicted
that this would become a hub of commerce and recreation
for everybody on the planet. Yeah, I mean, you
can't help but compare it to, say, empires, right?
Can you imagine you have, you know, Genghis Khan,
(21:27):
and he's sitting around, and they're saying, hey, Genghis,
what's up? And he says, oh, not much. I got
this idea, though. I'm going to call it the Mongol Empire. Huh.
And this is how it's gonna work. This is how
we're gonna incorporate trade, and this is how
we're gonna value different religions. And they're like, whoa, you
haven't even, none of this stuff is even conquered yet.
What are you talking about? Like the people who found
(21:49):
these major movements and organizations: how often is
the complete structure baked into the original design? Almost never.
Each step is improvised. So you've got to make decisions
about design. You make design decisions on the fly as
issues come up. And the Internet was sort of the
same way. Yeah, you gotta leave it to Kublai to
(22:11):
figure out the rest. So, there's a wonderful Washington
Post article that goes into this in depth.
It's titled A Flaw in the Design, and
I'll include a link to this on the landing page
for this episode at Stuff to Blow Your Mind dot com.
In the article, the author speaks with such Internet forefathers
as MIT scientist David D. Clark, David H. Crocker,
(22:35):
and Vinton G. Cerf. They point out that they thought
they could simply exclude untrustworthy people. Isn't that great? We'll
just keep the security risks off of the Internet. Yeah.
So when they thought about security, and they did think
a little bit about security, they thought about it mainly
in terms of protecting the system from outside intruders or
(22:56):
military threats. So there's a wonderful quote in this article
from Cerf, and he says, quote, we didn't focus on
how you could wreck this system intentionally. You could argue
with hindsight that we should have, but getting this thing
to work at all was nontrivial. They dealt with
problems as they arose, and the problems then were making
it work. And now, this is going to have some
(23:18):
very interesting parallels once we start talking about neurotechnology again.
They also point out that at the time there really
wasn't much of value in the Internet. The analogy
they make, again, is to the bank. People break into banks,
or attempt to break in, despite the security,
because there's money there, because there's something of value. Right,
you wouldn't break into a bank vault, or, you know,
(23:40):
risk prison time, for a bank vault that was full
of, what, transcripts of messages back and forth between academics. Yeah,
I mean, unless you're just in it for the pure
artistry of it. But that rarely seems to be the
case outside of like a Hollywood movie. But then, of course,
in the early days of computers, you did start actually
encountering security threats. Yeah. The big one that really changed
(24:01):
everything was the Morris worm attack. It crashed thousands of machines,
did millions of dollars in damage, and helped usher
in the multibillion-dollar computer security business. Bless the
maker and his water. Yeah, I mean, at this point
the party was over. Oh, by the way, interestingly enough,
the article also points out that the big idea behind
(24:22):
the Internet is at least partially attributed to American engineer
Paul Baran, who worked for the RAND Corporation at the time,
and he saw the need for a communication system that
would function after a nuclear strike on the United
States, that would help us with aid efforts, help us
preserve democratic governance, and even enable a counterstrike, all
(24:44):
in order to help, quote, the survivors of the holocaust
to shuck their ashes and reconstruct the economy swiftly. What? Yeah,
I mean, bringing this Cold War mentality to the Internet,
it's so crazy thinking about the way contextual frames completely
(25:06):
shift around the technologies that we create. Yeah. And the
crazy thing too is that every time there's a headline about, say,
meddling in elections with, you know, various
online initiatives, or hacking initiatives, or the recent ransomware attack,
it makes me realize, well, this is what
(25:27):
cyber war looks like. This is the shape
of global cyber warfare. And you look back, and here's
this guy dreaming of the Internet as this thing
outside of war, this thing that's just a communication system
that helps us rebound from assaults by various states. But
this is also creating the Internet in the context
of thinking about international warfare, but thinking about it in
(25:48):
terms of overt frontal warfare, missile bombardments and troop advancements
and all the traditional types of war people knew about,
not realizing that the Internet would enable a state of
constant covert war between great powers that would just be
(26:10):
constantly, secretly or semi-secretly, undermining one another. Yeah, and
there would be this kind of gray area about
how you respond. How do you react?
What are the rules of cyber warfare? And
people still haven't figured that out. So that Washington Post
article is a great exploration of Internet security history. But
the basic answer to our initial question is this, you know,
(26:32):
why did the Internet end up
becoming so vulnerable? It's because the Internet wasn't built to
be secure. All security concerns are an add-on, an
aftermarket addition, a patch. Security is always difficult, especially when
it wasn't baked into the design, and with
these systems that we're talking about, it almost never is.
We see the same situation occurring with some of
(26:54):
the gadgets and implants and proposed neural technologies we're going
to discuss, because the primary goal ends up being,
what, to aid the patient, to somehow achieve the
goal of the device or the technology. Right, if you're
designing neurotechnology to help regain some lost functionality in somebody
with a brain injury or a body injury, or to
(27:16):
cure some kind of neurodegenerative disease, or at least offset
its negative effects, security, I mean, that's just such a
far-down-the-road concern. You're worried about fixing
people's problems and helping their lives. That's what you're worried about.
And it's the same thing you talked about with the
Internet earlier, that you know, people were just wondering if
(27:37):
they could make it work. Security is so far down
the list of concerns. Yeah, again, security becomes this add-on.
It's this thing that you end up implementing
or worrying about once the threat becomes more apparent. Oh yeah, security.
Not in the Kool-Aid Man sense of oh yeah,
but like the, oh yeah, there's a
(27:59):
man-sized hole in the wall, we need to do
something about this. So I want to go back to
that paper I mentioned earlier in Neurosurgical Focus from two
thousand nine, where they try to lay out a framework
for approaching the topic of neuro security. And so the
authors make the point that neurotechnology is becoming more effective
and one of the things that we can draw out
(28:21):
from this is that as it becomes more effective, it's
going to become more useful, And as it becomes more useful,
it's going to become more widespread. And is it becomes
more widespread, it will be more fully integrated into our
bodies and our lives. And so there are a lot
of current and potential uses of neurotechnology. One thing would
(28:42):
be treating brain disorders. Another thing might be making paralyzed
limbs usable again, or allowing users to control robotic limbs
with thoughts alone. One thing might be remote controlling robots
with thoughts. That's a fun one. Another one might be
enhancing human cognitive capacity. So up until the present,
(29:03):
most research into the safety of neurotechnology has focused on
making sure the device itself doesn't hurt you when it's
functioning the way it's supposed to. Right, safety concerns are
about making sure that the intended use of the neuro
technology is safe. And this makes sense because back in
two thousand nine, at the time this paper was published,
(29:24):
most of these devices were number one, contained in lab environments,
you know, they're just not getting out into the wild
very much. And number two, were self-contained systems, meaning
that they had very limited interaction with the other
information systems in the outside world. Back when you had
an Apple II and no Internet connection, you're probably not
(29:46):
going to get a virus, right? Unless you're one of
the very few unlucky people to get handed an infected
floppy disk and you stick it into your disk drive.
Of course, once you start connecting your computers to the Internet,
your security vulnerability goes way up. And here's a
cybersecurity mantra I'd posit for you: the more your device plays outdoors,
(30:07):
the more vulnerable it is. Yeah. And this means that
as a device gets connected to the Internet, interacts with
a larger number of devices, adds wireless capabilities, all
those things, there are more ways for it to be
compromised by malicious adversaries. And as devices become more useful
and more widely adopted, they tend to play outdoors more
(30:30):
and more. And so, what the authors are saying
is that if we don't design robust security features into
them from the get-go, we could end up with
neurotech that works like the Internet we just talked about,
where it's an ad hoc system of security fixes,
this constant race between security updates and malicious hackers,
(30:50):
and every time the bad guys pull ahead, they have
the ability to bring a little taste of doom with them,
except this time the target isn't your computer or even
your bank account or your Facebook account. It's your nervous system.
And people probably will not find that acceptable, the victims
of such attacks, that is. Because inevitably the trend we
(31:10):
see is that there will be somebody who decides that
such an attack on an individual is a good idea,
for whatever reasons they have, or just out of
the sort of impersonal nature of online victimization. Oh yeah,
I mean, of course there are going to be motives. One
of the things the authors point out is just straightforward cruelty.
Think of the cruelty, both random and targeted, that we've
(31:33):
talked about of those epilepsy strobe attacks. I mean, that's
just sick. But apparently people think that's okay to do.
There are people out there who are willing to inflict
injury on others because they think they can get away
with it. But then think about the possible financial and
blackmail incentives that would be open to someone who compromised
(31:54):
your brain itself. And then think about malicious interference that
is self-directed. One example of this might be,
in a minute, we're going to talk about the idea
of deep brain stimulation, or DBS. But there's a possibility,
(32:15):
for example, of neurotech users hacking their own devices
in an act of harmful self-medication. So in the
same way that you might abuse a prescription for painkillers,
you could potentially abuse your neurotechnology, using it in
a way that it's not intended, that could be harmful
to you in the long run but feels good in
the present. Well, or to frame that another way, you
could have a situation where people are
(32:36):
optimizing their technology in a way that the Man does
not approve of. Yeah, you could have that, or you
could have, one example is overclocking computers. Right, people want
to overclock their CPUs. Sometimes you see people messing
around with their hardware, and it means, you know,
I know I can get more power out of my
CPU if I make some adjustments to what
(32:58):
it will allow itself to do. But that comes along
with risks, right? You could risk overheating your computer or
something like that. What if people decide they want to do
the same thing with their brains? Yeah. Then I
mean basically you could say, I'm going to trade the
potential to have more power in my brain with the
(33:19):
potential for some risk. Yeah. I mean people do that
every day when they look at various pharmaceutical ways
to potentially augment their brain for the completion of a
task or some sort of creative endeavor. And it reminds me,
I believe it was Jimmy Page of Led Zeppelin who
has in some interviews looked back on past drug
(33:40):
use and said, well, yeah, that was probably too much,
but look at what I got done. Look at
the work that came out during that time. I'm not
sure if this is something he said or this was
commentary on his life, but I mean, you could see
someone making the argument with technology, to say, yeah, I
overclocked my neural augmentations, but look what I got done. Right,
(34:05):
sure, did a lot of homework. So you could say,
and I think the authors of this two thousand nine
article say that we're at a similar stage in the
evolution of neural engineering as we were at the inception
of the Internet. Neurosecurity is not really much of
an issue today, but it could be a huge, massively
(34:25):
important concern in the near future. And the consequence of
a neuro security breach can be a lot worse than
a breach of Internet security. Instead of protecting the software
on somebody's computer, you're protecting a human's ability to think
and enjoy good health. Right. Yeah, because
that's the thing, right? When all these bad things happen,
if you suffer even an identity theft, you can always say, well, hey,
(34:48):
at least nobody was hurt. At least, you know,
nobody physically attacked me. But here we see that line
somewhat erased. So, a few current trends in neurotechnology
that are definitely going to up the stakes and increase
the risks. One of these things is wireless connectivity. The
(35:08):
authors in this paper recognize that security vulnerabilities do
exist in wireless implanted medical devices, and in past research
they demonstrated that a hacker could certainly compromise the security
and privacy of such a device, such as a two
thousand three model implanted cardiac defibrillator. They found that,
using homemade, low-cost equipment, you could wirelessly change
(35:32):
a patient's therapies, disable those therapies, or induce a
life-threatening heart rhythm. And this was a two thousand
nine publication, but things have moved on a bit since
then. Even at the time, though, they said, look, the
current risk is very low, but such threats have to
be taken into account with future designs.
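What would taking such threats into account with future designs actually look like? Here's a minimal sketch of one standard countermeasure, offered as an assumption-laden illustration rather than any real device's protocol: each wireless command carries a message authentication code plus a monotonically increasing counter, so a nearby attacker can neither forge commands nor replay recorded ones.

```python
# Hypothetical authenticated command channel for an implanted device.
# The key handling and packet format are assumptions for illustration.
import hmac, hashlib, struct

SECRET_KEY = b"programmer-device shared secret"  # provisioned at the clinic

def sign_command(counter: int, command: bytes) -> bytes:
    """Clinician programmer side: counter + command + HMAC-SHA256 tag."""
    msg = struct.pack(">Q", counter) + command
    return msg + hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()

class ImplantRadio:
    def __init__(self):
        self.last_counter = 0  # highest counter value accepted so far

    def accept(self, packet: bytes) -> bytes:
        msg, tag = packet[:-32], packet[-32:]
        # Reject forged or corrupted packets (integrity).
        if not hmac.compare_digest(
                hmac.new(SECRET_KEY, msg, hashlib.sha256).digest(), tag):
            raise PermissionError("bad MAC: forged or corrupted command")
        # Reject replays of previously recorded packets (freshness).
        (counter,) = struct.unpack(">Q", msg[:8])
        if counter <= self.last_counter:
            raise PermissionError("stale counter: replayed command")
        self.last_counter = counter
        return msg[8:]  # the authenticated command body
```

A real device would layer encryption and careful key provisioning on top, but even this much defeats simple forgery and replay.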
(35:54):
So there's increasing wireless connectivity of all types of implanted devices, including neural peripherals. But the other thing
we need to take into account for increasing security risks
is the increasing complexity of these devices. The more a
device does, the more there is to worry about from
a security standpoint. Yeah, and this is an area where,
to make sense of this, I keep thinking about
(36:16):
a chessboard. In a chess game, as many of you
might be aware, chess has a set number of pieces,
a set playing field, and a set rule system.
And people have been playing with these limitations for a
long time, and in doing so they've kind of figured out,
you know, all the basic moves that can take place
(36:36):
early in the game. The openings. Yeah,
and it takes a while to get to that
point where you're actually in fresh territory, where you're playing
a game that has not been played before. So chess
is not like a modern board game, where the board
game comes out and then they might have a new rule
supplement or a new expansion, and each time a
(36:57):
new expansion comes out, whoops, they broke the game a
little bit, or this rule clashes with this
older rule, and we need somebody to weigh in
and tell us, you know, how we actually play
the game now. Chess doesn't change. Chess remains the same,
and our world of interconnected devices does not
(37:18):
stay the same. It changes. It is
a modern board game that gets a larger playing
field, gets more pieces and more complex rules pretty much every day. Now,
I think we should zero in on some specific examples
of neurotechnologies. One would be robotic limbs. Now, I think
(37:38):
this is a great example of something that has made
enormous strides just in the past couple of decades. Robotic
limbs are not the future, this is the present. Multiple
labs and inter-organizational projects have already created robotic limbs
that can be controlled directly by brain activity, just like
the muscles in a natural limb, and these things are
(38:00):
getting better all the time. That's right. Now, to go back
in time just a little bit, back in two thousand thirteen,
Bertolt Meyer wrote an article for Wired titled Ethical Questions
Are Looming for Prosthetics. Now, Meyer had a unique insight
in this article because he wears a prosthetic arm himself,
and he's tried out many different models over the years,
(38:21):
so, you know, he's an insider when it
comes to prosthetics and the use of high-tech prosthetics.
So at the time he was using an i-limb,
which connected to his iPhone, which of course was connected
to the Internet, and he wrote, technically, a part of
my body has become hackable. And he pointed
to concerns by crime futurist Marc Goodman, also a Wired writer,
(38:42):
and Marc Goodman had previously covered the fact
that hackers had developed a Bluetooth device that can cause
portable insulin pumps used by certain diabetics to give their
wearer a lethal dose. Oh yeah, this is another
one of the vulnerabilities of hackable implanted devices that don't
even necessarily connect to the nervous system or the brain right.
(39:03):
And if anyone out there is a fan of the
TV show Homeland, there's actually a plot involving a
vice presidential assassination attempt in the show utilizing just
such a strategy. I've never watched that show. Is it
good? I only watched the first two seasons,
but I enjoyed it. So Meyer argues that we
have to recognize and address such hacking sensitivities before the
(39:26):
technology is widely adopted and hacking becomes a full-fledged threat,
which is exactly what we've been saying
over and over again. So Meyer's on the same page as the
authors of the paper we were talking about earlier.
They're saying the main thing is, we've got to get
ahead of it. We've got to start thinking about neurosecurity
before these threats really become an issue. Now, there are
(39:46):
plenty of labs that have been working on robotic limbs,
thought controlled robotic limbs. One more example I wanted to
mention there was a good article in the New York
Times in May about the Johns Hopkins University Applied Physics
Lab where they've got this robotic arm. It's got twenty
six joints and it can curl up to forty five pounds.
(40:06):
Is that more than my biological arm can curl? I wonder,
could I get into an arm wrestling contest with one of these
robot arms? I don't know. I feel like we're getting
into the weight-of-human-hair scenarios. Well, anyway, this
thing is controlled entirely by the brain, and so it
is just that you connect this to what your natural
(40:27):
neural impulses would be, and you don't have
to operate any external controls or machinery. You just control
it with your brain. There was also a good article
I saw in Wired from last year, I think, about
President Obama just freaking out when he was watching somebody
control a prosthetic arm with his brain. He, like,
(40:48):
couldn't contain his glee. Yeah, I mean, it's amazing.
And that's the thing, the technology has so
many wonderful applications, and all of these additional
threats kind of come second to that, at least when
you're focusing on the wonder. Yeah. And so there
are a couple ways you could say that neural devices
would come into limb control. So one of them would
(41:12):
be restoring the use of a disabled limb, Like if
you have some kind of neurological damage or disease that
means you still have an arm or a leg, but
that you can't control it with your brain. One thing
a neural device could do is give that control back
to you. Another thing would be that you've lost the
arm or the limb and that you have a robotic
(41:32):
replacement that you can control with your brain. Yeah, these
are two possibilities that come up time
and time again in, you know, Wired magazine articles and
other cool, you know, forward-facing technological publications.
Now, we've mentioned a couple of arms, but there are
prosthetic legs too, right? Yes, I was looking around and I
came across Blatchford's Linx prosthetic, and this communicates from knee
(41:57):
to ankle four times a second. So it's a system
in which the foot and knee of the prosthetic limb
work together to predict how its wearer is
going to move and respond to the position. And this
too features a Bluetooth connection to a smartphone to help
manage this interaction to, like, adjust settings and
(42:18):
things like that. Yeah, so I mean, I guess that
doesn't rely on the smartphone CPU to do its computation.
I guess that would be difficult. My understanding is
that this was about, like, tweaking performance. Okay, but
that's the thing you can see with a
lot of these technologies. Perhaps they start with using a
wireless connection to tweak performance, but then it becomes more, right?
(42:39):
Then it becomes about downloading new firmware. Then it becomes
about actually using the computational power of the device, or
even the cloud, to control the prosthetic. All right, so
maybe it's time to think about what would be the
security concerns of a robotic limb or a neurotechnologically enabled limb.
(43:00):
One thing I've got to think about is the concept
of a ransomware attack. So we've recently seen ransomware attacks
all over the world, right? I think they're now saying
that the North Korean government might be behind these ransomware
attacks that have hit, like, the British NHS and all
these other places. As of this recording, I believe there
is some speculation that that might be the case. But
(43:22):
I was reading an expert who was saying, well,
we're still looking at it. So, okay. But the basic
concept of a ransomware attack is, you know, I've seen this
on relatives' computers before, where you boot up. Oh yeah,
I mean, this is a classic type of attack.
You boot up your computer and there's a message that says,
(43:43):
like, from Microsoft Antivirus Protection or something, your
system is not secure. You must pay to renew your
antivirus license in order to boot up your system,
and they ask for your credit card number, and so
they're holding your technology hostage. In that case, they're
pretending to be somebody legitimate, but they could just come
(44:04):
right out and say, look, I've got all your files.
I'm not going to let you into your phone unless
you give me a hundred bucks. And that was basically
how this recent WannaCry ransomware attack worked. Right.
But imagine if this was applied to neurotechnology that
re-enabled you to move your limbs. So let's say you're
out hiking with your amazing robotic legs. You've lost control
(44:28):
of your legs, or you lost your legs in an accident.
You've got robotic legs, and you're out walking around in
the mountains, and suddenly they lock in place and refuse
to move. And then you get a text message demanding
a ransom payment of five hundred dollars' worth of bitcoin,
or your attacker will not unlock your legs. What do
you do? You've got to at least take a chance
that they're going to make good on the promise.
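One availability-minded answer to that scenario, sketched below purely as an illustration (none of these class or parameter names come from a real prosthetic), is to keep a hardware-level safe mode in the limb's local controller that no remote command can veto: wireless access can tune the gait, but it has no vocabulary at all for locking the legs.

```python
# Hypothetical low-level controller for a prosthetic leg. Design rule:
# remote commands may tune behavior, but a local, offline safe mode
# always remains reachable through a physical control on the limb.

class LegController:
    SAFE_GAIT = {"speed": "slow", "knee_stiffness": "medium"}
    REMOTELY_TUNABLE = {"speed", "knee_stiffness"}  # note: no "lock" command

    def __init__(self):
        self.params = dict(self.SAFE_GAIT)

    def on_remote_command(self, cmd: dict) -> None:
        # Ignore any key outside the tunable whitelist, so a hijacked
        # phone app or cloud service cannot disable the limb outright.
        self.params.update(
            {k: v for k, v in cmd.items() if k in self.REMOTELY_TUNABLE}
        )

    def on_physical_override(self) -> None:
        # A button on the limb restores the offline safe gait, with no
        # network round-trip and no authentication dependency.
        self.params = dict(self.SAFE_GAIT)
```

The ransom demand could still matter, since the attacker may have skimmed data or degraded performance, but the wearer can always walk home.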
(44:49):
Yeah, I mean, it's amazing how sci-fi things can
get so quickly. I mean, that really does not
seem all that far-fetched to me. Another thing might be,
how about a confidentiality attack? So that last type,
(45:12):
remember the three categories of neurosecurity
we mentioned earlier, you want to protect the availability, the confidentiality,
and the integrity. That ransom scenario would be an availability attack, right?
They say you will not be able to use your
device unless you pay up. You could have a confidentiality attack.
That would be a skimming attack on your robotic arms.
Say an attacker gains control of your robotic arms, then
uses motion data to infer what you're typing whenever you
use your fingers to type a message. Or how about
(45:34):
an integrity attack? The attacker literally makes you punch or
choke yourself, or punch or choke others, by taking control
of your robotic hands. Well, this reminds me of a
video clip that I actually included in the notes here.
I'm not sure if you've seen this, but it was
a gentleman on, I believe, a French
news program showing off a prosthetic arm, and he activates it,
(45:55):
and it begins to malfunction, and it just kind
of starts pounding on the table and then pounding on
the man's thigh, and he can't get it to turn off.
So you can easily imagine, even something that wouldn't have
to be as precise as making
you go karate-chop crazy on people around you. What
if you just started making your arm, you know,
(46:15):
go into sort of spasms? That could be bad enough,
especially, I mean, if you were driving a car at
the time, or if you were giving a
public presentation. There are any number of scenarios where just
the utter malfunction of the device would be bad enough. Now,
I know a lot of you out there are probably
thinking like, well, I wouldn't get a robotic
(46:36):
limb like that if there are risks like this. But
you're probably not putting yourself really in the frame of
mind that somebody who has lost control of a
limb or lost a limb would experience. I mean, imagine
not having that ability and having the technological capability to
regain it. This is not something that I think people
(46:57):
can really be faulted for wanting. No, I mean,
that's the thing. If
it improves the technology, makes the technology better able
to, you know, let an individual cope with
a lost limb, and that technology becomes
the standard, of course people are going to adopt it. Yeah,
this is the thing people are gonna want for good reason.
(47:19):
And it's definitely, especially at the beginning, going to seem
like the risks are very low, and hopefully they will be. Yeah.
So, I do want to say here, you know,
when it comes to hackable prosthetic limbs, it isn't all
Black Mirror paranoia. There is a lighter side as well,
and this is where the LEGO prosthetic arm comes into play.
(47:40):
Designed by Chicago-based Colombian designer Carlos Arturo Torres,
it's a modular system that allows children to customize their
own prosthetics. So this is a lesson in engineering and programming,
and a way to help them overcome the social isolation
they might feel over their condition. So I just found
it to be an interesting little side note. Well, yeah,
(48:01):
we already mentioned in a perhaps dangerous or detrimental context
the idea of hacking your own neuro pros theses, which
that that could certainly be the case. But I can
also see hacking your own neuro pros thesis to be
something that's very like fun and adventurous and exciting. I
guess it would just depend on what the risks and
and the dangers were. Oh man, what have we reached
(48:22):
the point to just have a little fun with it?
What have they a hacked? Either you hack it or
you know, someone outside hacks your prosthetic arm and it
makes a hand puppet and then it's able to talk
specifically in the void. Who is the famous hand puppet here?
I don't know. It would do kind of like a
kind of like a you know, a cartoony Spanish accent
to the to the talking hand. You know what I'm talking.
(48:45):
I don't know what you're talking about, Senior Wins. Is
that I don't know? No, I have no idea anyway,
I can't help but imagine like a hacked robotic arm
suddenly just becoming this little talking fifth It's like that
it starts screaming at you. So add that to the
list of near future concerns. Well, I can think of
a good one is that you'd hack your own arm
(49:07):
to just make it, at random intervals throughout the day
throw up the rock horns. Then you'd have no warning
when it was going to happen. You just say, like,
fair warning to all my friends and family. Every now
and then, I'm going to rock out. You've got to
get the horns or every now and then I may
just flip you off. It's not because I don't like you,
it's just I've been hacked. Sorry. Uh, it becomes a
(49:29):
great excuse. So yeah, there are multiple sides definitely to
having systems that are flexible and can be manipulated. I mean,
you could see that as a security risk, which it
probably is, but you can also see it as an
opportunity for people to express their themselves and and try
new things with their own bodies. Indeed, well, in that note,
(49:49):
you know, we should take a quick break and when
we come back, we will get into some some some
other possible areas where neuro technology could become hacked. Alright,
we're back, okay, Robert. One more type of neurotechnology that
is highlighted in this original paper on neuro security is
(50:13):
the concept of deep brain stimulation. Yes, now we I
think we've talked about this some of the podcast before,
but deep brain stimulation is basically putting electrodes deep inside
the brain to stimulate certain regions with electrical impulses. It's
uh in the basic idea is fairly simple. Of course,
the implementation is very complex. Yeah. We get into this
(50:34):
in our brain to brain communication episode, which I'll include
a link to on the landing page for this episode
of stuffitably your Mind dot Com. But yeah, essentially you
have sort of the external version, it's kort of the
god helmet scenario right where you're doing, you know, electromagnetic
cranial stimulation, and then the idea of of actually putting
the the the the the the devices inside the head
(50:56):
actually having implants in the brain that are manipulating cognitive Yeah,
and there's all kinds of uses of putting electrodes in
the brain. Deep brain stimulation specifically is is putting them
deep down in there to help with multiple types of
chronic medical conditions. Specifically, it's been effective at dealing with
Parkinson's disease and with tremors what you might see called
(51:18):
essential tremor, but also contains uses that have been tried out,
such as for treating major depression or for chronic pain.
And so obviously, the better we get at correcting problems
that begin in the brain with with electrical impulses, that
that is a great thing for the people who suffer
(51:39):
from these conditions. But when you're putting the capability to
send electrical impulses deep within the brain in the hands
of a piece of technology, you want to make really
sure that that technology is doing what it's supposed to do.
As you can guess, there could be a lot of
problems with unwanted electrical mulation of the brain. And one
(52:02):
thing I just want to quote a paragraph from the
two thousand nine paper we mentioned quote, the hacker strategy
does not need to be too sophisticated if he or
she only wants to cause harm. It is possible to
cause cell death or the formation of meaningless neural pathways
by bombarding the brain with random signals. Alternately, a hacker
(52:24):
might wirelessly prevent the device from operating as intended. We
must also ensure that the deep brain stimulators protect the
feelings and emotions of patients from external observation. So you
can see there are a lot of avenues here. Also,
deep brain stimulation was one of the things we had
in mind when we talked about the idea of of
(52:46):
illicit or dangerous self use, like if you are self
administering patterns of electrical impulses that may feel pleasurable to
you at the moment, but could be harmful to you
in the long run. And of course this this is
another another area we can imagine it being hacked for
you know, on both sides, someone saying, all right, I
know this device was just about you know, treating a disorder,
(53:06):
but I'm I'm going to tinker with it, and now
it gives me orgasms when I push a button. But
then the reverse of that, of course, is someone actually
monkeying with your cognitive performance. Yeah, uh yeah. And you
can only think as things like this become more complex,
there will be more and more opportunities for dangerous exploits
(53:27):
as well. You know, basically, the possibility for dangerous exploits
seems to track along with the potential for helping the brain.
Right as we as we have more power to heal,
we have more power to destroy. You see that with
any technology, right You see those parallel tracks of the
beneficial applications for humanity and then the negative, self destructive ones. Totally,
(53:48):
it's a it's a nuclear power at the neural level. Yeah,
it's it's chemistry. You know, the same advancements that gave
us all the beneficial applications of chemistry also produced chemical weapons. So,
uh so I want to look at one more potential
neurotechnology that could have great rewards and great risks. And
so this one is going to be cognitive augmentation. So
(54:11):
one commonly discussed example is memory augmentation. This comes with
its own benefits and risks. The risks are fairly obvious.
If you have the capability to augment memory, you may
also have the capability to degrade a race or alter
existing memories, or to create false memories or impressions. Uh
and and alter the entire integrity of a person's memory system.
(54:35):
But I got another idea, what about computational upgrades? Assuming
such a thing as ever possible. We we don't really
know if it is, but we'll assume for now that
it could be possible to upgrade the brain's ability to say,
do math or you or computational reasoning. Okay, just an
implant that boosts some sort of cognitive function in your
(54:55):
brain at your point being like either you're you're you're
handling of mathematics or your memory, etcetera. Yeah, So, Robert,
I got a scenario for you. Somebody offers you a
free surgery that they say has a chance of increasing
your i Q by twenty five points. Would you take it?
(55:16):
Mm hmmm, I don't know that. That's pretty good, Uh,
pretty good odds of success. Yeah, you don't have to answer. Now.
I got one to make it a little more obvious
if you if you're a person out there who's listening,
and you'd say, hell, no, you know, I'm not messing
around with my brain. I like my brain the way
it is. I'm not going to introduce all these risks.
Then consider this, What if everybody else around you has
(55:39):
taken this. Yeah, so all of your friends, your co workers,
everybody in your professional circle, all of your professional rivals,
they all take the upgrade. This is
a big issue in transhumanist thought. You know, who
gets to be transhuman? And what does it mean
to say no to some sort of transhuman experience,
such as, you know, a surgical implant that
(56:03):
boosts your cognitive ability? Well, I'm just talking about voluntary willingness.
So of course the question of who this is available
to is a big question. But it's a different question,
I'm saying. Let's just assume we're in a crazy
scenario where it's freely available to everyone, and the only
question is do you want it? Will you voluntarily take it?
I'm not sure. You're in a situation where, if
(56:25):
you're the first person, you'd probably say, like, no,
I don't think I want that, it's too weird. If
you're the last person, you would probably be desperate to
catch up. Right? Would you voluntarily choose to remain at
a cognitive deficit to everybody else around you who has
upgraded themselves?
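To make that first-person, last-person asymmetry concrete, here's a minimal Python sketch of the dilemma as a toy coordination game; every payoff number here is invented purely for illustration, not taken from any study.

    # Toy model: the payoff of a risky cognitive upgrade depends on how many
    # of your peers have already taken it. All values are made up.
    SURGERY_RISK_COST = 2.0   # expected downside of the procedure itself
    PARITY_VALUE = 10.0       # value of staying cognitively even with peers

    def payoff(adopt: bool, peer_adoption_rate: float) -> float:
        """Payoff for one person, given the fraction of peers who upgraded."""
        if adopt:
            # You pay the risk cost, but you never fall behind anyone.
            return PARITY_VALUE - SURGERY_RISK_COST
        # You avoid the risk, but lose ground to every upgraded peer.
        return PARITY_VALUE * (1.0 - peer_adoption_rate)

    for rate in (0.0, 0.25, 0.5, 0.75, 1.0):
        best = "adopt" if payoff(True, rate) > payoff(False, rate) else "hold out"
        print(f"peers upgraded: {rate:4.0%} -> best response: {best}")

With these toy numbers, holding out only wins while fewer than about a fifth of your peers have upgraded; past that point, refusing costs more than the surgery risk, and adoption snowballs.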
I mean, that's the thing. People are going to take the risk. People are going to be hungry
(56:47):
enough to take the risk. Some people are going to
be comfortable enough not to, but for how long? Yeah,
this is where we get into a scenario of something
that I would call, and maybe this isn't the best term
for it, but I'm going to try this out. The
term is irresistible availability, and so I'm going to posit
that brain computer interfaces and certain types of neural augmentation,
(57:10):
cognitive augmentation, if they're possible, they are going to fall
into this category of irresistible availability. So I would say,
you know, consumer technology that looks scary at first tends
to go through several phases. Of course, you've got the
lab phase, right, You've got the alpha and beta phase.
It's fairly contained, constrained. It's testing with people who are
(57:32):
in on the game, basically, and then
you've got a release, and you've got
early adopters. These are people who are technologically adventurous and
they start using this new thing. They tend to like
to show off its advantages. They're more willing to accept
risks that, you know, haven't been worked out yet.
They're willing to get along with the kinks that haven't
been solved. Then the intermediate adopters wade in, and at
(57:56):
some point a new technology that originally seemed scary, weird,
and unnecessary reaches a tipping point of convenience advantage and
widespread adoption. And I would say there's definitely a social
element to this. It's not just the true, you know,
financial or time convenience it provides, but it's the fact
(58:18):
that everybody else is doing it. And at some point
it goes from something that I don't need and that
scares me to something I couldn't imagine living without.
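That tipping point is essentially the S-curve from classic diffusion-of-innovations models. As a rough illustration only (not data about any real technology), here's a generic logistic adoption curve in Python; the midpoint and steepness values are arbitrary assumptions.

    import math

    def adoption(t: float, midpoint: float = 10.0, steepness: float = 0.8) -> float:
        """Logistic S-curve: fraction of a population that has adopted by time t.
        Slow start (early adopters), steep middle (the tipping point), saturation."""
        return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

    for year in range(0, 21, 4):
        bar = "#" * int(40 * adoption(year))
        print(f"year {year:2d}  {adoption(year):6.1%}  {bar}")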
And you can see this in many contexts. You
think back to cell phones. How cell phones went from
a, like, weird and unnecessary extravagance. Yeah, like
(58:39):
characters in movies had cell phones, especially in their car.
Remember when? I still enjoy watching, you know, eighties
films where there's a super villain of some sort in
a crappy B movie. Of course they have that big,
bulky car phone. You're like, oh, imagine a world in which
someone makes a phone call from their car. Do you
remember when paying for something online with a credit card
(59:01):
was this really weird, scary, and unnecessary thing? I specifically
remember thinking, like, why would anybody ever use a
credit card to pay for something on the internet? That's insane. Yeah,
that's what you do. You call an eight hundred
number and use your credit card that way. And then think about
maybe mobile banking and transactions, ride sharing apps like Uber
(59:24):
and Lyft. You just think about this progression from scary
and unnecessary to fundamental. It's the progress of irresistible availability.
And I very much think that neurotechnology could easily go
in the same direction, because as the advantages
become more clear and the risks sort of get blurry
(59:44):
and go out of focus, because so many people
are using it, it just starts to look more
and more like something that you can't go without. And
then once you've tried it, you're in the pool. Yeah.
I mean, I just keep thinking back to flying, like,
if flying in an airliner makes sense, then
everything makes sense. Yeah. You're clearly defying the will of
God by getting in this machine and ascending like a bird. Um.
(01:00:08):
So yeah, everything else is on the table too. Yeah. Man,
And they don't even try to make it pleasant anymore,
and people still can't stop doing it. Yeah, like they
don't have to sugarcoat it. Yeah, you're in a flying
death machine. I'm on board. Well, actually, to speak of
death machines, of course, the classic comparison here is the car,
the automobile, which is far more potentially deadly than just
(01:00:29):
flying on an airline. Yeah. Imagine cars were new and
nobody drove them, and they were just a brand new invention,
just now being debuted. And they told you, Okay, on average,
about thirty three thousand people a year are going to
die in these machines in the United States alone. Do
you want one? Yeah, you would say, I
(01:00:50):
don't know. That sounds kind of dangerous. But the thing
is we were born into this world. We're born into
the world of the automobile, and so you just take
it for granted. Yeah, this
is the roll of the dice we take every day.
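For a rough sense of that daily dice roll, here's the back-of-the-envelope arithmetic implied by the figure above, in Python; the population number and the sixty-year exposure window are assumptions added for illustration.

    # Back-of-the-envelope odds implied by the ~33,000 deaths/year figure.
    deaths_per_year = 33_000
    us_population = 320_000_000   # rough mid-2010s figure, an assumption

    annual_risk = deaths_per_year / us_population
    print(f"annual risk: about 1 in {round(1 / annual_risk):,}")      # ~1 in 9,700

    # Compounded over decades of exposure, the odds get much worse.
    years_of_exposure = 60        # assumed driving lifetime
    lifetime_risk = 1 - (1 - annual_risk) ** years_of_exposure
    print(f"lifetime risk: about 1 in {round(1 / lifetime_risk):,}")  # ~1 in 160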
So of course it's normal. The convenience creeps in, the
widespread adoption makes it look normal and okay, and so
(01:01:11):
it's irresistible availability. It's just ubiquitous and you can't get
around it. Yeah, and even things that are not available
to everyone just become increasingly normal. Like, I keep thinking
back to, uh, Time magazine headlines about test
tube babies back in the seventies. Oh yeah, the, like,
original stigma about IVF. Yeah, and of course that has
(01:01:35):
become... It just became increasingly normal, just increasingly everyday,
like, today it's just another reproductive option that's on the table.
And I mean, I think that was also influenced then
by social stigma and certainly, like, misogyny, certain
ideas about, you know, people trying to control what
women's bodies are for. But yes, there is... Yeah, just
(01:01:58):
the technological aspect alone certainly has become more accepted. Yeah,
so it seems undeniable that we'll see the same thing
occur with these various neural implants. Yeah,
and this stuff may be coming a lot sooner than we think.
So we've talked about how there's already deep brain
stimulation and robotic limbs. These are already in development. They
(01:02:22):
already, in some cases, work pretty well. It's just a
question of them being deployed more in the wild and
becoming more widely available. But the question of cognitive augmentation,
that's still more of a future concern. We haven't really
discovered any strong entryways into that arena of technology yet,
but we could be closing that gap really fast, is
(01:02:44):
what I'm saying. So, how about neural lace, Elon Musk? Oh, yeah,
the neural lace. I love this idea because of course
the guy who coined the term neural
lace is sci fi author Iain M. Banks, one of
my personal favorites. It always comes back to Banks here. Yeah,
and in his books, there's the Culture, which is this,
uh, anarcho-utopian, um, far future society, and everybody in
(01:03:08):
the Culture has all these transhumanist adaptations, such as, like,
drug glands with which they can gland various substances
to change how they're feeling, and they all have this
neural lace that enhances cognitive ability and kind of gives
them a... Basically, they're tied into a vast
sea of information that they can call up as they need.
So basically the idea is it is a way of
(01:03:31):
robustly connecting the brain to the external information systems of
the Internet or whatever their future version of the Internet is. Yeah,
it would be like Google Brain. Yeah, uh yeah. So
that's pretty close to what Musk seems to have in mind. Obviously,
we're not there yet, but we do have prototypes of
this sort of technology. It's nowhere near
(01:03:52):
Iain M. Banks level yet, but in March, Elon Musk was
in the news promoting this new neurotech startup called Neuralink,
which he basically plans to use as the vanguard of
the coming neuro cyborg movement. And the idea of the
neural lace is really the short version is it's this
ultrafine mesh material that can be injected into the brain
(01:04:17):
case with a needle. You get the needle inside the
skull and you inject this mesh material over the outside
of the brain, where it naturally unfurls to cover the
outer surface of the cortices, and from here it melds
with the brain and can offer supposedly extremely precise electrical
feedback and electrical control of brain activity, what they would
(01:04:38):
call a quote direct cortical interface. And supposedly trial versions
of this have been deployed in mice with apparently very
few side effects, and so in the short term this
might prove a useful treatment for various neurological disorders, you know,
age associated neurodegenerative diseases like Alzheimer's and other neurological disorders.
(01:05:00):
But Musk is not shy about the sci fi stuff.
He's open about his other motive, which is that ultimately
he's interested in cognitive upgrades. He wants cognitive augmentation of
the human brain. And one of the main reasons he's
given publicly is that Musk is one of these people
who's concerned about existential risk from artificial intelligence. So we've
(01:05:23):
talked about this a little bit on the podcast before.
I think we talked about it in our Transhumanist
Rapture War episodes, but maybe we should do a whole
episode or episode series on this sometime, because I do
think the question of the risks posed by artificial intelligence
is interesting, and one of the reasons it's interesting is
that it's one of these questions where really smart people
(01:05:44):
who really know what they're talking about are totally on
both sides of the issue. You hear people saying we
need to be worrying about existential risk from AI right now,
and other smart people are saying these people are lunatics,
this is not a concern. And I'm not sure which
side of the issue I fall on. Yeah, it
kind of depends on whose argument I'm reading. I kind
(01:06:07):
of fall in line with whatever the last
rational argument I happened upon on the matter was. Yeah, I guess I'm there.
I consider myself highly persuadable on this topic still,
but anyway, Musk is one of these people who says, look,
creating superhuman artificial intelligence is a genuine risk to us.
We at the very least risk becoming irrelevant, if not
(01:06:28):
risk being destroyed. And so he thinks that in order
to avoid becoming irrelevant or worse in the face of
superhuman AI, we've got to be willing to upgrade our
brains to keep up with the machines. In other words,
the only way to make sure that you don't fall
victim to machine intelligence is to merge with it to
become it. Yeah, and in his view, neural lace might be
(01:06:49):
one way to get us there, giving us the power
to augment our bio brains with neurotechnology to become
superhuman mind hybrids. So if the AI
god is essentially a cat's cradle design, we want to
make sure we're the fingers. We want to make sure
we're, uh, you know, an important aspect of its spiritual body. Yeah,
(01:07:11):
even if we're not, it's... and I mean, we also
don't want to be just some irrelevant obstacle to whatever
its goals are. We want to be thoroughly integrated with
it and its motives. Yeah, which kind of comes back
to Iain M. Banks. That's kind of how
he weaves the humans and the humanoids of the
Culture into everything. Like, there are the Minds, the AIs
(01:07:32):
that ultimately rule everything and are making all the
hard decisions. They see the value in having human operatives,
and they also have this kind of, like,
hard part of their programming, like, I guess, their
sort of corporate culture, which is that there's something important about
human life. Yeah. Now, if you're still one of those
(01:07:54):
people out there saying, okay, I'm just never going to
get any kind of neurological implants, then... By the way,
I'm not advising people never get neurological implants. I'm more
saying that the people designing these things really need to
be thinking super hard about security from day one. I
guess we're way past day one, but from day whatever
this is right now. But you don't just have to
(01:08:17):
worry about the future of neurological influence from technology if
you get an implant. There are other ways to influence
the brain with technology. Yeah, yeah, I mean I mentioned
this a little bit earlier, but I think another potential
exploit would be, uh, you know, if you had
some manner of external fine tuned electromagnetic cranial stimulation device,
(01:08:39):
perhaps one that aids in the treatment of a psychological condition,
or perhaps one that even works recreationally. Imagine malware or a hacking scheme
that turns such brain function management on its ear. You know,
how fast would you be able to rip the thing off?
And, oh, I can't use it anymore? You know,
I'm gonna have to go a day without it. I'm gonna have
to bring this thing into the shop. How am I
going to get across town without my, uh, my
(01:09:02):
god helmet to get me there? Now, these external devices
I think are a little less plausible on this account
than implanted devices are because they're less precise. Right, So
you've got transcranial electrical stimulation, also transcranial magnetic stimulation, these
things that, you know, apply electromagnetic force to the
outside of the head. Uh. When I've seen experiments with
(01:09:23):
these types of things so far, the results they're able
to induce in the brain are very very blunt and broad,
if you know what I mean. They're not nearly the
kinds of minute targeted results that you would get by
implanting electrical devices inside the skull or inside the brain. Still,
if it keeps me from auditing my body thetans appropriately,
(01:09:46):
then it's gonna ruin my week. Yeah. So on
one hand, I do think this is a real concern.
And I should also mention that one of the other
papers we looked at was a paper by Saldi Costa,
Dale R. Stevens, and Jeremy A. Hansen, uh, from
the International Conference on Human Aspects of Information Security, Privacy
(01:10:09):
and Trust, and essentially what they look at is
trying to create a broad architecture for an intrusion prevention
system for brain computer interfaces. That's kind of a hard
thing to design at this point, because you know, you
don't know exactly what all these systems are gonna look like.
But the basic system they come up with is that,
you know, you'd have a two tiered security system
(01:10:30):
where any, uh, Internet or external input coming into
the brain has to go through what's known as an
intrusion prevention system, which is just a system that tries
to screen traffic passing into a network or a machine,
and if traffic looks suspicious, it says, sorry, you can't
go in. And then you'd have to pair that with,
and I love this, sort of the brain equivalent of
(01:10:53):
an antivirus program. An antivirus program looks at what
code is executing on the computer right now, what
functions are happening, and if it sees suspicious activity,
it shuts it down. The brain version would have to
use some kind of signal processing to look at what's
happening in the brain, or in the
(01:11:13):
neural device, and say, does any of this look suspicious,
like something the brain wouldn't normally be doing, and if so,
you might have to disconnect the neural device or shut
it down.
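To make the two-tiered idea a little more concrete, here's a minimal Python sketch of an architecture like the one described: a perimeter screen on inbound traffic paired with an anomaly check on device activity. The command names, threshold, and interface are all hypothetical; this is a sketch of the general approach, not the paper's actual design.

    from statistics import mean, stdev

    # Tier 1: intrusion prevention -- screen external input before it
    # ever reaches the neural device.
    ALLOWED_COMMANDS = {"read_telemetry", "adjust_stimulation", "firmware_status"}

    def screen_input(command, source_authenticated):
        """Reject inbound traffic that is unauthenticated or not whitelisted."""
        return source_authenticated and command in ALLOWED_COMMANDS

    # Tier 2: the "antivirus" analogue -- watch what the device is actually
    # doing and flag activity unlike the brain's normal behavior.
    def looks_suspicious(signal, baseline, z_cutoff=4.0):
        """Crude anomaly check: is any sample wildly outside the baseline?"""
        mu, sigma = mean(baseline), stdev(baseline)
        return any(abs(x - mu) / sigma > z_cutoff for x in signal)

    def guard_step(command, authenticated, current_signal, baseline_signal):
        """One pass of the two-tier check; fail safe on anything suspicious."""
        if not screen_input(command, authenticated):
            return "blocked at the perimeter"
        if looks_suspicious(current_signal, baseline_signal):
            return "anomaly detected: disconnect or shut down the device"
        return "ok"

The hard part in practice would be the baseline: deciding what "something the brain wouldn't normally be doing" looks like in signal-processing terms.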
Yeah, and this is an area where I
can just imagine that instead of going just
full fledged, face forward into a thought police scenario, we're
kind of backing into one, because you end up with
(01:11:34):
a situation where potentially human cognition
is this byproduct of organic and machine, right, it becomes increasingly
cyborg, and then therefore any kind of
intrusive thoughts or even criminal thoughts, it becomes kind of
like bad behavior in a dog, right, not a wild dog,
but a pet animal. Because what always happens? Like,
(01:11:57):
people are saying, oh, well, is this the owner's fault,
or is it the dog's fault? And is
there any way to really distinguish between all of
these things, because the condition of the dog is so
manipulated and so changed by its relationship with humans? Yeah,
that's a very good point. I mean, I can see
a scenario where in the future people might have say,
(01:12:19):
you've got a deep brain stimulator in your head, or
you've just got neural lace or something like that, some
kind of neuroperipheral technology that changes the way your brain works,
and then you do something and you say, that didn't
seem like me. Did the neuroprosthetic make me do it?
Let's say I went and robbed a bank. Could I
(01:12:41):
sue the company that made my neuroprosthetics and say this
is totally out of character for me. I don't know
why I did that. I never would have robbed a
bank normally, And I think what happened is that my
neuroprosthetic malfunctioned and it artificially pumped up my aggressiveness and
lowered my inhibitions and did all this stuff that temporarily
turned me into a bank robber, and that's not
(01:13:02):
my biological brain's fault. Or it could be that you
went to the wrong website, you clicked on something you
shouldn't have, and somehow that managed to, like,
follow up the chain to your brain itself and alter
your behavior. Oh, I didn't even think about that with neuroprosthetics.
So it could be something you click on on the Internet,
or some search you do on Amazon can now not
(01:13:25):
only follow you around, showing you ads at different websites,
but it can follow you into your brain. Or maybe
they didn't even hack you, say they hacked an advertisement
that you passed, and that advertisement communicates with devices that
you have, you know, so that it can
figure out what your behavior is and, you know,
feed you the right advertisements, maybe in your dreams or something.
(01:13:46):
Yeah, uh, yeah. So the main thing, my main point
in this episode, is that I think that we cannot
depend on consumers opting out as a way to
avoid these risks, because of this irresistible availability thing.
As these things become more available, become more widespread, and
(01:14:08):
become more useful. People are just not going to be
able to resist the urge to use them. And, uh,
in some cases, you know, if you suffer,
like, an injury or a disability or something, there's no
reason you should want to resist them, right?
They will give you lost functionality back. I mean, unless
there's an end to the advancement of technology, or say there's
(01:14:31):
a Butlerian Jihad and people, as you
know, en masse decide, no, we, you know, we're
not going to cross this point. We're gonna put in
place laws that keep us from augmenting ourselves and becoming
and thinking like machines. Yeah. So I'm saying you can't
depend on the individual consumer or patient to opt out.
(01:14:52):
That is not something that should be part of
the thinking on this. It should be that security concerns
are absolutely taken into consideration from day whatever this
is now, because it's never from day one, it's always
going to be like day... You gotta be
ahead of those brain hackers. Yeah, all right, so there
(01:15:13):
you have it. Hopefully we gave everyone, you know,
definitely some room for a little paranoia
and a little sci fi wondering, for sure, but also
just some real facts about technology and security
and how the footfalls tend to go
in this trek, and one hopes those footfalls are chosen
(01:15:35):
by one's own free will. Yes, indeed. So hey, if
you want to, uh, learn more about this topic, head
on over to stuff to Blow your Mind dot com.
That's where you'll find all of our podcast episodes, blog posts,
links out to our various social media accounts such as
Facebook, Twitter, Tumblr, Instagram. We're on all those things,
and the landing page for this episode will include some
links out to some of the sources we talked
(01:15:56):
about here today. And if you want to get in
touch with us directly to offer us feedback on this
episode or any other, or to let us know if
you think you would accept a voluntary opt in neuro enhancement,
or if you want to suggest topics for the future
or anything like that, you can always email us at
blow the Mind at how stuff works dot com. For
(01:16:27):
more on this and thousands of other topics, visit
how stuff works dot com.