
June 1, 2019 78 mins

While hackers and malware can certainly cause a great deal of damage and misery, these dangers can't hurt us physically, right? In this episode of the Stuff to Blow Your Mind podcast, Robert and Joe are sorry to say you're wrong -- and as more biotechnology and brain-computer interfaces creep into our lives, the risk will only go up. Tune in for a discussion of inherent vulnerability, looming technology and what needs to happen to protect us. (Originally published June 1, 2017)




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick, and it's Saturday. Time to go into the vault. We're going into the vault for an episode from June 2017 that was about the concept of neurosecurity: what happens when you have to take the concepts of cybersecurity and apply them to your brain and your nervous system. Yeah, there's a nice

(00:27):
anxiety-inducing episode about protecting your brain. And is this the one where we did a Scanners skit at the top? I don't think so. I think that was a different episode. All right, well, I don't want to get everybody's hopes up, but I'm sure there's some Scanners references in this episode. I mean, this is a very creepy concept, and it's not as far-fetched as you might think. In fact, I would say

(00:48):
that we're creeping there every day. We're getting closer and closer. I'd say the connections between our brains and the devices and the contents displayed on those devices are becoming ever more seamless. The tentacles are reaching into our skulls. It's only a matter of time before you've got full-on security protocols for protecting your nervous system from hackers. All right, well, let's go ahead and

(01:09):
throw up those neural firewalls and jump right into this episode.
Welcome to Stuff to Blow Your Mind from HowStuffWorks.com. Hey, welcome to Stuff to Blow Your Mind.

(01:31):
My name is Robert Lamb, and I'm Joe McCormick. And Robert, I got a question for you: have you ever wondered if it's possible to create a piece of digital information, like a computer file, a bit of computer code, a computer virus, that could literally kill or injure somebody? Oh, of course I have. I mean, having watched and enjoyed

(01:53):
such films as The Ring and Videodrome, just the idea of there being some sort of media like that, in these cases. But we can easily extrapolate that to digital media or just digital information. Yeah, you can't help but think, is there something like that that could exist that would have a devastating or even lethal effect on anyone who interacted with it? Yeah, a video file,

(02:13):
an audio file, a computer program, something that comes out of the digital interface and actually harms you. Yeah, well, it's not hard to see how you could harm somebody indirectly with something like that. One example would be a computer virus that takes down a lot of systems or causes widespread economic damage. That's been happening since the eighties, as we'll

(02:36):
discuss. Yeah, certainly. So, widespread economic damage means people lose their jobs, and statistically we know that that will indirectly lead to some number of deaths above the mean mortality rate. But I mean something more direct. Obviously, you know, I'm talking about the cyborg ninja kind of stuff, but take away the cyborg ninja. I'm not talking about robot assassins. Or leaked personal data, that's been another big one too.

(02:59):
I've seen accounts where people have said this individual is potentially suicidal over the leakage of images, video, or personal information. Sure, that's the devastating effect of digital gossip. But could a malicious hacker injure or assassinate somebody directly, just with a digital file, a piece of computer code, a video? Yeah,

(03:23):
I mean, this is of course an increasingly important consideration,
you know, because we just look at all the things
around us that are becoming connected to the Internet that
you know, years ago, I would have thought, why would
I need my, let's say, my thermostat to be connected to the internet? It seems crazy, and yet here
I am in the future. Especially during the cold months,

(03:44):
I enjoy waking up, grabbing my phone and adjusting the thermostat, warming up the house. And at the same time I'm thinking, is this a little crazy, that this electric, you know, gas-powered fire in my home is now controlled by a device that is connected to the internet and all the horrors of the Internet? I end up just having

(04:05):
to, like, you know, push that out of my brain and just focus on the fact that, oh, before I get out of bed, I can make it a little warmer. Now, fortunately, there are limits to what your thermostat can do. Right, you're not worried about some crazy kid on 4chan deciding that he wants to cook you alive and turning your house thermostat all the way up. But the more we think about a smart house... like, there was some horrible sci fi

(04:29):
movie that came out years and years ago, and it had a smart house with a robot, you know, that goes completely HAL on everybody, and it had like a terminator arm that hangs from the ceiling and travels around the house. I keep thinking back to that, the more interconnected our homes become. Now, you know, the whole idea of, like, your house becoming

(04:49):
self-aware and killing you is one thing. But yeah, just the idea that all these things are connected, at least in a small way, to everyone else in the world, it can be a little much. This was explored to great effect in the wonderful Stephen King movie Maximum Overdrive. No, I'm just kidding, not such a great movie, but

(05:09):
the premise is all our machines turn against us, right? Our consumer technology, from trucks to household appliances, starts trying to kill us. I think in the movie it's aliens, right? I can't remember. In the book, I mean the short story rather, it was delightfully vague. And then of course Maximum Overdrive the film is its own experience. But I guarantee you there's got to be

(05:32):
a script out there where someone has taken Maximum Overdrive, or at least Trucks, the original story, and upgraded it to, you know, the so-called Internet of Things. Yeah, and the most obvious analogy from the movie is going to be autonomous vehicles. Autonomous vehicles, if they have the wrong security exploits, if people can manipulate them

(05:52):
in the wrong ways, it's not hard at all to see how they could be deadly. But I want to get even more insidious, about devices that we personally hold in our hands and use to mediate our relationship with regular information, like text and video and, you know, ideas. I've got an archived Wired magazine article entitled "Hackers Assault

(06:16):
Epilepsy Patients via Computer," and this is from March 2008. And what happened in this incident is that somebody attacked an epilepsy support message board hosted by a group called the Epilepsy Foundation. And just to read a quote about what happened, quote: the incident, possibly the first

(06:36):
computer attack to inflict physical harm on the victims, began Saturday, March 22, when attackers used a script to post hundreds of messages embedded with flashing animated GIFs. The attackers turned to a more effective tactic on Sunday, injecting JavaScript into some posts that redirected users' browsers to a page with

(06:57):
a more complex image designed to trigger seizures in both photosensitive and pattern-sensitive epileptics. End quote. And then later in the article they note, and this is worth noting, epilepsy affects about fifty million people worldwide, but only about three percent of those people are photosensitive. Meaning, you've often heard, you know, the old Pokemon story that flashing lights or flashing images

(07:21):
can cause seizures in people with epilepsy. That is true for some people with epilepsy, but not all. So the risk here is not necessarily like a wide attack, where you just end up hitting that small percentage of people who are affected. But what if you targeted a specific individual? And this has apparently happened. Now, we have this story from twenty sixteen, where there's an American

(07:42):
journalist named Kurt Eichenwald who was known publicly to have photosensitive epilepsy. During the election, so he's a political journalist, and of course being a political journalist you make enemies, somebody who did not like his political coverage sent him a series of tweets with strobing light images, and allegedly this caused a seizure. And so he has

(08:06):
now been a witness in a criminal prosecution against these digital attackers who attacked his physical body and were able to cause a physical injury with just information through an interface.
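To make the mechanics of that forum attack concrete, here is a minimal sketch, in Python, of the kind of server-side filtering a photosensitive-safe message board could apply to user posts. It assumes posts arrive as HTML strings; the function name and patterns are illustrative, not taken from the Epilepsy Foundation's actual software.

```python
import re

# The three injection vectors described above: script tags (used for the
# browser redirects), inline event handlers, and animated GIF images.
SCRIPT_TAG = re.compile(r"<\s*script\b.*?<\s*/\s*script\s*>", re.IGNORECASE | re.DOTALL)
EVENT_ATTR = re.compile(r"\son\w+\s*=\s*(\"[^\"]*\"|'[^']*'|[^\s>]+)", re.IGNORECASE)
GIF_IMG = re.compile(r"<\s*img\b[^>]*\.gif[^>]*>", re.IGNORECASE)

def sanitize_post(html: str) -> str:
    """Strip scripts, event handlers, and GIFs before a post is stored."""
    html = SCRIPT_TAG.sub("", html)              # no injected JavaScript redirects
    html = EVENT_ATTR.sub("", html)              # no onload=/onclick= handlers
    return GIF_IMG.sub("[image removed]", html)  # no flashing animated GIFs

demo = '<p>hi</p><script>window.location="evil"</script><img src="strobe.gif">'
print(sanitize_post(demo))  # -> <p>hi</p>[image removed]
```

A production forum would use a real HTML sanitizer library rather than regexes, but the principle is the same: treat user posts as hostile input.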
It's interesting that it took place on Twitter too, because, I mean, Twitter is known to be this place where, like a lot of the Internet, people feel like

(08:28):
they can be just as nasty and awful as they possibly can without any repercussions. And here we see a situation where it ends up transcending merely the hurting of feelings or psychological damage, and becoming actual physical attack. Yeah.
But while almost anybody can be psychologically harmed by information

(08:50):
received through an interface, it's really difficult in general to
physically harm somebody with information received through a standard, you know, digital media interface. It's really rare. Like, there is this
one specific exploit in the brains of three percent of
people or so who have epilepsy. That means that certain

(09:11):
types of light images projected on a screen can cause
physical injury or trigger a seizure. Not everybody's epileptic, and not all people with epilepsy have this condition, so it's pretty rare. But this is one neurological vulnerability to
information based weapons built right into some of our brains.
Most of the time, for most people, the brain is

(09:33):
very secure, right. It's hard to cause direct injury
to somebody's body, or steal their innermost secrets or do
anything like that with information interfaces alone. But today we
want to talk about how that state of affairs is
very likely changing, and it may be changing very soon,
because we want to talk about the coming age of

(09:54):
neurosecurity. And the crazy thing here is that we're not talking about something that may come to pass. We're talking about, as you'll see as we discuss this further, something that is definitely going to happen, that needs to happen, an inevitable next step. Yeah, unless basically life or technological progress on Earth stops right now.

(10:15):
This is not another of those singularity issues. This is a very near-future concern, yes, and very plausible based on things that we already have today. So
there are several different things that you could call neurosecurity.
One thing would be using neuroscience principles in the general
field of security, right, protecting your borders with fMRI

(10:35):
brain scanners during border-stop interrogations or something. Right, picking up, say, for instance, if you could use this technology to pick up on, like, extreme levels of nervousness that might need to be inspected with additional questions, or if it was even possible to tell that there was some sort of malicious intent. Or stocking up the ranks of your TSA

(10:55):
agents with scanners, I mean, like from the movie Scanners. Psychic TSA. Okay, yes, it's basically Scanner Cop. Sequel to... Oh my god, you're right, Scanner Cop.
Can you imagine the faces they make while you're standing
in line? Would that make flying better because it'd make
it funnier or worse because it'd be even creepier. Probably creepy,

(11:18):
I'm guessing creepy. Sorry, Well, that's an interesting subject, but
a subject for a different day. Today, we're talking about
the security of our biological information systems, essentially applying computer
cybersecurity principles to your brain and your nervous system. Now
you might be asking, that sounds ridiculous. Why would you

(11:42):
ever talk about that? I mean, that's just such a weird sci fi scenario. Nothing like that's ever gonna happen, right? Right. I mean, to bring up Scanners again, it just makes me think of that moment in the first Scanners movie, which at the time I thought was ridiculous, where the scanners are interfacing with the computer with their brain, and that threw me completely out of the movie because I'm

(12:04):
like, all right, I'm on board with these brain-to-brain psychic connections, but you're throwing me off when I'm trying to imagine a brain-to-machine connection that's just purely based on psychic power. It does seem to violate the magic of the film, right? It gets the mythology out of whack, because there's a scene in the movie Scanners where one of the scanners gets on a telephone and he calls into

(12:27):
a computer system and he reads the mind of the
computer system. Yeah, and he wasn't even making fax machine noises with his mouth. That I would have been on board with, but yeah, not the way Cronenberg decided to display it. Michael Ironside could have sold those fax machine noises with his mouth, but not the guy they had playing him here. So we're going to

(12:47):
talk about a particular study that I'll get to a
couple different times in this episode. But actually it might
be wrong to call it a study because it's really
more an attempt at definitions, right, trying to lay out
what the concept of neurosecurity would be and what are some things we need to watch out for. And so this was published in two thousand nine in the journal Neurosurgical Focus. It's called "Neurosecurity: Security and Privacy

(13:12):
for Neural Devices," from Tamara Denning, Yoky Matsuoka, and Tadayoshi Kohno. So the authors of this paper note that
there are three primary goals in computer security. You've got confidentiality, integrity,
and availability. So confidentiality means what you think it does.

(13:33):
It means an attacker of your computer system should not
be able to exploit the device to learn or expose
private information. Standard example would be hacker steals your bank
account info, or your private emails or your private photos. Yeah,
these are essentially externalizations of my private thoughts, and I don't want anyone to have access to either. Exactly. Now,

(13:56):
the next one is integrity. Integrity means that an attacker should not be able to, quote, change device settings or initiate unauthorized operations. In other words, the attacker, whatever the device is, computer, cell phone, anything like that, should not be able to use it for their own purposes or change what the

(14:18):
device does for the primary user. An example here might
be that a hacker could take over your computer to
turn it into a bot that's part of a botnet, to mount a DDoS attack against
some website. Maybe they had a bad meal at Olive Garden.
They want to take down the Olive Garden homepage, so
they hijack your computer and make your computer one of

(14:40):
many computers that bombard Olive Garden with requests to load the page.
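As a brief technical aside: the standard countermeasure a site uses against that kind of request flood is rate limiting. Here is a minimal sketch of a token-bucket limiter in Python; the rates and names are illustrative, not from any particular product.

```python
import time

class TokenBucket:
    """Allow short bursts, but cap the sustained request rate per client."""
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec          # tokens refilled per second
        self.capacity = burst             # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill for the elapsed time, capped at capacity; spend one token per request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                      # flooding client gets dropped

# One bucket per client address: a single hijacked machine exhausts its own
# bucket quickly, which is exactly why attackers need whole botnets.
buckets: dict[str, TokenBucket] = {}
def handle_request(client_ip: str) -> bool:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate_per_sec=5, burst=10))
    return bucket.allow()
```

Per-client limits like this are why a flood has to be distributed across many compromised computers to do real damage.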
Okay, and obviously I would not want that to happen in my brain either, to change the settings on my brain and ultimately change my behavior, change my motivations, even if it's done in a very slapdash, awkward way. You know, hands out of my

(15:01):
brains, exactly. So the last one is availability. Availability means
that the attacker should not be able to destroy or
disable the device, or do anything that would render it
unavailable to the user. Classic example, hacker deletes all your
files or alters the computer's boot procedures so that it
won't load your operating system on startup, and it just

(15:22):
becomes useless. Likewise, I don't want anyone to strategically remove
memories from my brain, to wipe my memories from my brain,
or to even temporarily deactivate certain cognitive centers or networks
in my brain. Yeah. Now, in these examples, you're talking about sort of whole-brain functionality, but there

(15:42):
could be dire consequences for much lower stakes questions. Somebody
might not necessarily be able to disable your entire brain.
But in a minute, we're going to talk about some
particular types of neurotechnology. And in many of these cases,
for example, just disabling your neuro technology could have devastating
consequences for you. They wouldn't have to be able to

(16:04):
turn off your brain. They might just be able to
turn off your neural implant at a time and place
that would make you very vulnerable or could hurt you.
I experienced something like this the other weekend. I had
to drive to a major phone service provider's brick and
mortar store that I'd never been to before, and I
had to do it without a functional phone. So I

(16:26):
had to manage it. I ended up printing out the wrong directions from MapQuest or, you know, whatever map program I used. It's amazing how dependent we've become. Yeah, so
in a way, it was like a part of my brain was not functioning because the phone was not functioning. You have offloaded part of your traditional capability. Maybe ten or fifteen years ago, you would have

(16:47):
probably had better internal mechanisms for locating a store you needed to get to. And now you've said, well, I don't have to worry about that anymore; that's in this peripheral that I use to supplement my brain. If the peripheral breaks, you're messed up. Yeah, and that is technology that exists, you know, quite literally at arm's distance from the brain, right. But the thing is,

(17:10):
we're seeing the technology creep increasingly closer to the brain.
And what happens when that stuff goes offline or becomes compromised.
So to quote the authors of the study I mentioned,
they say, quote, we define neurosecurity as the protection
of confidentiality, integrity, and availability of neural devices from malicious parties,

(17:30):
with the goal of preserving the safety of a person's
neural mechanisms, neural computation, and free will. Now we're going
to look directly at some of the neural technologies that
might be vulnerable to security concerns like this. But before
we do that, I think we should look more broadly
at the idea of information security, because if you're not

(17:53):
all that familiar with the history of the Internet, it
might be kind of puzzling to you. Like, why is the Internet so horrible in terms of security? We've got this global thing, what would you even call it? Would you call it a technology? We've got this global information civilization that is just terrible

(18:15):
in terms of security. There is not an overarching strategy to keep everything safe. I keep thinking of it like the cat's cradle that you create with a length of string and your fingers. You know, you interweave your fingers and you create a pattern, so that the Internet, in this case, is not the string, it's not the fingers. It couldn't exist without the string and the fingers, but it's ultimately that shape, you know. Yeah,

(18:39):
that's kind of a loose way I think about it. Well,
maybe we should take a quick break and when we
get back we can look at how the Internet ended
up becoming so vulnerable to security concerns. All right, we're back.
So yes, how did the Internet end up becoming so vulnerable?

(19:00):
Now, this one can take me on a trip down fear-and-paranoia memory lane, and I'll try and keep this fun too. I guess if you're a fan of, like, Halt and Catch Fire, it's kind of fun to think of it in those terms. But yeah, I mean, on one hand, it's an easy question to ask, because we live in a day where we have ransomware attacks, identity theft, doxxing, invasion of privacy. But on the

(19:23):
other hand, so many of us were born into this
system of the Internet, or, you know, if you're like me, you entered into it during college, and it's easy to just assume that the systems that run the world and the organizations that run the world have it figured out to some extent. You know, you expect
the bank to be secure, you expect some security to

(19:47):
be in place, some rather significant security operations. So you would assume that the virtual bank would be much the same. Right, if you were to just find out that there are human-sized
holes in the bank vault that keeps your money, that
would be rather surprising. It's not surprising at all to
find out that there are hacker sized holes in the

(20:08):
digital systems that protect your private information. I mean, how
many times now have we had this story about some online retailer, or maybe even a physical retailer, that you did business with, and they have your credit card number, they got hacked, and now your information belongs to somebody out
there and you have to get a new credit card
or something like that. Yeah, and I think we can

(20:30):
all relate to that kind of anxiety. Yeah. Now, I
mentioned earlier that so many of us, a number of our listeners here, were born into the Internet age. So I'm going to try and put this, like the origins of the Internet, in terms of something that maybe more of us could understand. Maybe not the younger people, but for a number of you out there, I bet you can remember the day

(20:50):
that your mom joined Facebook. So you remember the realization that, oh crap, this internet thing really is for everyone.
It's that horrible moment where something that you and your
friends do online that you think of as not real
life comes crashing into real life and you realize, like, oh,

(21:12):
this is connected to the same universe where I live. Yeah, yeah. I remember there was like a sharp contrast, certainly, with the MySpace age. Remember, what it felt like to be on MySpace was totally different than the early stages of Facebook, which was completely different from what Facebook would become. In terms of, you know, just

(21:32):
earlier models just felt like, oh, I'm just surrounded by like-minded people who share my same sort of attitude on life and values about what's important. And granted, there was already a certain amount of bubble construction there, but yeah, we just had expectations about what this technology was and who this technology

(21:53):
was for. And the same situation really applies to the
Internet as a whole. Its architects did not build it from the ground up as a global system for the masses, a cat's cradle weaved around the fingers of so
many computers. No, they designed it as an online community
for a few dozen researchers. It was a research project.

(22:13):
It was like, Hey, we've got these computers at different
universities and institutions. What if we could link them together
so they could trade information? Would that be weird? I
don't think so... maybe some people did, but it doesn't seem like they generally predicted that this would become a hub of commerce and recreation for everybody on the planet. Yeah,

(22:35):
I mean, you can't help but compare it to, say, empires, right? Can you imagine you have, you know, Genghis Khan, and he's sitting around, and they're saying, hey, Genghis, what's up? And he says, oh, not much, I got this idea though. I'm going to call it the Mongol Empire. Huh. And this is how it's gonna work. This is how we're gonna incorporate trade, and this is how we're gonna value different religions.

(22:57):
And they're all, you haven't even... none of this stuff has even happened yet, what are you talking about? Like, the people who found these major movements and organizations, how often is the complete structure baked into the original design? Almost never. Each step is improvised. So you've got to make decisions about design. You make design decisions

(23:19):
on the fly as issues come up. And the Internet
was sort of the same way. Yeah, you gotta leave it to Kublai to figure out the rest. So there's a wonderful Washington Post article that goes into this in depth. It's titled "A Flaw in the Design," and I'll include a link to this on the landing page for this episode at Stuff to

(23:41):
Blow Your Mind dot com. In the article, the author speaks with such Internet forefathers as MIT scientist David D. Clark, David H. Crocker, and Vinton G. Cerf. They point out that they thought they could simply exclude untrustworthy people. Isn't that great? Yeah, we'll just keep the security risks off of the Internet. Yeah. So

(24:03):
when they thought about security, and they did think a
little bit about security, they thought about it mainly in
terms of protecting the system from outside intruders or military threats. Wow.
So there's a wonderful quote in this article from Cerf, and he says, quote, we didn't focus on how you
could wreck this system intentionally. You could argue with hindsight
that we should have, but getting this thing to work

(24:24):
at all was non-trivial. They dealt with problems as they arose, and the problem then was making it work. And now, this is going to have some very interesting parallels once we start talking about neurotechnology. They also point out that at the time, there really wasn't much of value on the Internet. The analogy they make is again to the bank. People break into banks despite the

(24:47):
security, or attempt to break in, because there's money there, because there's something of value. Right. You wouldn't break into a bank vault or, you know, risk prison time for a bank vault that was full of, what, transcripts of messages back and forth between academics. Yeah, I mean,
unless you're just in it for the pure artistry of it.
But that rarely seems to be the case outside of

(25:07):
like a Hollywood movie. But then, of course, in the
early days of computers, you did start actually encountering security threats. Yeah.
The big one that really changed everything was the Morris worm, which attacked and crashed thousands of machines. It caused millions
of dollars in damage, and it helped usher in the
multibillion dollar computer security business. Bless the maker and his water. Yeah,

(25:30):
I mean, at this point the party was over. Oh, by the way, interestingly enough, the article also points out that the big idea behind the Internet is at least partially attributed to American engineer Paul Baran, who worked for the RAND Corporation at the time, and he saw the need for a communication system that would function after a nuclear strike on the United States, that would help

(25:52):
us with aid efforts, help us preserve democratic governance, and even enable a counterstrike, all in order to help, quote, the survivors of the Holocaust to shuck their ashes and reconstruct the economy swiftly. What? Yeah. I mean,
that bringing this Cold War mentality to the Internet,

(26:15):
it's so crazy thinking about the way contextual frames completely
shift around the technologies that we create. Yeah. And the
crazy thing too, is that every time there's a headline
about, say, meddling in elections with, you know, various online initiatives, or hacking initiatives, or the recent

(26:37):
ransomware attack, it makes me realize, well, this is what cyber war looks like. This is the shape of global cyber warfare. And you look back, and here's this guy dreaming of the Internet as this thing outside of war, this thing that's just a communication system to help us rebound from assaults by

(26:57):
various states. But also, this is creating the Internet in the context of thinking about international warfare, but thinking
about it in terms of overt frontal warfare, missile bombardments
and troop advancements and all the traditional types of war
people knew about, not realizing that the Internet would enable

(27:18):
a state of constant covert war between great powers that
would just be constantly secretly or semi secretly undermining one another. Yeah,
and there would be this kind of gray area about how you respond. How do you react? What are the rules of cyber warfare? And
people still haven't figured that out. So that Washington Post

(27:40):
article is a great exploration of Internet security history. But
the basic answer to our initial question, you know, why did the Internet end up becoming so vulnerable, is this: the Internet wasn't built to be secure. All security concerns are an add-on, an aftermarket addition, a patch. Security is always difficult, especially when it wasn't baked into the design. And with

(28:03):
these systems that we're talking about, it almost never is.
We see the same situation occurring with some of the gadgets and implants and proposed neural technologies we're going to discuss, because the primary goal ends up being, what, to aid the patient, to somehow achieve the goal of the device or the technology. Right, if you're
designing neurotechnology to help regain some lost functionality in

(28:28):
somebody with a brain injury or body injury, or to
cure some kind of neurodegenerative disease or at least offset
its negative effects, security, I mean, that's just such a far-down-the-road concern. You're worried about fixing
people's problems and helping their lives. That's what you're worried about.
And it's the same thing you talked about with the

(28:48):
Internet earlier, that, you know, people were just wondering if they could make it work. Security is so far down the list of concerns. Yeah, again, security becomes this add-on. It's a thing that you end up implementing or worrying about once the threat becomes more apparent. Oh yeah, security. Not in the Kool-Aid Man sense of "oh yeah,"

(29:11):
but like the "oh yeah, there's a man-sized hole in the wall, we need to do something about this." So I want to go back to that paper I mentioned earlier in Neurosurgical Focus from two thousand nine, where they try to lay out a framework for approaching the topic of neurosecurity. And so the
authors make the point that neurotechnology is becoming more effective,

(29:33):
and one of the things that we can draw out
from this is that as it becomes more effective, it's
going to become more useful. And as it becomes more useful,
it's going to become more widespread. And as it becomes
more widespread, it will be more fully integrated into our
bodies and our lives. And so there are a lot
of current and potential uses of neurotechnology. One thing would

(29:56):
be treating brain disorders. Another thing might be making paralyzed
limbs usable again, or allowing users to control robotic limbs
with thoughts alone. One thing might be remote controlling robots
with thoughts. That's a fun one. Another one might be
enhancing human cognitive capacity. So up until the present, most

(30:18):
research into the safety of neurotechnology is focused on making
sure the device itself doesn't hurt you when it's functioning
the way it's supposed to. Right. Safety concerns are about making sure that the intended use of the neurotechnology
is safe, and this makes sense because back in two
thousand nine, at the time this paper was published, most

(30:39):
of these devices were number one, contained in lab environments.
You know, they're just not getting out into the wild
very much. And, number two, were self-contained systems, meaning that they had very limited transaction with the other information systems in the outside world. Back when you had an Apple II and no Internet connection, you were probably not

(31:00):
going to get a virus, right? Unless you're one of the very few unlucky people to get handed an infected floppy disk and you stick it into your disk drive. Of course, once you start connecting your computers to the Internet, your security vulnerability goes way up. And here's a cybersecurity mantra I'd posit for you: the more your device plays outdoors,

(31:22):
the more vulnerable it is. Yeah. And this means that
as a device gets connected to the Internet, interacts with
a larger number of devices, adds wireless capabilities, all those things, there are more ways for it to be
compromised by malicious adversaries. And as devices become more useful
and more widely adopted, they tend to play outdoors more

(31:44):
and more. And so, what the authors are saying is that if we don't design robust security features into them from the get-go, we could end up with neurotech that works like the Internet we just talked about, where it's an ad hoc system of security fixes, this constant race between security updates and malicious hackers, and

(32:05):
every time the bad guys pull ahead, they have the
ability to bring a little taste of doom with them,
except this time the target isn't your computer or even
your bank account or your Facebook account. It's your nervous system.
And people probably will not find that acceptable, the victims of such attacks, that is. Because inevitably the trend we see is that there will be somebody who decides that

(32:28):
such an attack on an individual is a good idea
for whatever kind of reasons they have, or just sort of the impersonal nature of online victimization. Oh yeah, I mean, of course there are going to be motives. One of the things the authors point out is just straightforward cruelty. Think of the cruelty, both random and targeted, that we've talked about with those epilepsy strobe attacks. I mean, that's

(32:52):
just sick. But apparently people think that's okay to do.
There are people out there who are willing to inflict
injury on others because they think they can get away
with it. But then think about the possible financial and
blackmail incentives that would be open to someone who compromised
your brain itself, and then think about malicious interference that

(33:15):
is self-directed. One example of this: in a minute, we're going to talk about the idea of deep brain stimulation, or DBS. But there's a possibility,
for example, of neurotech users to hack their own devices
in an act of harmful self medication. So in the
same way that you might abuse a prescription for painkillers,

(33:37):
you could potentially abuse your neurotechnology, using it in a way that it's not intended, that could be harmful to you in the long run but feels good in the present. Well, or to frame that another way, you could have a situation where people are optimizing their technology in a way that the Man does not approve of. Yeah, you could have that. Or, one example is overclocking computers. Right, people

(34:01):
want to overclock their CPUs. Sometimes you see people messing around with their hardware, and it means, you know, I know I can get more power out of my CPU if I make some adjustments to what
it will allow itself to do. But that comes along
with risks, right, you could risk overheating your computer or
something like that. If people decide they want to do

(34:22):
the same thing with their brains, yeah, then I mean, basically, you could say, I'm going to trade the
potential to have more power in my brain with the
potential for some risk. Yeah. I mean people do that
every day when they look at various pharmaceutical ways
to potentially augment their brain for the completion of a

(34:45):
task or some sort of creative endeavor. And it reminds me, I believe it was Jimmy Page of Led Zeppelin who has, in some interviews, looked back on past drug use and said, well, yeah, that was probably too much, but look at what I did. Look at the work that came out during that time. I'm not sure if this is something he said or this was

(35:07):
commentary on his life, but I mean you could see
someone making the argument with technology to say, yeah, I
overclocked my neural augmentations, but look what I got done. Right. Sure did a lot of homework. So, you could say, and I think the authors of this two thousand nine article say, that we're at a similar stage in the

(35:28):
evolution of neural engineering as we were at the inception
of the Internet. Neurosecurity is not really much of an issue today, but it could be a huge, massively important concern in the near future, and the consequences of a neurosecurity breach can be a lot worse than
a breach of Internet security. Instead of protecting the software

(35:49):
on somebody's computer, you're protecting a humans ability to think
and enjoy good health. Right. Yeah, because that's the thing, right? When all these bad things happen, even with an identity theft, you can always say, well, hey, at least nobody was hurt. At least, you know, nobody physically attacked me. But here we see that line

(36:10):
somewhat erased. So, a few current trends in neurotechnology that are definitely going to up the stakes and increase the risks. One of these things is wireless connectivity, and the authors in this paper recognize that security vulnerabilities
do exist in wireless implanted medical devices, and in past

(36:31):
research they demonstrated that a hacker could certainly compromise the
security and privacy of such a device, such as a
two thousand three model implanted cardiac defibrillator. They found you could, using homemade, low-cost equipment, wirelessly change a patient's therapies, disable those therapies, or induce a life-threatening heart rhythm. And this was a

(36:54):
two thousand nine publication, and things have moved on a bit since then. But even then, at the time, they said, look, the current risk is very low, but such threats have to be taken into account with future designs. Yeah.
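To give a flavor of what taking such threats into account at design time might look like, here is a minimal sketch of authenticating wireless commands to an implanted device with a shared secret and a replay counter, so that a homemade radio can't inject a "disable therapy" command. The protocol is hypothetical, not that of any real device.

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # in reality, provisioned into device and programmer

def sign_command(command: bytes, counter: int) -> bytes:
    # The counter prevents replaying an old, legitimately signed command.
    msg = counter.to_bytes(8, "big") + command
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

class ImplantRadio:
    def __init__(self):
        self.last_counter = -1

    def receive(self, command: bytes, counter: int, tag: bytes) -> bool:
        if counter <= self.last_counter:                       # replayed packet
            return False
        if not hmac.compare_digest(tag, sign_command(command, counter)):
            return False                                       # forged packet
        self.last_counter = counter
        return True  # only now act on the command

radio = ImplantRadio()
cmd = b"adjust_therapy"
assert radio.receive(cmd, 1, sign_command(cmd, 1))             # authentic: accepted
assert not radio.receive(b"disable_therapy", 2, b"\x00" * 32)  # forged: rejected
```

Real designs also have to handle key provisioning and emergency clinician access, which is where much of the actual engineering difficulty lives.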
So there's increasing wireless connectivity of all types of implanted devices,
including neural peripherals. But the other thing we need to
take into account for increasing security risks is the increasing

(37:18):
complexity of these devices. The more a device does, the more there is to worry about from a security standpoint. Yeah,
and this is an area where, to make sense of this, I keep thinking about a chessboard and a chess game. As many of you might be aware, chess has a set number of pieces, a set playing field,

(37:40):
and a set rule system. And people have been playing with these limitations for a long time, and in doing so they've kind of figured out, you know, all the basic moves that can take place early in the game, the openings. Yeah, and it takes a while to get to that point where you're actually in fresh territory, where you're playing a game that has not been played before. So chess

(38:04):
is not like a modern board game, where the base board game comes out and then they might have a new rules supplement or a new expansion. And each time a new expansion comes out, oh oops, they broke the game a little bit, or this rule clashes with this older rule, and we need somebody to weigh in and tell us, you know, how we actually play the game now. Chess doesn't change. Chess

(38:26):
remains the same, and our world of interconnected devices does not stay the same. It changes. It is a modern board game that gets a larger playing field, gets more pieces, and more complex rules pretty much every day.
rules pretty much every day. Now, I think we should
zero in on some specific examples of neurotechnologies. One would

(38:50):
be robotic limbs. Now, I think this is a great
example of something that has made enormous strides just in
the past couple of decades. Robotic limbs are not the future,
this is the present. Multiple labs and inter organizational projects
have already created robotic limbs that can be controlled directly
by brain activity, just like the muscles in a natural limb,

(39:13):
and these things are getting better all the time. That's right. Now, to go back in time just a little bit: back in two thousand thirteen, Bertolt Meyer wrote an article for Wired titled "Ethical Questions Are Looming for Prosthetics." Now, Meyer had a unique insight in this article because he wears a prosthetic arm himself, and he has tried out many different models over the years. So, you know,

(39:36):
he's an insider when it comes to prosthetics and the
use of high tech prosthetics. So at the time he
was using an i-limb, which connected to his iPhone, which of course was connected to the internet. He wrote, technically, a part of my body has become hackable. And he pointed to concerns by crime futurist Marc Goodman, also a Wired writer, and Marc Goodman

(39:59):
had previously covered the fact that hackers had developed a bluetooth device that can cause portable insulin pumps used by certain diabetics to give their wearer a lethal dose. Oh yeah,
this is another one of the vulnerabilities of hackable implanted
devices that don't even necessarily connect to the nervous system
or the brain. Right. And if anyone out there is

(40:19):
a fan of the TV show Homeland, there's actually a
plot involving a vice presidential assassination attempt in the show, utilizing just such a strategy. I've never watched that show. Is it good? I only watched
the first two seasons, but I enjoyed it. So Meyer
argues that we have to recognize and address such
hacking sensitivities before the technology is widely adopted and hacking

(40:42):
becomes a full-fledged threat, which is exactly what we've been saying over and over again. So Meyer's on the same page as the authors of this article we were talking about earlier. They're saying the main thing is we've got to get ahead of it. We've got to start thinking about neurosecurity before these threats really
become an issue. Now, there are plenty of labs that
have been working on robotic limbs, thought controlled robotic limbs.

(41:05):
One more example I wanted to mention: there was a good article in the New York Times about the Johns Hopkins University Applied Physics Lab, where they've got this robotic arm. It's got twenty-six joints and it can curl up to forty-five pounds. Is that more than my biological arm can curl? I wonder. Get into an

(41:26):
arm wrestling contest with one of these robot arms. I
don't know. I feel like we're getting into the weight
of human hair scenarios. Well, anyway, this thing is controlled entirely by the brain, and so it is just that you connect this to what your natural neural impulses would be. You don't have to operate

(41:46):
any external controls or machinery. You just control it with
your brain. There was also a good article I saw in Wired from last year, I think, about President Obama just freaking out when he was watching somebody control a prosthetic arm with his brain. He, like, couldn't contain his glee. Yeah, I mean, it's amazing, like

(42:08):
and that's the thing, the technology has so many wonderful applications, and all of these additional threats kind of come second to that, at least when you're focusing on the wonder. Yeah, and so there are
a couple ways you could say that neural devices would
come into limb control. So one of them would be
restoring the use of a disabled limb. Like if you

(42:30):
have some kind of neurological damage or disease that means
you still have an arm or a leg, but that
you can't control it with your brain. One thing a
neural device could do is give that control back to you.
Another thing would be that you've lost the arm or
the limb and that you have a robotic replacement that
you can control with your brain. Yeah, these are

(42:50):
two possibilities that come up time and time again in, you know, Wired magazine articles and other cool, forward-facing technological publications. Now, we've mentioned a couple of arms, but there are prosthetic legs too, right? Yes, I was looking around and I came across Blatchford's Linx prosthetic, and this communicates from knee to ankle four times a second.

(43:14):
So it's a system in which the foot and knee of the prosthetic limb work together to predict how its wearer is going to move and respond to the position. And this, too, features a bluetooth connection to a smartphone to help manage this interaction, to, like, adjust settings and things like that. Yeah. So, I mean, I

(43:34):
guess that doesn't rely on the smartphone CPU to do
its computation. I guess that would be difficult. My understanding is that this was about, like, tweaking performance. Yeah, okay. But that's the thing you can see with a lot of these technologies. Perhaps they start with using a wireless connection to tweak performance, but then it becomes more, right? Then it becomes about downloading new firmware.

(43:56):
Then it becomes about actually using the computational power of the device, or even the cloud, to control the prosthetic.
All right, so maybe it's time to think about what
would be the security concerns of a robotic limb or
a neurotechnologically enabled limb. One thing I got to think
about is the concept of a ransomware attack. So we've

(44:19):
recently seen ransomware attacks all over the world. Right, I
think they're now saying that the North Korean government might
be behind these ransomware attacks that have hit like the
British NHS and all these other places. As of this recording, I believe there is some speculation that that might be the case, but I was reading an expert

(44:40):
who was saying, well, we're still looking at it. So, okay. But the basic concept of a ransomware attack is, you know... I've seen this on relatives' computers before, where you boot up... Oh yeah. I mean, this is a classic type of attack. You boot up your computer and there's a message that says it's, like, from Microsoft Anti-

(45:00):
Virus Protection or something, your system is not secure. You
must pay to renew your anti virus license in order
to boot up your system, and they ask for your
credit card number. And so they're holding your technology hostage.
In that case, they're pretending to be somebody legitimate, but
they could just come right out and say, look, I've
got all your files. I'm not going to let you

(45:21):
into your phone unless you give me a hundred bucks.
And that was basically how this recent WannaCry ransomware attack worked. Right. But imagine if this was applied to neurotechnology that re-enabled you to move
your limbs. So let's say you're out hiking with your
amazing robotic legs. You've lost control of your legs, or

(45:43):
you lost your legs in an accident. You've got robotic
legs and you're out walking around in the mountains and
suddenly they lock in place and refuse to move. And
then you get a text message demanding a ransom payment
of five hundred dollars worth of bitcoin or your attacker
will not unlock your legs. You know, what do you do? You've got to at least take a chance that they're going to make good on the promise.
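One defensive idea against exactly this availability attack is to make safe operation the local default, so that no radio packet, ransom demand or otherwise, can fully immobilize the limb. Here is a minimal sketch; the device API is entirely hypothetical.

```python
import enum

class Mode(enum.Enum):
    NORMAL = 1
    SAFE = 2    # conservative gait driven by the local controller alone

class LegController:
    """Remote commands may tune the leg, but can never disable it."""
    def __init__(self):
        self.mode = Mode.NORMAL

    def on_remote_command(self, cmd: str) -> None:
        if cmd == "lock_joints":
            # Treat a remote lock request as a compromised link:
            # fall back to the local safe gait instead of freezing in place.
            self.mode = Mode.SAFE

    def on_link_timeout(self) -> None:
        # Losing the phone or cloud link must never strand the user.
        self.mode = Mode.SAFE
```

The design principle: the part of the system that keeps you walking should not depend on, or be overridable by, anything outside your body.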

(46:05):
Yeah, I mean, it's amazing how sci-fi things can get so quickly. I mean, that really does not seem all that far-fetched to me. Another thing might be, how about
a confidentiality attack. So, that last type... remember the three categories of neurosecurity we mentioned earlier: you want to protect the availability, the confidentiality, and

(46:27):
the integrity. That ransom scenario would be an availability attack, right? They say you will not be able to use your device unless you pay up. But you could have a confidentiality attack.
That would be a skimming attack on your robotic arms. Say an attacker gains control of your robotic arms, then uses motion data to infer what you're typing whenever you use your fingers to type a message.
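To see why raw motion telemetry is enough for that, here is a toy sketch: given leaked fingertip coordinates from a prosthetic hand, a nearest-key lookup reconstructs the typing. The keyboard geometry below is invented for illustration.

```python
# (x, y) centers of a few home-row keys, in centimeters (invented values).
KEY_POSITIONS = {
    "a": (0.0, 0.0), "s": (1.9, 0.0), "d": (3.8, 0.0), "f": (5.7, 0.0),
}

def infer_key(tap: tuple[float, float]) -> str:
    # Pick the key whose center is closest to the observed fingertip position.
    return min(KEY_POSITIONS,
               key=lambda k: (KEY_POSITIONS[k][0] - tap[0]) ** 2
                           + (KEY_POSITIONS[k][1] - tap[1]) ** 2)

leaked_taps = [(0.1, 0.2), (3.6, 0.1), (5.5, -0.2)]   # skimmed motion samples
print("".join(infer_key(t) for t in leaked_taps))      # -> "adf"
```

A real attack would have to segment taps out of continuous motion, but researchers have demonstrated similar keystroke inference from smartwatch and smartphone motion sensors, so treating limb telemetry as confidential is the conservative assumption.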

(46:48):
Or how about an integrity attack: the attacker literally makes you punch or choke yourself, or punch or choke others, by taking control of your robotic hands. Well, this reminds me of a
video clip that I actually included in the notes here.
I'm not sure if you've seen this, but it was a gentleman on, I believe, a French news program showing off a prosthetic arm, and he activates it,

(47:09):
and it begins to malfunction, and it just kind of
starts pounding on the table and then pounding on the
man's thigh and he can't get it to turn off.
So you can easily imagine... it wouldn't have to be something as precise as making you go karate-chop crazy on people around you. But what if someone just started making your arm, you know,

(47:29):
go into sort of spasms? That could be bad enough, especially, I mean, if you were driving a car at the time, or if you were giving a public presentation. There are any number of scenarios where just the utter malfunction of the device would be bad enough. Now,
I know a lot of you out there are probably
thinking, like, well, I wouldn't get a robotic limb like that if there are risks like this. But you're probably not putting yourself really in the frame of mind of somebody who has lost control of a limb or lost a limb. I mean, imagine
mind that that somebody who has lost control of a
limb or lost a limb would experience. I mean, imagine
not having that ability and having the technological capability to
regain it. This is not something that I think people

(48:12):
can really be faulted for wanting. No, I mean, that's the thing: if it improves the technology, makes the technology better able to, you know, let an individual cope with a lost limb, then that technology becomes the standard. Of course people are going to adopt it. Yeah,
this is the thing people are gonna want for good reason.

(48:34):
And it's definitely, especially at the beginning, going to seem
like the risks are very low, and hopefully they will be. Yeah.
So, I do want to say here, you know, when it comes to hackable prosthetic limbs, it isn't all Black Mirror paranoia. There is a lighter side as well, and this is where the LEGO prosthetic arm comes into play,

(48:55):
designed by Chicago-based Colombian designer Carlos Arturo Torres, and
it's a modular system that allows children to customize their
own prosthetics. So this is a lesson in engineering and programming,
and a way to help them overcome the social isolation
they might feel over their condition. So I just found
it to be an interesting little side note. Well, yeah,

(49:16):
we already mentioned in a perhaps dangerous or detrimental context
the idea of hacking your own neuroprostheses, which could certainly be the case. But I can also see hacking your own neuroprostheses being something that's very, like, fun and adventurous and exciting. I guess it
would just depend on what the risks and the dangers were.
Oh man, what if we reached the point to just

(49:38):
have a little fun with it? What if, either you hack it or, you know, someone outside hacks your prosthetic arm, and it makes a hand puppet, and then it's able to talk. Who is the famous hand puppeteer? I don't know. It would do kind of like a, you know, cartoony Spanish accent for the

(49:58):
talking hand. You know who I'm talking about? Señor Wences? Is that it? I don't know. No, I have no idea. Anyway, I can't help but imagine, like, a hacked robotic arm suddenly just becoming this little talking fist that starts screaming at you. So add that to the list of near-future concerns. Well,

(50:19):
I can think of a good one: you'd hack your own arm to just make it, at random intervals throughout the day, throw up the rock horns. Then
you'd have no warning when it was going to happen.
You just say, like, fair warning to all my friends
and family. Every now and then, I'm going to rock out.
You've got to get the horns or every now and
then I may just flip you off. It's not because
I don't like you, it's just I've been hacked. Sorry.

(50:43):
It becomes a great excuse. So yeah, there are multiple
sides definitely to having systems that are flexible and can
be manipulated. I mean, you could see that as a
security risk, which it probably is, but you can also
see it as an opportunity for people to express themselves
and try new things with their own bodies. Indeed, well

(51:04):
and on that note, you know, we should take a quick break, and when we come back we will get into some other possible areas where neurotechnology could become hacked. Alright, we're back. Okay, Robert. One more type of neurotechnology that is highlighted in this original paper on neurosecurity is

(51:27):
the concept of deep brain stimulation. Yes. Now, I think we've talked about this some on the podcast before, but deep brain stimulation is basically putting electrodes deep inside the brain to stimulate certain regions with electrical impulses. The basic idea is fairly simple. Of course, the implementation is very complex. Yeah, we get into this

(51:49):
in our brain-to-brain communication episode, which I'll include a link to on the landing page for this episode at Stuff to Blow Your Mind dot com. But yeah, essentially you have sort of the external version, it's kind of the God helmet scenario, right, where you're doing, you know, electromagnetic cranial stimulation, and then the idea of actually putting the devices inside the head,

(52:11):
actually having implants in the brain that are manipulating cognitive function. Yeah,
and there's all kinds of uses of putting electrodes in
the brain. Deep brain stimulation specifically is putting them deep down in there to help with multiple types of chronic medical conditions. Specifically, it's been effective at dealing with Parkinson's disease and with tremors, what you might see called

(52:32):
essential tremor, but there are also uses that have been tried out,
such as for treating major depression or for chronic pain.
And so obviously, the better we get at correcting problems that begin in the brain with electrical impulses, that is a great thing for the people who suffer

(52:54):
from these conditions. But when you're putting the capability to
send electrical impulses deep within the brain in the
hands of a piece of technology, you want to make
really sure that that technology is doing what it's supposed
to do. As you can guess, there could be a
lot of problems with unwanted electrical stimulation of the brain.

(53:16):
And one thing I just want to quote a paragraph
from the two thousand nine paper we mentioned. Quote the
hacker strategy does not need to be too sophisticated if
he or she only wants to cause harm. It is
possible to cause cell death or the formation of meaningless
neural pathways by bombarding the brain with random signals. Alternately,

(53:38):
a hacker might wirelessly prevent the device from operating as intended. We must also ensure that deep brain stimulators protect the feelings and emotions of patients from external observation. End quote. So you can see there are a lot of avenues here.
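On the integrity side, the most basic mitigation the paper's framing suggests is a hard safety envelope enforced in the implant's own firmware, so that no received command, however it was authorized, can drive tissue-damaging stimulation. A minimal sketch follows; the limits are invented placeholders, not clinical values.

```python
from dataclasses import dataclass

@dataclass
class StimSettings:
    amplitude_ma: float    # pulse amplitude in milliamps
    frequency_hz: float    # pulse rate
    pulse_width_us: float  # pulse width in microseconds

# Hard ceiling burned into firmware (placeholder numbers, not clinical values).
LIMITS = StimSettings(amplitude_ma=5.0, frequency_hz=250.0, pulse_width_us=450.0)

def apply_settings(requested: StimSettings, current: StimSettings) -> StimSettings:
    """Reject out-of-envelope requests outright rather than clamping them,
    so a bogus command fails loudly instead of half-applying."""
    ok = (0 < requested.amplitude_ma <= LIMITS.amplitude_ma
          and 0 < requested.frequency_hz <= LIMITS.frequency_hz
          and 0 < requested.pulse_width_us <= LIMITS.pulse_width_us)
    return requested if ok else current  # keep last known-good settings
```

This doesn't stop every attack in the quoted paragraph, since random signals within the envelope could still be disruptive, but it bounds the worst case.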
Also, deep brain stimulation was one of the things we had in mind when we talked about the idea of

(54:00):
illicit or dangerous self use, like if you are self
administering patterns of electrical impulses that may feel pleasurable to
you at the moment, but could be harmful to you
in the long run. And of course, this is another area where you can imagine it being hacked, you know, on both sides. Someone saying, all right, I know this device was just about, you know, treating a disorder, but I'm going to tinker with it,

(54:22):
and now it gives me orgasms when I push a button.
But then the reverse of that, of course, is someone
actually monkeying with your cognitive performance. Yeah, and you can only think, as things like this become more complex,
there will be more and more opportunities for dangerous exploits
as well. You know, basically the possibility for dangerous exploits

(54:46):
seems to track along with the potential for helping the brain. Right,
as we have more power to heal, we have more power to destroy. You see that with any technology, right? You see those parallel tracks of the beneficial applications for humanity and the negative, self-destructive ones. Totally. It's nuclear power at the neural level. Yeah,

(55:09):
all the beneficial applications of chemistry also produced chemical weapons.
So I want to look at one more potential
neurotechnology that could have great rewards and great risks. And
so this one is going to be cognitive augmentation. So
one commonly discussed example is memory augmentation. This comes with

(55:30):
its own benefits and risks. The risks are fairly obvious.
If you have the capability to augment memory, you may
also have the capability to degrade, erase, or alter existing memories, or to create false memories or impressions, and alter the entire integrity of a person's memory system.
But I got another idea. What about computational upgrades? Assuming

(55:54):
such a thing is ever possible. We don't really know if it is, but we'll assume for now that it could be possible to upgrade the brain's ability to, say, do math or computational reasoning. Okay, just an implant that boosts some sort of cognitive function in your brain, your point being, like, either your handling of

(56:14):
mathematics or your memory, etcetera. Yeah, So, Robert, I got
a scenario for you. Somebody offers you a free surgery
that they say has a ninety percent chance of increasing your IQ by twenty five points. Would you take it?
Hmm, I don't know. That's pretty good, pretty good odds of success. Yeah, you don't have to answer now.

(56:37):
I got one to make it a little more obvious. If you're a person out there who's listening and you'd say, hell no, you know, I'm not messing around with my brain, I like my brain the way it is, I'm not going to introduce all these risks, then consider this: what if everybody else around you has taken it? Yeah, so all of your friends, your co-workers,

(56:58):
everybody in your professional circle, all of your professional rivals, they all take the upgrade. This is a big issue in transhumanist thought. You know, who gets to be transhuman? And what does it mean to say no to some sort of transhuman experience, such as, you know, a surgical implant that boosts your cognitive ability? Well, I'm just talking about voluntary willingness.

(57:21):
So of course the question of who this is available to is a big question, but it's a different question. I'm saying, let's just assume we're in a crazy scenario where it's freely available to everyone, and the only question is, do you want it? Will you voluntarily take it? If you're the first person, you'd probably say, like, no,

(57:43):
I don't think I want that, it's too weird. If
you're the last person, you would probably be desperate to
catch up. Right? Would you voluntarily choose to remain at a cognitive deficit to everybody else around you who has upgraded themselves? I mean, that's the thing, people are going
to take the risk. People are going to be hungry
enough to take the risk. Some people are going to

(58:03):
be comfortable enough not to, but for how long? Yeah, this is where we get into a scenario of something that I would call, and maybe this isn't the best term for it, but I'm going to try this out, the term is irresistible availability. And so I'm going to posit that brain-computer interfaces and certain types of neural augmentation,

(58:24):
cognitive augmentation, if they're possible, they are going to fall
into this category of irresistible availability. So I would say,
you know, consumer technology that looks scary at first tends
to go through several phases. Of course, you've got the
lab phase, right, You've got the alpha and beta phase.
It's fairly contained, constrained. It's testing with people who are

(58:46):
in on the game, basically. And then you've got a release, and you've got early adopters. These are people who are technologically adventurous, and they start using this new thing. They tend to like to show off its advantages. They're more willing to accept risks that, you know, haven't been worked out yet. They're willing to put up with the kinks that haven't been solved. Then the intermediate adopters wade in, and at

(59:10):
some point a new technology that originally seemed scary and
weird and unnecessary reaches a tipping point of convenience, advantage, and widespread adoption. And I would say there's definitely a
social element to this. It's not just the true, you know,
financial or time convenience it provides, but it's the fact

(59:32):
that everybody else is doing it. And at some point
it goes from something that I don't need and that
scares me to something I couldn't imagine living without. And
you can see this in many contexts. Think back to cell phones, how cell phones went from this weird and unnecessary extravagance. Yeah, like characters

(59:54):
in movies had cell phones, especially in their car. Remember? I still enjoy watching, like, you know, eighties films where there's a supervillain of some sort in a crappy B movie; of course they have that big, bulky car phone. You're like, oh, imagine a world in which someone makes a phone call from their car. Do you remember when paying for something online with a credit card was this really weird,

(01:00:18):
scary and unnecessary thing? I specifically remember that, like, why would anybody ever use a credit card to pay for something on the internet? That's insane. Yeah, that's what you do: you call an eight hundred number and you use your credit card that way. And then think about maybe mobile banking and transactions, ride-sharing apps like Uber

(01:00:38):
and Lyft. You just think about this progression from scary and unnecessary to fundamental. It's the progress of irresistible availability. And I very much think that neurotechnology could easily go in the same direction, as the advantages become more clear, as the risks sort of get blurry

(01:00:59):
and go out of focus because so many people are using it. It just starts to look more and more like something that you can't go without. And then once you've tried it, you're in the pool. Yeah. I mean, I just keep thinking back to flying, like, if flying in an airline makes sense, then everything makes sense. Yeah, you're clearly defying the will of

(01:01:19):
God by getting in this machine and ascending like a bird. So yeah, everything else is on the table too. Yeah, man. And they don't even try to make it pleasant anymore, and people still can't stop doing it. Yeah, like, they don't have to sugarcoat it. Yeah, you're in a flying death machine. I'm on board. Well, actually, to speak of death machines, of course, the classic comparison here is

(01:01:39):
the car, the automobile, which is far more potentially deadly
than just flying on an airline. Yeah. Imagine cars were new, nobody drove them, and they were a brand new invention just now being debuted. And they told you, okay,
on average, about thirty three thousand people a year are
going to die in these machines in the United States alone.

(01:02:01):
Do you want one? Yeah, you would say, I don't know, that sounds kind of dangerous. But the thing is, we were born into this world, right? We're born into the world of the automobile, and so you just take it for granted, like, yeah, this is the roll of the dice we take every day. So of course it's normal. The convenience creeps in,

(01:02:22):
the widespread adoption makes it look normal and okay, and
so it's irresistible availability. It's just ubiquitous and you can't
get around it. Yeah, and even things that are not
available to everyone just become increasingly normal. Like I keep
thinking back to, like, Time magazine headlines about test tube babies back in the seventies, like the original stigma

(01:02:46):
about IVF. Yeah, and of course that just became increasingly normal, just increasingly everyday; like, today it's just another reproductive option that's on the table. And I mean, I think that was also influenced by social stigma and certainly, like, misogyny, certain ideas about, you know, people trying to control what women's

(01:03:09):
bodies are for. But yes, just the technological aspect alone certainly has become more accepted. Yeah, so it seems undeniable that we'll see the same thing occur with neural implants. Yeah, and this stuff may be coming a lot sooner than we think.

(01:03:29):
So we've talked about how there's already deep brain stimulation and robotic limbs. These are already in development; they already, in some cases, work pretty well. It's just a question of them being deployed more in the wild and becoming more widely available. But the question of cognitive augmentation, that's still more of a future concern. We haven't really

(01:03:50):
discovered any strong entryways into that arena of technology yet, but we could be closing that gap really fast, is what I'm saying. So how about neural lace? Elon Musk. Oh yeah, the neural lace. I love this idea because, of course, the guy who coined the term neural lace is the sci-fi author Iain M. Banks, one

(01:04:11):
of my personal favorites. Always comes back to the Banks here. Yeah. In his books, there's the Culture, which is this anarcho-utopian far-future society, and everybody in the Culture has all these transhumanist adaptations, such as, like, drug glands that they can use to gland various substances to change how they're feeling. And they all have this

(01:04:33):
neural lace that enhances cognitive ability and kind of gives them a... basically, they're tied into a vast sea of information that they can call up as they need. So basically the idea is it's a way of robustly connecting the brain to the external information systems of the Internet, or whatever their future version of the Internet is. Yeah,

(01:04:54):
it would be like Google Brain. Yeah. So that's pretty close to what Musk seems to have in mind. Now, obviously we're not there yet, but we do have prototypes of this sort of technology. It's nowhere near Banks's level yet. But in March, Elon Musk was in the news promoting this new neurotech startup

(01:05:15):
called Neuralink, which he basically plans to use as the vanguard of the coming neuro-cyborg movement. And the idea of the neural lace, really, the short version is it's this ultra-fine mesh material that can be injected into the braincase with a needle. You get the needle inside the skull and you inject this mesh material

(01:05:37):
over the outside of the brain, where it naturally unfurls
to cover the outer surface of the cortices, and from
here it melds with the brain and can offer supposedly
extremely precise electrical feedback and electrical control of brain activity, what they would call, quote, a direct cortical interface. And
supposedly trial versions of this have been deployed in mice

(01:06:00):
with apparently very few side effects. And so in the short term this might prove a useful treatment for various neurological disorders, you know, age-associated neurodegenerative diseases like Alzheimer's and other neurological disorders. But Musk is not shy about the sci-fi stuff he's into. He's got this other motive,

(01:06:21):
which is that ultimately he's interested in cognitive upgrades. He
wants cognitive augmentation of the human brain. And one of
the main reasons he's given publicly is that Musk is
one of these people who's concerned about existential risk from
artificial intelligence. So we've talked about this a little bit
on the podcast before. I think we talked about

(01:06:41):
it in our Transhumanist Rapture War episodes, but maybe we should do a whole episode or episode series on this sometime, because I do think the question of the risks posed by artificial intelligence is interesting. And one of the reasons it's interesting is that it's one of these questions where really smart people, who really know what they're talking about, are totally on both sides of the issue. You hear

(01:07:04):
people saying we need to be worrying about existential risk from AI right now, and other smart people saying these people are lunatics, you know, this is not a concern. And I'm not sure which side of the issue I fall on. Yeah, it kind of depends on whose argument I'm reading. Yeah, I kind of fall in line with whatever the last rational argument I heard on

(01:07:27):
the matter. Yeah, I guess I'm there. I consider myself highly persuadable on this topic still. But anyway, Musk is one of these people who says, look, creating superhuman artificial intelligence is a genuine risk to us. We at the very least risk becoming irrelevant, if not being destroyed.
And so he thinks that in order to avoid becoming

(01:07:47):
irrelevant or worse in the face of superhuman AI, we've
got to be willing to upgrade our brains to keep
up with the machines. In other words, the only way
to make sure that you don't fall victim to machine
intelligence is to merge with it, to become it. Yeah, and in his view, neural lace might be one way to get us there, giving us the power to augment our bio brains with neurotechnology to become superhuman mind hybrids. So

(01:08:12):
if the AI god is essentially a cat's cradle design, we want to make sure we're the fingers. We want to make sure we're, you know, an
important aspect of its spiritual body. Yeah, even if we're
not its enemy, we also don't want to be just
some irrelevant obstacle to whatever its goals are. We want

(01:08:32):
to be thoroughly integrated with it and its motives. Yeah, which kind of comes back to Iain M. Banks. That's kind of how he weaves the humans and the humanoids of the Culture into everything. Like, the Minds, the AIs that ultimately rule everything and are making all the hard decisions, they see the value in having human operatives, and they also have this

(01:08:56):
kind of, like, hard part of their programming, I guess their sort of corporate culture, that there's something important about human life. Yeah. Now, if you're
still one of those people out there saying, Okay, I'm
just never going to get any kind of neurological implants.
By the way, I'm not advising people never get neurological implants.

(01:09:19):
I'm more saying that the people designing these things really
need to be thinking super hard about security from day one.
I guess we're way past day one, but from day whatever this is right now. But you don't just have to worry about the future of neurological influence from technology if you get an implant; there are other ways to influence

(01:09:39):
the brain with technology. Yeah, I mean, I mentioned this a little bit earlier, but I think another area of potential exploitation would be, you know, if you had some manner of external, fine-tuned electromagnetic cranial stimulation device, perhaps one that aids with the treatment of a psychological condition, or perhaps even works recreationally. Imagine malware

(01:10:00):
or a hacking scheme that turns such brain-function management on its ear. You know, how fast would you be able to rip the thing off? And, oh, I can't use it anymore, you know. I'm gonna have to go a day without. I'm gonna have to bring this thing into the shop. How am I going to get across town without my god helmet to get me there? Now, these external devices, I think, are a

(01:10:20):
little less plausible on this account than implanted devices are, because they're less precise. Right. So you've got transcranial electrical stimulation, also transcranial magnetic stimulation, these things that, you know, apply the electromagnetic force to the outside of the head. When I've seen experiments with these types of things so far, the results they're able to induce in the brain are

(01:10:43):
very, very blunt and broad, if you know what I mean. They're not nearly the kinds of minute, targeted results that you would get by implanting electrical devices inside the skull or inside the brain. Still, if it keeps me from auditing my body thetans appropriately, then that's going to ruin my week. Yeah. So on one hand, I

(01:11:07):
do think this is a real concern. And I should
also mention that one of the other papers we looked
at was a paper by Saldi Costa, Dale R. Stevens,
and Jeremy A. Hansen, from the International Conference on Human Aspects of Information Security, Privacy and Trust, and essentially what they look at is trying to create

(01:11:27):
a broad architecture for an intrusion prevention system for brain-computer interfaces. That's kind of a hard thing to design at this point, because, you know, you don't know exactly what all these systems are gonna look like. But the basic system they come up with is that, you know, you'd have a two-tiered security system where any Internet or external input coming into the brain has to

(01:11:51):
go through what's known as an intrusion prevention system, which
is just a system that tries to screen traffic passing
into a network or a machine. If traffic looks suspicious,
it says, sorry, you can't go in. And then you'd
have to pair that with, and I love this, sort of the brain equivalent of an antivirus program. An antivirus program looks at what code is executing on this

(01:12:13):
computer right now, what processes are happening, and if it sees suspicious activity, it shuts it down. The brain version would have to use some kind of signal processing to look at what's happening in the brain, or in the neural device, and say, does any of this look suspicious, like something the brain wouldn't normally be doing? And if so, you might have to

(01:12:35):
disconnect the neural device or shut it down.
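To make that two-tiered idea a little more concrete, here's a minimal sketch, in Python, of how such a gatekeeper might be structured. To be clear, this is an illustration of the general shape the hosts describe, not code from the paper: the NeuralCommand fields, the trusted-source list, the safety ceilings, and the three-sigma anomaly threshold are all hypothetical assumptions.

```python
# Hypothetical sketch of the two-tiered defense described above.
# Tier 1: an intrusion-prevention filter that screens inbound commands.
# Tier 2: an "antivirus"-style anomaly check on what the device is doing.
# All names, fields, and thresholds here are illustrative assumptions.

from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class NeuralCommand:
    source: str          # where the instruction claims to come from
    amplitude_ma: float  # requested stimulation amplitude, in milliamps
    frequency_hz: float  # requested stimulation frequency, in hertz


TRUSTED_SOURCES = {"clinician_console", "onboard_controller"}  # assumed whitelist
MAX_AMPLITUDE_MA = 3.5    # assumed hard safety ceiling for the device
MAX_FREQUENCY_HZ = 185.0  # assumed hard safety ceiling for the device


def intrusion_prevention(cmd: NeuralCommand) -> bool:
    """Tier 1: refuse traffic that is untrusted or outside safe bounds."""
    if cmd.source not in TRUSTED_SOURCES:
        return False
    return cmd.amplitude_ma <= MAX_AMPLITUDE_MA and cmd.frequency_hz <= MAX_FREQUENCY_HZ


def looks_anomalous(recent: list, baseline: list) -> bool:
    """Tier 2: flag device activity far outside the patient's own baseline.

    `baseline` needs at least two samples for a standard deviation.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    # Flag if recent average stimulation drifts more than 3 sigma from baseline.
    return abs(mean(recent) - mu) > 3 * sigma


def gatekeeper(cmd: NeuralCommand, recent: list, baseline: list) -> str:
    if not intrusion_prevention(cmd):
        return "rejected"          # suspicious traffic never reaches the device
    if looks_anomalous(recent, baseline):
        return "device_safe_mode"  # disconnect or shut down, as described above
    return "accepted"


if __name__ == "__main__":
    # An out-of-bounds command from an untrusted peer is stopped at tier 1.
    cmd = NeuralCommand(source="unknown_peer", amplitude_ma=9.0, frequency_hz=500.0)
    print(gatekeeper(cmd, recent=[2.0, 2.1], baseline=[1.9, 2.0, 2.1, 2.0]))  # rejected
```

The design choice worth noticing is the pairing: screen traffic at the boundary first, then keep watching the device's actual behavior against the patient's own baseline, and fail safe when either check trips.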
Yeah. And this is an area where I can just imagine, instead of going full-fledged, face-forward into a thought police scenario, we're kind of backing into one, because you end up with a situation where human cognition is this byproduct of organic and machine, right, it becomes increasingly cyborg,

(01:12:59):
and then therefore any kind of intrusive thoughts or even criminal thoughts become kind of like bad behavior in a dog, right, not a wild dog but a pet animal, because what always happens? People are saying, oh, well, is this the owner's fault or the dog's fault? And is there any way to really distinguish between all of these things, because the condition

(01:13:21):
of the dog is so manipulated and so changed by
its relationship with humans. Yeah, that's a very good point.
I mean, I can see a scenario where, in the future, say you've got a deep brain
stimulator in your head, or you've just got neural lace
or something like that, some kind of neuro peripheral technology

(01:13:42):
that changes the way your brain works. And then you do something and you say, that didn't seem like me. Did the neuroprosthetic make me do it? Let's say I went and robbed a bank. Could I sue the company that made my neuroprosthetics and say, this is totally out of character for me, I don't know why I did that, I never would have robbed a bank normally? And I

(01:14:04):
think what happened is that my neuroprosthetic malfunctioned and it
artificially pumped up my aggressiveness and lowered my inhibitions and
did all this stuff that temporarily turned me into a
bank robber. And that's not my biological brain's fault. Or
it could be that you went to the wrong website, you clicked on something you shouldn't have, and that

(01:14:26):
somehow that managed to, like, follow the chain up to your brain itself and altered your behavior. Oh, I didn't even think about that with neuroprosthetics. So something you click on on the Internet, or some search you do on Amazon, can now not only follow you around showing you ads at different websites, but follow you into your brain. Yeah. Or maybe they didn't even hack you; say they hacked an advertisement that

(01:14:48):
you passed, and that advertisement communicates with devices that you have, you know, so that it can figure out what your behavior is and, you know, feed you the right advertisements, maybe in your dreams or something. Yeah. So the main thing, my main point in this episode, is that I think we cannot depend on the

(01:15:10):
consumers opting out as a way to avoid these risks, because of this irresistible availability thing. As these things become more available, more widespread, and more useful, people are just not going to be able to resist the urge to use them. And in some

(01:15:31):
cases, you know, if you suffer, like, an injury or disability or something, there's no reason you should want to resist them, right? They will give you lost functionality back. Yeah, I mean, unless there's an end to the advancement of technology, or say there's a Butlerian Jihad and people, you know, en masse decide, no, we're not going to

(01:15:53):
cross this point, we're gonna put in place laws that keep us from augmenting ourselves and becoming thinking machines. Yeah. So I'm saying you can't depend on the individual consumer or patient to opt out. That is not something that should be part of the thinking on this. It should be that security concerns are absolutely taken into consideration

(01:16:16):
from day whatever this is now, because it's never from day one. You gotta be ahead of those brain hackers. Yeah, all right, so there you have it. Hopefully we gave everyone, you know, definitely some room for a little paranoia and a little sci-fi wondering for sure, but also just some real

(01:16:38):
facts about technology and security and how the footfalls tend to go in this trek, and one hopes those footfalls are chosen by one's own free will. Yes, indeed. So hey, if you want to learn more about this topic, head on over to Stuff to Blow Your Mind dot com. That's where you'll find all of our podcast

(01:16:59):
episodes, blog posts, links out to our various social media accounts such as Facebook, Twitter, Tumblr, Instagram. We're on all those things, and the landing page for this episode will include some links out to some of the sources we talked about here today. And if you want to get in touch with us directly to give us feedback on this episode or any other, or to let us know if you think you would accept a voluntary opt-in

(01:17:21):
neuro enhancement, or if you want to suggest topics for
the future or anything like that, you can always email
us at blow the mind at how stuff works dot com. For more on this and thousands of other topics,

(01:17:43):
visit how stuff works dot com.
