Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production of iHeartRadio's
How Stuff Works. Hey there, and welcome to TechStuff.
I'm your host, Jonathan Strickland. I'm an executive producer with
iHeartRadio and How Stuff Works, and I love
all things tech. A couple of recent stories
in the summer of twenty nineteen have been about the
(00:27):
subject of brain computer interfaces, or BCIs. That's
a topic I've touched on with previous episodes of Tech Stuff,
and if you listened to the show Forward Thinking, we
covered it on that show as well. But since we've
now got people like Elon Musk and Mark Zuckerberg behind
the efforts of creating BCIs, I figured it'd be
(00:49):
a good time to revisit the topic, talk about what
it is, how far along or not far along we
are with the technology, and the ethical considerations we
need to keep in mind when we're developing tech like this.
So a brain computer interface is exactly what it sounds like.
It's a methodology to allow a user to control or
(01:12):
interact with a computer directly through brain activity through thought.
It marries the complicated subjects of neuroscience and computer science
and a lot of media outlets sort of gloss over
how truly complicated this is. We have a tendency to
either think of our brains as being kind of like computers,
(01:35):
or of computers as being kind of like brains, but
really they're quite different, and creating an interface that can
translate the operations of one so that it makes sense
to the other is harder than it sounds. The goal
of a brain computer interface is to strip away as
much of the barrier between our intent and the computer's
(01:58):
actions as possible, and to get beyond the limitations of
other types of interfaces. So let's talk about those other
interfaces for a second to kind of have a comparison here. So,
in the very early days of computers, like the earliest
electro mechanical computers, the interface was incredibly complicated. It consisted
(02:19):
of switches and plugs, so you'd have to physically make
changes to the machine to run a different calculation. You
programmed it by physically changing the connections. Operating a computer
required learning a pretty intricate system, so it was a
very high barrier to using computers. But on the other hand,
(02:40):
there were hardly any computers to use, so it wasn't
like people were stumped all the time. It wasn't like
you were in the IT department looking at a
manual that was five thousand pages long. There were only
a few computers in the world at all. Now, gradually
this gave way to other interface systems, and at first
they were still incredibly complicated, at least by today's standards.
(03:04):
The punch cards of yesteryear are a really good example.
You could feed a series of punch cards which represented
a program, to a computer. The computer would read the
punch cards, make whatever calculations were indicated by those punch cards,
and then it might in turn spit out a different
set of punch cards, or it might light up some
(03:25):
indicator lights. Maybe if you were lucky, you had a
printer and it would print out a result. But boy,
it was still a pretty tough barrier of entry as
far as computer use was concerned. It wasn't something the
average person could tackle on his or her own. Now.
A huge breakthrough was the incorporation of computer displays and keyboards,
(03:47):
and there were other advances in computers at the time
that also made a huge difference, like the development of
operating systems and high level programming languages. And we obviously
still use keyboards and displays today. So these were really
sticky types of interfaces so to speak. Actually that could
be literal if you tend to drink sugary sodas while
(04:07):
you're computing, but I'm mostly talking about the metaphorical here. Anyway.
The computer mouse would then expand how we would interact
with computers, as would the graphical user interface or GUI.
This would allow us to have new ways to interact
with our machines, and then we would see further advancements
like voice recognition systems and touch screen interfaces. It was
(04:31):
pretty typical that each advance in technology, if it was
implemented well, would make interactions with computers easier and more natural.
So when you see a kid look at a screen
and the kid has never really played with keyboard, mouse
or even touch screens, yet you might see them reach
(04:52):
out and try to touch things on the screen. Well,
that tells you, oh, a touch screen might work better
for certain things. Maybe not everything, but certain things. And
then you start to implement that kind of interface and,
lo and behold, you see you've created a new way
to interact with this machine. Well, brain computer interfaces would
(05:12):
remove even those small gaps between our intent and executing
a command on a computer. Ideally, you would have a
non invasive technology, meaning you wouldn't have to have any
kind of surgery or anything in order to actually use
this stuff, and that technology would be able to interpret
your thoughts as commands, and then the computer would carry
(05:34):
those out, and the computer could potentially send information back
to you through those same channels that you could interpret
in some meaningful way. And there are a lot of
potential uses for this kind of technology, and many of
those uses are truly noble in their mission. For example,
and I'll talk a lot about this in this episode,
it could allow people who have severe mobility issues an
(05:58):
outlet for interacting with the world around them that
they might not otherwise have. With the proper interface, someone
who is paralyzed and may not be able to move
or even speak could use the interface to activate commands
on a computer in order to communicate with others or
carry out tasks with the help of robotics and automated systems.
(06:18):
We've actually seen applications of brain computer interfaces do this
kind of thing already to a limited degree, and frankly,
it's amazing and inspiring. I highly recommend you seek
out stories and videos about these types of projects because
they are phenomenal. But there are use cases beyond helping
(06:42):
people gain more autonomy, and some of them are a
bit well, let's say they're a bit questionable. So let's
walk down the history of brain computer interfaces and then
we will revisit these specific examples, as well as what
is currently going on with Elon Musk and Facebook getting
(07:03):
into the game. Well before such a thing could even
be theorized as a brain computer interface, we first had
to understand more about how the brain itself works. And
this is a non trivial thing. The brain was largely
an organ of mystery for a very long time. In
the late nineteenth century, physicians and scientists were first starting
(07:27):
to learn that there is electrical activity in the brains
of mammals. We started to get an understanding that our
nervous system is an electrochemical system, that electricity and chemicals
play a very important part in sending messages through this
system in a very sophisticated way. Now, this was the
same time that physicists were getting a better understanding about energy,
(07:52):
and so there was a curiosity about energy in the brain.
The brain does stuff, It must get energy, it must
use energy. What is that mechanism? What the physicists of
the time didn't yet understand was that the brain was
this electrochemical machine. They didn't have a complete picture yet.
So for a few decades, research mostly with animals like dogs, rabbits,
(08:15):
and monkeys, showed that brains generated electrical activity in some fashion,
and by the early twentieth century we had a rudimentary
understanding of brain waves. Then Hans Berger, a German psychiatrist
and physicist, recorded the first human EEG in
the mid nineteen twenties. Now, Berger was interested in investigating
(08:38):
psychical energy in the brain. He was convinced that there
is some energy beyond what is needed to do quote
unquote work that would be thinking and operating the human body.
He never did uncover any sort of psychical energy in
his research, but his invention of the electroencephalogram
(08:59):
would set the stage for neuroscience in the twentieth century.
And I'll have to do a full episode about Berger
in the future because he was a really interesting person.
His life story is full of drama. Now, over the
next several decades after Berger's invention of the EEG,
(09:20):
or at least the refining of the EEG,
since there were sort of precursors to the EEG
before Berger got involved anyway. Over the following years,
scientists and doctors refined their understanding of electrical activity in
the brain, and they observed phenomena like REM sleep.
They identified different types of brain waves. Neuroscientists also got
(09:43):
a deeper understanding about what the different parts of the
brain do and are responsible for, and that gets really
super complicated. There are sections of the brain that are dedicated
to very specific tasks and major parts of the brain
include stuff like the frontal lobe, the parietal lobe, the
temporal lobe, the occipital lobe, cerebellum, and more. And I
(10:06):
am no neuroscientist, and to go into deep detail on
all of these parts would necessitate at least a couple
of episodes plus an expert on the subject matter. So
I'm just gonna leave the general discussion of the brain
with an acknowledgement that the brain is really complicated. Now, there's
(10:27):
still a ton that we don't know about the brain,
and probably there's stuff we don't know that we don't
know, but we've made a lot of progress, which has
led some enterprising researchers, scientists, and technologists to look
into ways to create an interface between the machine in
our heads and the computers around us. In the nineteen sixties,
(10:51):
a neurophysiologist named William Grey Walter demonstrated that the electrical
signals in brains could do useful work outside of our noggins.
And it was a fairly primitive demonstration, but an effective one.
He had subjects who had electrode implants for EEGs.
By the way, EEGs can either involve having
(11:14):
surgical implants of electrodes or electrodes that are part of,
you know, the sticky pads that stick against the scalp.
They have to be positioned in very precise places. But
you can have invasive or non invasive EEGs.
What William Grey Walter was working with were the invasive types.
(11:35):
So he had these people who were wired up with
EEGs and they were navigating through a slide show
with an old slide show projector, and they were using
a remote control to advance to the next slide. So
when they were done looking at a slide, or if
they were told to go to the next slide, they
would push a button and it would go to the
(11:55):
next slide. But what Walter didn't tell these people who
had their EEGs hooked up to the system
was that the remote control was inert. It
didn't work at all. It was just a dummy remote. Rather,
when the subject's brain sent the command I'm going to
use the remote now, the electrodes would pick up that
(12:18):
brain activity and it would send those signals onto an amplifier,
which would boost the signal enough to send a command
to go to the next slide to the projector. It's
a very simple one, just the same sort of electrical
impulse that the projector would get if you pushed the
button. Now, the subjects reportedly were startled by this experience
(12:40):
because frequently they would make the decision that they were
going to go to the next slide, and they would
be in the process of pushing the button when the
slide would change before they had pushed the button.
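The trigger mechanism Walter used can be sketched in a few lines: an amplifier boosts the weak electrode signal, and the projector fires the first time the boosted signal crosses a threshold. The gain, threshold, and sample values below are invented for illustration, not taken from the actual experiment.

```python
# A rough sketch of Grey Walter's amplifier-plus-trigger idea: scan a stream
# of "EEG" samples and fire the slide-advance command the first time the
# amplified signal crosses a threshold. All numbers here are illustrative.

GAIN = 50.0        # the amplifier boosts the weak electrode signal
THRESHOLD = 100.0  # fire the projector when the boosted signal exceeds this

def first_trigger(samples):
    """Return the index of the first sample that would advance the slide,
    or None if nothing crosses the threshold."""
    for i, sample in enumerate(samples):
        if sample * GAIN > THRESHOLD:
            return i
    return None

background = [0.5, 0.8, 1.2]       # ordinary activity: never triggers
readiness = [0.5, 0.9, 3.0, 4.1]   # a burst of intent: triggers at index 2
```

The whole trick, of course, is in picking electrodes and a threshold such that only the "I'm about to push the button" activity crosses it.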
They said it started to feel like the slide projector
had anticipated their action. It had guessed that they were
(13:01):
ready to move on even though they had not yet
pushed the button. And in a way, that's exactly what
it had done, or rather it was able to act
faster than the subject was, and it raised some really
interesting questions about consciousness because the implication was that we
can arrive at a decision to do something before we
(13:23):
are actually aware of the decision we have made. And
so in theory, if you have a brain computer interface,
you might get the sensation that you're working with a
machine that's actually anticipating what you want to do before
you are aware that you wanted to do it, which
is both kind of creepy and amazing. Now, in reality,
(13:44):
it's because you wanted to do that thing, but your
awareness of your desire hasn't caught up yet. Brains
are funny things. It's also possible that those
implanted electrodes, which can detect activity in relatively small regions
of the brain, allowed for more precision when looking for
signals that would indicate I'm going to push the button,
(14:06):
rather than signals that would indicate something like blink now
or eat soon or whatever. So, in other words, it's
very important to target the neurons that are going to
be responsible for whatever activity you're looking for. You can't
just have, you know, a general brain reading device that's
looking for any electrical activity in the brain. There's always
electrical activity in the brain, so you have to be
(14:28):
looking for precise activity, or else you would have a
system that's constantly activating under no particular impulse. Jacques J.
Vidal coined the phrase brain computer interface in nineteen
seventy three. Vidal presented a plan toward establishing the technology for such
(14:49):
an interface at the University of California at Los Angeles.
And it should come as no surprise that one of
the big organizations that has funded a lot of research
into brain computer interfaces is DARPA, or the Defense Advanced
Research Projects Agency in the United States. This is the
part of the Department of Defense that oversees money that
(15:10):
can be granted to projects that relate back to national
security and defense strategies for the United States. Sometimes these
projects have an obvious connection to national defense, such as
research into new types of weaponry. Other times the connection
might not be quite as clear, such as the DARPA
Grand Challenges that helped bootstrap the development of driverless car technologies.
(15:34):
But I think you could agree that with brain computer interfaces,
you could think of a lot of different potential uses
to augment national defense with that kind of technology. So
DARPA has funded a ton of research into BCIs
and much of that work has had incredible results. Now
I'm not just talking about a device that would let you control,
(15:55):
say a computer cursor with your mind, but technologies that
would help people regain lost senses like hearing or vision.
And it's all through stimulating neurons in specific ways, so
it becomes a bidirectional communications channel. It's incredible stuff. And
again the subject matter is vast and it would require
lots of episodes. But the bit I wanted to focus
(16:16):
on in the early history was a project in nineteen
seventy four. It was called the Close Coupled Man Machine
Systems Project, and later on it would undergo a name change.
It would become known as Biocybernetics. To quote the article
"DARPA-funded efforts in the development of novel brain computer
interface technologies" in the April two thousand fifteen Journal of
(16:39):
Neuroscience Methods. Quote. This program investigated the application of human
physiological signals, including brain signals as measured non invasively using
either EEG or magnetoencephalography (MEG),
to enable direct communication between humans and machines, and to
monitor neural states associated with vigilance, fatigue, emotions, decision making, perception,
(17:04):
and general cognitive ability. The program yielded notable advancements such
as detailed understanding of single trial sensory evoked responses in
the EEG of human participants. These efforts demonstrated that
neural activity in response to visual checkerboard stimuli alternating at
different frequencies at each of four fixation points could be
decoded in real time and used to navigate a cursor
(17:27):
through a simple maze. End quote. Fascinating stuff. Now we're
gonna take a quick break, but when we come back,
I'll give a little bit more about the history and
talk about the different approaches to brain computer interfaces. Now
(17:50):
to detail every BCI project since the early nineteen
seventies would take us hours. There have been countless. Some
of them have led to amazing advances and breakthroughs, some
revealed frustrating barriers and challenges that we've yet to overcome.
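As a side note, the checkerboard-frequency decoding in that DARPA quote is essentially what's now called SSVEP decoding: each fixation target flickers at its own rate, and you pick the target whose flicker frequency dominates the recorded spectrum. Here's a minimal sketch of the idea; the sampling rate, flicker frequencies, and simulated signal are all assumptions for illustration.

```python
import numpy as np

# Sketch of the checkerboard-frequency decoding: each fixation point flickers
# at its own rate, and we pick the target whose frequency carries the most
# spectral power in the recorded signal. All numbers here are illustrative.

FS = 250                                 # samples per second (assumed)
TARGET_FREQS = [8.0, 10.0, 12.0, 15.0]   # one flicker rate per fixation point

def decode_target(eeg):
    """Return the index of the flicker frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in TARGET_FREQS]
    return int(np.argmax(powers))

# Simulate one second of "EEG": a 12 Hz evoked response buried in noise.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(FS)
```

Real decoders use more robust statistics than a single FFT bin, but the principle is the same: the brain's visual response follows the flicker rate of whatever you're looking at.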
I'll talk about a few more examples in a moment,
and I should stress that I'm just kind of arbitrarily
(18:11):
picking these examples because there's been so much amazing work
in this field. But before I get into that, I
want to talk about one of the biggest challenges in
the way of a robust brain computer interface, and that's
reading the signals of the brain reliably. So there are
two broad categories you can consider when it comes to
monitoring brain activity, and those would be invasive methods and
(18:35):
non invasive methods, or surgical and non surgical. Typically, though
there are some methods that are considered non invasive that
still involve implanting stuff into the brain, it's just it
tends to be through less invasive procedures like an injection
as opposed to brain surgery. But generally
(18:55):
we're talking about technology that has to be surgically implanted
on the brain, or technology that can monitor brain
activity without first having to, you know, crack open a skull.
And as you can imagine, this is a pretty big
difference right between these categories. So let's break down the
pros and cons of each of them. So the cons
with invasive approaches are pretty darn easy to anticipate, right,
(19:17):
I mean, we use brain surgery as a stand in
for any activity that requires an incredible amount of knowledge, understanding,
and skill to perform. It's right up there with rocket science.
We do that because we know brain surgery is freaking hard,
it's risky, and I think it's safe to say that
the vast majority of people out there aren't too keen
to undergo a surgical procedure unless the potential benefits are
(19:41):
truly impressive, maybe life saving or life changing. Invasive methods
typically involve either implanting electrodes directly into brain matter or
using small sensor pads that essentially stick to the exterior
of the brain. Implanting electrodes comes with its own set
of challenges, and one is that it can cause
(20:02):
scarring in the brain, and if scar tissue forms near
the electrode, it can interfere with the electrode's ability to
pick up that electrical activity from neurons, so the scarring
process can prevent the electrodes from being able to do
their jobs. Another challenge is that sometimes an electrode could
shift slightly in the brain, and even a small shift
could mean the electrode would no longer be able to
(20:22):
pick up signals from the targeted neurons. There have been
some impressive advancements in getting around these challenges. Philip R.
Kennedy of Emory University, which is just down the road
from our office in Atlanta, developed a neural electrode with
a tip encased in a tiny glass cone. Neurons would
actually grow into the cone and reach the electrode. The
(20:45):
cone helped protect the electrode from scarring, and the neurons
growing into the cone helped it resist any shifting. Kennedy
worked with a few patients to test the design and
work out actual useful brain computer interactions. One of those
patients was a man named Johnny Ray, a man who
was nearly immobile and incapable of communication after a severe stroke.
(21:07):
Surgeons implanted electrodes in Ray's brain in March, and Ray
learned how to move a cursor on a screen. He
was imagining that he was moving the cursor with his
hand like he was making hand movements, or imagining that
because he didn't have that capability anymore. He later learned
to move a cursor on a screen to highlight letters,
(21:27):
and then he would click on them like with a mouse,
except he did it by twitching his shoulders, one of
the few muscle movements he could still do. When he
was asked by the media what he felt when he
moved the cursor, he spelled out the word nothing, which
doctors actually interpreted to mean that Ray no longer had
to even imagine moving his hand anymore. His brain had
(21:50):
become trained to move the cursor through thought alone without
having to have the hand as sort of an intermediate step.
And this highlights one of the biggest advantages that
the invasive methodology has over the non invasive version. Implants
have a more direct path to the neurons that they
are monitoring. They are more precise, they're more finely attuned.
(22:12):
They can pick up signals much more easily. Brown University
professor John P. Donoghue is another pioneer using electrode implants
as part of research into brain computer interfaces. His team
created a system called BrainGate, which initially had ninety
six electrodes arrayed on a small implant, and by small,
I mean it measures about four millimeters per side. It's about the
(22:34):
size of a baby aspirin, as Science Daily put it.
The stories about BrainGate are pretty inspiring. People who
have become paralyzed have undergone the surgical procedure to have
the electrode array implanted in their brains, then they have
gone through an extensive training period to learn how to
use this technology. In that training period, they learn how
(22:54):
to control some exterior technology with their thoughts. It might
be a cursor on a screen, giving them the ability
to communicate and run applications kind of like a computer mouse.
It could be a robotic limb. And on top of that,
there's been work to create systems that can replicate a
sense of touch in the user. So not only can
the person with the implant send commands to an external
(23:17):
piece of technology, they can also experience tactile feedback as
if that external tech was one of their natural limbs.
So a person outfitted with a robotic arm connected to
this type of interface could not just pick stuff up,
which is already phenomenal with a robotic limb, they could
actually feel how tightly they were holding the thing they
picked up, and that becomes really important for things like
(23:39):
fine motor skills. And this is incredible stuff. But I
would still argue that it's fairly primitive in the sense
that I think we're just at the very dawn of
being able to harness this type of technology.
We've made some incredible strides, but there's a long way
to go. Now, let's get back to the non invasive approach.
(24:01):
So a clear advantage here is that you don't have
to have any sort of surgical procedure to make use
of noninvasive technology. And an EEG can be an
example of a noninvasive approach. Right, you just have those
electrodes that you slap onto your scalp, but you don't
have to have a transcranial system. So you can
(24:22):
have EEGs that are transcranial, meaning that
it requires brain surgery and you have wires that stick
out through your cranium, through your skull. But you can
have noninvasive ones too. But even with the electrodes on
the scalp, we run into other problems, and a big
one is that the signals in our brains aren't really
that strong electrically speaking, and our skulls are fairly decent
(24:44):
at muffling those signals. Plus, if we're moving around a
lot while using the rig, it needs to
remain steady because otherwise we might end up misaligning things
and again we end up reading the wrong neurons, and
then an irrelevant brain signal could initiate a command that
we weren't intending to send. That's obviously a big challenge.
(25:07):
Now we can get a really good look at what's
going on inside the brain using noninvasive technology like an
MRI. But an MRI requires
a person to lie very still inside a very large
and very noisy machine for quite a long time, so
it's not a practical solution if you want to build
a brain computer interface for day to day use. There's
(25:30):
a lot of work going into finding a methodology to
read brain signals, either directly or indirectly through noninvasive means.
Getting a method to a point where the precision and
accuracy rivals the implanted electrodes is a non trivial challenge.
DARPA is funding a lot of research into that area. However,
it stands to reason that if the agency wants to
(25:51):
use BCI technology for defense purposes, it would be
ideal to have a version that doesn't require the user
to first undergo a surgical procedure. In May two
thousand nineteen, the agency announced it was working with six
different teams to explore non invasive BCI strategies in
what was called the Next-Generation Nonsurgical Neurotechnology,
(26:13):
or N3, program. Included in those teams are people
from Carnegie Mellon University, the Palo Alto Research Center or PARC,
and Teledyne Scientific among others, and the proposals are really interesting.
One from Battelle Memorial Institute proposes electromagnetic neurotransducers
(26:34):
that are quote non surgically delivered to neurons of interest
end quote. They will then take electrical signals from the
neurons and convert them into magnetic signals, which could then
be picked up by an external transceiver, and the neuro
transducers could also perform the same process in reverse, taking
incoming magnetic fields or magnetic fluctuations and transmitting them as
(26:58):
electric signals to neurons in the brain, so it could
be bi directional. Other methods include an acousto optical approach, which
means the team responsible plans to use ultrasonic signals to
guide light into the brain to detect neural activity.
There's a similar one, but it would use magnetic fields
rather than light, while still using ultrasonic signals to generate
(27:21):
localized electric currents in the brain. It's all really fascinating stuff,
and it also quickly gets beyond my understanding of neuroscience
and physics, so I won't spend a whole lot more
time talking about them, but they are pretty darn nifty.
In the meantime, researchers have been relying on the established
EEG technology to do a lot of the
(27:41):
groundwork for a noninvasive approach, but as I mentioned, that
has some big limitations to it, so it's just a
stepping stone, and there are other groups looking at different
ways to measure brain activity for the purposes of an interface.
Finding a method that is replicable and accurate is still a
really hard thing to do, whether it's looking specifically
at neuron activity or maybe something like keeping tabs on
(28:03):
changes in blood flow in the brain, so you're looking
at sort of an indirect indicator in those cases. At
the same time, researchers are starting to rely upon machine
learning strategies to help train the technology to determine whether
or not any particular signal is a real hit or
a false flag. So this is actually a multidisciplinary endeavor.
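To give a flavor of what that machine learning step might look like, here's a minimal sketch: summarize each EEG window as its average power in a few frequency bands, then label a new window as a real hit or a false flag by whichever class average its features sit closer to (a nearest-centroid classifier). The simulated signals and all numbers are illustrative assumptions, not from any real BCI system.

```python
import numpy as np

FS = 250  # samples per second (assumed)
BANDS = [(1, 4), (4, 8), (8, 13), (13, 30)]  # delta, theta, alpha, beta (Hz)
rng = np.random.default_rng(1)

def band_powers(window):
    """Average spectral power of the window in each frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS])

def fake_window(is_hit):
    """Simulate a window: noise, plus a strong 10 Hz rhythm for 'hit' windows."""
    t = np.arange(FS) / FS
    window = rng.standard_normal(FS)
    if is_hit:
        window += 3.0 * np.sin(2 * np.pi * 10.0 * t)
    return window

# "Training": average the features of labelled examples of each class.
hit_centroid = np.mean([band_powers(fake_window(True)) for _ in range(20)], axis=0)
rest_centroid = np.mean([band_powers(fake_window(False)) for _ in range(20)], axis=0)

def classify(window):
    features = band_powers(window)
    if np.linalg.norm(features - hit_centroid) < np.linalg.norm(features - rest_centroid):
        return "hit"
    return "false flag"
```

Real systems use far more sophisticated classifiers, but the shape of the problem is the same: turn noisy signals into features, and learn which feature patterns mean "command" versus "background."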
It's going to rely on many different technologies as well
(28:26):
as our understanding of neuroscience, which continues to grow. Okay,
so we know about the tech and we know a
bit about history. We know that it's still in fairly early
stages of development. When we come back, I'll talk about
Elon Musk, Facebook, and brain computer interfaces. But first let's
take another quick break. So in July two thousand nineteen,
(28:55):
one of the many tech stories to come out about
Elon Musk, because there's never a shortage of them, had
to do with the startup company Neuralink. Now, for some people,
this was the first they had ever heard of Musk's
interest in creating a brain computer interface, but in fact
he had been talking about this kind of thing since
at least two thousand and sixteen. At the CODE Conference
(29:17):
of two thousand and sixteen, he talked about a ton
of stuff, including a technology called neural lace. Neural lace
is a term for a mesh of electrodes that could
graft into the brain through a simple injection in the
ideal implementation, so no full brain surgery would be needed,
and ideally it would be wireless and offer the chance
(29:40):
to interact with computer systems through thought alone, which is
pretty nifty, but it's also essentially science fiction, at least
in that incarnation. Not that the idea has no merit,
but rather, we hadn't any real clue on how to
go about doing it yet. It's only a little bit
better than saying, you know, it sure would be nice
if we had teleporters. Well, yeah, it would be nice,
(30:05):
But that doesn't mean we can suddenly build teleporters just
because it would be nice to have them now. In
two thousand sixteen, Musk said he was interested in developing
this neural lace technology and if nobody else was going
to pursue it, he would do it himself, meaning he
would fund it himself. The next year, two thousand seventeen,
for those keeping score, he announced he was backing a
(30:27):
startup called Neuralink, which would attempt to bring this dream
to life. Musk said at the time that one of
the biggest challenges was around bandwidth, or how much data
can pass through an interface in a given amount of time.
I would argue that challenge is a big one,
but it's further down the road than some of the
(30:48):
more immediate challenges. So why did Musk say that? I'll
get to that. The two thousand nineteen announcement was all
about giving a few more details about the general plan
to achieve this science fiction vision. And Neuralink is working
to create flexible threads of electrodes, and each thread would
have essentially an electrode array with a potential density of
(31:09):
three thousand seventy two electrodes distributed across ninety six threads.
Now by comparison, brain Gates array had a hundred twenty
eight electrode channels in it, so this would be much
more dense. The threads themselves would only measure a few
microns in width and would be very very flexible, which
would hopefully cut down on the possibility of them shifting.
(31:33):
They would be able to move with the brain
instead of remaining still in comparison to the brain, and
Neuralink has worked on a robotic device that would automatically
embed the threads into the brain of a recipient. This
would require surgery. According to the Verge, this robotic device
looks like a cross between a microscope and a sewing
machine and it can implant up to six threads per minute. Now.
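Just to put those numbers together, here's a back-of-the-envelope check (my own arithmetic, not anything from Neuralink's materials): 3,072 electrodes across 96 threads, with a robot placing up to 6 threads per minute.

```python
# Quick arithmetic on the figures mentioned for the Neuralink design.
electrodes = 3072
threads = 96
threads_per_minute = 6

electrodes_per_thread = electrodes // threads             # 32 electrodes on each thread
minimum_insertion_minutes = threads / threads_per_minute  # 16 minutes at best
```

So even at the robot's top speed, implanting a full array would take on the order of a quarter of an hour, on top of the surgery itself.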
(31:57):
Musk stated the reason he was talking about Neuralink's work
at the time was largely as a recruiting strategy to
get more talent to apply to work on the Neuralink team,
and his end goal is not to help those who
have severe mobility and communication limitations gain some autonomy, although
they will be some of the people that would first
be exposed to this technology. Instead, it's to create a
(32:21):
bridge between humanity and AI. And this might be why
Musk was talking about that barrier, that bandwidth barrier, because
for there to be a meaningful exchange of data, you
need to be able to move a lot of information
very quickly back and forth, presumably. And Musk has made
it pretty clear that he is concerned about the possibility
(32:42):
that AI could bring about an existential crisis for humanity.
So to me, this sounds like an "if you can't beat them,
join them" type of strategy. Musk seems to say the
interface would serve as a step toward merging human and
artificial intelligence, perhaps pushing humanity into a transhuman state.
We'd no longer be human beings as we would classically
(33:05):
define the term. Now, I have to stress again that
such a future, if it is even possible, is still
a long way away. The neuralink approach has a long
way to go just for a basic functionality, and building
a meaningful interface that can bring together human and artificial
intelligence is another matter entirely. In fact, I'm not even
(33:27):
sure what such a thing would mean. Would it
mean enhancing human intelligence with AI? And if so,
how would that work? How could a computer system and
a brain work together like that, not just communicating back
and forth, but working as a cohesive unit. I'm not
really sure. I'm not sure if anybody is sure. Now
(33:48):
that's not to say it's not possible. It very well
may be possible, but it's way beyond my humble understanding. Musk's
vision is an interesting one, but it also raises
a lot of ethical questions. Now, presumably this technology will
not come cheaply, so who exactly would be able to
(34:08):
afford such a bio enhancement. So let's assume, for the
sake of argument, that Musk's vision becomes reality, and that
this technology works the way he intended it to, which
I'm still not convinced is actually possible. But let's say
it is possible and it happens. Would that mean we
would actually see a new class system, one that essentially
(34:29):
mirrors the massive divide between the most wealthy and the
poorest people of today. But more so, would we have
a very small population of elite rich and enhanced people
overseeing a massive underclass, you know, the rest of us, because
I know I don't make enough money to fall into
the cyber human tax bracket. Again, we're so far away
(34:51):
from this being a pressing matter, but it's the sort of
question we have to ask when we talk about an
amazing future. Whose future are we talking about? Because
if it's not everyone's future, I think it kind of stinks.
Speaking of stinking, let's segue over to Facebook, and that
(35:11):
might betray my opinion on this next item in our
BCI discussion. So at the two thousand nineteen F8
(pronounced "fate") conference, which is Facebook's conference for developers,
one of the many presentations was on Facebook's efforts to
fund the development of what has been called a mind
reading device. So what gives? Well, researchers at the University
(35:34):
of California at San Francisco are helming this project and
the ultimate goal, at least the ultimate short term goal,
is to create a non invasive device or method that
will allow a user to transmit words or commands to
a computer device through thought alone. And the short term
goal is to develop such a system that can handle
(35:56):
up to one hundred words per minute with a one thousand
word vocabulary, and an error rate below seventeen percent. Now
those parameters should already tell you that this goal is
a tough one. We have no way to take raw
brain data from the speech center of the brain and
figure out what a person is trying to say all
(36:16):
by itself, right? I couldn't just slap a headset onto
a person, have them think words, and know immediately what
they're saying. To get to that point, we actually have
to train a computer system to recognize certain brain patterns
that represent specific words in the speech center of the brain.
That's what the researchers have been working on. So, like
the other examples I've given, these researchers have been working
(36:39):
with volunteers who elected to have surgeons implant electrodes into
their brains. And these were volunteers who were already undergoing
surgical procedures to treat stuff like epilepsy, So it wasn't
like they just walked in off the street. They were
electing to do this in addition to other treatments they
were seeking. The subjects were then given a series of
(36:59):
multiple choice questions. Now these were questions that didn't have
a right or wrong answer, so you could get a
question like how are you feeling today, and then the
answers could include stuff like tired, happy, sad, lonely, that
kind of thing. That's just an example from my own head.
By the way, I don't know for a fact that
that was an example question from their procedure. The subjects
(37:22):
would then answer out loud. They would say what their
choice was verbally, and during the whole test, the researchers
would record the brain activity in the subject's speech center
as it was going on. Doing this over and over
would establish a sort of picture neurologically speaking of how
specific responses quote unquote looked in the brain. So when
(37:43):
you get ready to say happy, the neurons in
your brain fire in a specific kind of pattern, and
the electrodes pick that up, and
it's kind of like making a picture. So if
the computer sees a picture that looks like that one,
it might interpret that you have said the word happy.
(38:05):
After training a machine learning algorithm on the data, the researchers
tried to test the system and they would feed brain
data into the system without telling the system what the
data referred to. They would ask, all right, which question
was asked and which answer was given? So the system
tried to figure that out based upon the amount of
data it had gathered in its training process. It did
(38:28):
fairly well figuring out which question was asked, getting it
right seventy-five percent of the time, so three times out of four.
It was slightly less successful at guessing what answer
was given by the subject. It was not as good at that;
it was about a sixty-one percent success rate, but that's still pretty impressive.
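As a very rough illustration of that train-then-test loop, here's a toy sketch in Python. The words, the sixteen-number "recordings," and the nearest-template classifier are all made-up stand-ins for illustration, not the researchers' actual data or methods:

```python
# Toy version of the protocol described above: record labeled "brain
# patterns," average them into templates, then classify unlabeled ones.
import random

random.seed(0)
WORDS = ["tired", "happy", "sad", "lonely"]
DIM = 16  # pretend each recording is a 16-number activity snapshot

# Each word gets a hidden "true" pattern; recordings are noisy copies of it.
true_patterns = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in WORDS}

def record(word):
    return [x + random.gauss(0, 0.3) for x in true_patterns[word]]

# Training: average many labeled recordings into one template per word.
templates = {}
for w in WORDS:
    samples = [record(w) for _ in range(50)]
    templates[w] = [sum(col) / len(col) for col in zip(*samples)]

def classify(recording):
    # Testing: pick the word whose template is closest to the recording.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(WORDS, key=lambda w: dist(templates[w], recording))

trials = [(w, classify(record(w))) for w in WORDS for _ in range(25)]
accuracy = sum(truth == guess for truth, guess in trials) / len(trials)
print(f"accuracy on synthetic data: {accuracy:.0%}")
```

On clean synthetic data like this the accuracy comes out near perfect; real neural recordings are far noisier and the patterns overlap far more, which is why the study's numbers sit well below one hundred percent.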
It's a long way away from the stated goal of
(38:49):
the project to get that error rate down below seventeen percent.
Especially with a vocabulary of a thousand words, it's got
to get more complicated as the number of words increases,
because the more words the system has to identify, the
harder the job becomes. It has to be able
to recognize differences between each of those words to determine
which one was intended. Okay, so what does Facebook want
(39:11):
to do with this technology, assuming that they're able to
mature the technology and have it perform up to the
level that they want well, the company has said that
the goal is to create a system in which a
user can just think a command or message and send
it to a computer. So rather than look down at
your phone to dash off a quick text to your BFF,
(39:32):
you could concentrate and send that message by thought alone
to your phone, and then command it to send the message
onward without ever taking the phone out of your pocket
or out of a purse or whatever. You're just concentrating
and making it happen. Now, the skeptics among you might say, hey, Jonathan,
wouldn't you say that Facebook has a somewhat spotty reputation
(39:53):
when it comes to stuff like privacy and security? And
my response would be, you betcha. I'm sure the
company has anticipated this. Folks at Facebook have already said
that this system would only pick up words that were
in the speech center of the brain, and only words
that the system had been trained on for that matter,
and that it wouldn't pick up just random surface thoughts.
(40:14):
So you'd have to be thinking about saying the word
for it to be detected by the technology. Presumably this
makes everything okay. I'm not quite ready to sign on
to that just now, but anyway, that being said, I
would imagine for the system to work, each user would
first have to train their individual instance of that system.
(40:37):
It's sort of like the old voice recognition programs out there.
You first had to go through a fairly extensive calibration
process with voice recognition systems that had to learn your
voice in order for it to be able to respond properly.
I imagine you'd have to do something similar with a
mind reading system like this, where you'd have to actively
think about specific words in sort of a tutorial
(41:00):
in order to train the system on how your
brain lights up when you are thinking those words. To
put it another way, the neurons in my head might
light up a slightly different way when I say the
word cat than they would in your head when you
say the word cat, and each person would need to
make sure their version of this technology understood how they thought.
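That calibration idea can be sketched the same way: simulate two users whose patterns for the same words differ, and watch how templates calibrated for one user fail on the other. Everything here, the users, the patterns, and the matching rule, is a hypothetical stand-in for illustration:

```python
# Sketch of why per-user calibration matters: each simulated user's
# "brain patterns" for the same words are different random vectors.
import random

random.seed(1)
WORDS = ["cat", "dog", "yes", "no", "up", "down", "stop", "go"]
DIM = 8

def make_user():
    # Each user gets their own random pattern per word.
    return {w: [random.gauss(0, 1) for _ in range(DIM)] for w in WORDS}

def record(user, word, noise=0.2):
    return [x + random.gauss(0, noise) for x in user[word]]

def classify(templates, recording):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda w: dist(templates[w], recording))

alice, bob = make_user(), make_user()
# "Calibration": build Alice's templates from her own recordings.
alice_templates = {w: record(alice, w, noise=0.05) for w in WORDS}

def accuracy(user):
    trials = [classify(alice_templates, record(user, w)) == w
              for w in WORDS for _ in range(50)]
    return sum(trials) / len(trials)

print(f"Alice's templates on Alice: {accuracy(alice):.0%}")  # high
print(f"Alice's templates on Bob:   {accuracy(bob):.0%}")    # much lower
```

The mismatch on the second user is the whole point: until Bob goes through his own calibration pass, the system trained on Alice is close to useless for him.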
(41:21):
But that also means putting in a lot more prep
time before you can actually use the technology to dash
off an email or something. Another potential use is for
a hands free interface for technology like augmented reality glasses,
which frankly makes me even more worried. You can see
the use of such technology right away. You could wear
one of these glasses, which can overlay digital information
(41:43):
on top of your view of the world around you,
so you could stare at a building, for example, and
think what address is that, and just by thinking that
the AR headset could consult the Internet and come
back with some information and say that is this address
on this street, which is pretty useful. But let's paint
a more terrifying scenario. Facebook has an enormous amount of
(42:07):
information on millions, in fact billions of people. So let's
say you've got a pair of Facebook branded augmented reality
goggles and it's got a brain computer interface as part
of the system, so you can just think commands and
the goggles will pick up on what you are asking
them to do. And because so many people use Facebook
(42:28):
and many people have public accounts, you could walk down
the street and get quick bits of information about all
the people you were looking at. You know, you get
facial recognition software, it recognizes who the person is and
starts pulling up information on them that's publicly available. Maybe
you even figure out how to exploit the system and
get access to information beyond what was allowed for the
(42:50):
general public. This could be a massive privacy problem. Now, again,
we are a long way away from that particular type
of technology becoming reality, but the possibility is there. There's
no denying Facebook has access to a stupendous amount of
information about all of us and we don't even need
the brain computer interface for that to be a problem.
(43:11):
You could just have the A R glasses themselves with
a deep connection to Facebook's databases and a way of
interacting with it, even if it's with voice commands or
mobile app or whatever, and you can still have these problems.
It's just it seems even more insidious if you don't
have to do anything other than just stare at someone
and think it. It seems pretty spooky and creepy. And
(43:34):
it's these sort of scenarios that remind us we have
to be careful as we develop technologies to make sure
that they are applied ethically without posing harm to others.
We've got to ask ourselves what are the consequences of
this technology, both the intended and unintended consequences, and who
benefits most from it and who stands
to be victimized by it. Anyway, I think brain
(43:58):
computer interfaces really have great potential to do an enormous
amount of good, especially for people who otherwise have a
really difficult struggle just being able to interact with the
world around them and to have any sort of autonomy
at all, and to even just be able to communicate
with others. I think that that alone makes it a
(44:20):
worthy endeavor to pursue, but we do need to make
sure that we're doing it for the right reasons and
we're not just doing it because somebody is scared that
robots are going to take over the world, or a
company really wants to know what you're thinking, because the
more data the company has about you, the better it
can sell things to you or sell you to other things.
(44:44):
Keeping that in mind is very important. That's it for
this episode. If you have any suggestions for future episodes
of tech Stuff, maybe something that's happy and fun and
not nearly as terrifying and Orwellian, send me a message.
The email is tech Stuff at how stuff works dot com,
or you can pop over to our website that's tech
stuff podcast dot com. You'll find an archive of all
(45:05):
of our previous episodes on there, as well as links
to where we are on social media and a link
to our online store, where every purchase you make goes
to help the show and we greatly appreciate it, and
I will talk to you again really soon. Tech Stuff
is a production of I Heart Radio's How Stuff Works.
(45:26):
For more podcasts from I Heart Radio, visit the I
heart Radio app, Apple podcasts, or wherever you listen to
your favorite shows.