Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
What is a brain computer interface? How far along is
this field? Can we eavesdrop on the brain so that
a person who has lost the ability to move can
use their brain to control a computer cursor or a
robotic arm? Can someone who has lost the ability to
(00:26):
speak send brain signals to a decoder and hear their
voice again? Can we restore autonomy and dignity and eventually
do so so seamlessly that the technology disappears and the
person reappears? In the future, where will the ethical boundaries
(00:46):
be between restoring function and spying on private thought? And
who owns the stream of neural data that represents you?
Welcome to Inner Cosmos with me David Eagleman. I'm a
neuroscientist and author at Stanford and in these episodes we
(01:07):
sail deeply into our three pound universe to understand why
and how our lives look the way they do. This week,
(01:31):
we're talking about technology for reading the brain. Now, one
thing that I find fascinating is that ancient cultures didn't
care at all about the brain. They generally would just
throw it out at autopsy, and it's understandable why: it
just looks and feels like a huge, squishy walnut. If
(01:53):
you could sit and stare at a brain in action,
you wouldn't see anything happening. So it's taken centuries and
a lot of technology to realize that, in fact, the
brain is alive with lots of tiny cells, microscopically tiny,
and these cells are transmitting electrical signals tens or one
(02:16):
hundred times every second for each cell. And you have
eighty six billion of these cells. So this big, squishy
walnut is one of the busiest things on the planet.
But because it is so fragile, Mother Nature surrounds the
brain with an armored bunker plating, the skull, and that
(02:36):
provides a huge challenge if you want to go in
there and eavesdrop on what the cells are doing. Now,
why would you want to spy on these cells? Well,
imagine if your thoughts could exit the skull as easily
as words leave your mouth. Now, there's a sense in
(02:57):
which we always do this. We use keyboards, touch screens,
and voice assistants, but all of those are detours. They
force the brain to route its intentions through muscle, and
that's fine if your muscles work. The problem is that
lots of people, millions of our neighbors and friends don't
(03:17):
have a way to get the information out of their
brain because something about the brain or the brain's pathways
or the muscles are not working, and therefore their brain
knows what they want to do or say, but there's
no way to get that information out. And this is
where the idea of a brain computer interface comes in.
(03:40):
What you'll hear referred to as a BCI, brain computer interface.
The idea of a BCI is to listen directly to
the neural patterns that mean move or speak or select,
and then you use some device to translate those patterns
directly into activation in the outside world. Now, as I said,
(04:01):
this is a huge deal for all the people for
whom the path from intention to movement has been interrupted
by disease or injury. The intent is still alive and
well in the cortex, and BCIs are the bridge back.
They turn silent plans into text or voice or cursor
(04:22):
control or reaching and grasping. But the story will, at
least in theory, reach beyond the medical because once you
can read out the programs for say this word or
press that key, now you've built a communication channel between
biological tissue and silicon, and that opens new forms of
(04:44):
interaction that our species has barely begun to imagine. Now,
let me not get ahead of myself yet, because as
we're going to see today, we are still at the
earliest stages of this technology. But this is what we're
going to talk about at the end. Now, you can
build BCIs in lots of flavors. Some rest on the scalp,
(05:05):
Others sit on the surface of the brain. Others poke
tiny wires called electrodes into the surface of the brain
or even down deep into the brain for some purposes.
Some of these BCIs only read the electrical activity. Others
will also write with electrical patterns that the brain experiences
as touch or sound or sight. In every case, the
(05:28):
principle is the same. Brains issue commands in their very
fast and complex internal language of electrical spikes. This is
a language that we haven't nearly decoded yet, but machines
can learn to translate that language through a lot of
trial and error. Huge populations of neurons are playing some
(05:49):
symphony piece, and these decoders learn how to hear the
music and route the commands to a cursor or a
speaker or a robotic arm or whatever. Now, the issue
is that when we talk about it, it all seems
very straightforward and easy. But actually getting in there, getting
technology that can record from these microscopic little cells,
(06:12):
with their little changes in electrical potential of tens
of millivolts, making a system that lasts, and then
putting all the data together to understand what this very
tiny sampling of neurons, maybe a few hundred out of
eighty six billion, is doing, it turns out this is
a massive engineering challenge, and there are a million practical questions.
(06:38):
How reliable are these systems outside the lab? Can they
survive infection and signal drift? What about battery life? What's
the surgical risk? When does insurance cover these? So there's
a huge gap between a beautiful proof of principle and
a device that changes lives every day, and crossing that
(07:00):
gap is the real work of the field right now.
Now there's also a second issue. As soon as we
start talking about reading the brain, the questions start to surface:
what exactly are we reading? Is it intended movements? That's
one thing. Is it inner speech? Is it where you
place your attention? You can imagine situations in which there
(07:22):
are things that you don't want everyone knowing. We're used
to the skull having some sort of sanctity. So where
will the ethical boundaries be between restoring function and eavesdropping
on private thought? Who's going to own the stream of
data that is literally you? How do we guarantee consent
(07:44):
and security and dignity when the interface is not on
your desk but inside your skull. So, even in the
face of all the tough questions coming down the pike,
it's hard not to feel awe at what's already possible.
People who have been locked inside their bodies are communicating again.
They're talking with their loved ones for the first time
(08:07):
in years. And the technology keeps improving every month, smarter algorithms,
better sensors, cleaner signals, and crucially designs that move from
the hospital to the home. So today I want to
explore what that looks like and where we are in
the process and where things are going. So I sat
down with my colleague Sergei Stavisky. Sergei is at the
(08:29):
UC Davis Neuroprosthetics Lab, which he co-directs with neurosurgeon
David Brandman. With their collaborators, they work on BCIs that
restore communication and they're pushing towards systems that are fast
and expressive and practical for everyday life. So here's my
interview with Sergei Stavisky.
Speaker 2 (08:53):
A brain computer interface is a device that interacts between
technology and a brain. You have the brain, you have
some way of getting information in or out, and you
have some computation that's happening. And that computation it could
be happening inside the body, so it could be a
chip that does everything in the brain, or it could
be sending that information to a laptop next to the person,
(09:15):
or even to the cloud for more computation.
Speaker 1 (09:18):
Now, one of your interests is that you know, over
a century ago people figured out you could dunk an
electrode into the brain, a thin wire, and because cells
are communicating with little electrical signals, you can eavesdrop
on that and you can also stimulate the cell to
do whatever. So tell us about the history of this,
(09:41):
how people have thought about, let's eavesdrop on the brain
and turn that into something.
Speaker 2 (09:45):
So starting in the sixties and seventies and eighties, especially
working in animal models, people realized, yeah, you can put
electrodes into the brain, and you can get up close
next to an individual brain cell a neuron, and when
that neuron's firing, it generates a big electric field, a
tiny electric field, but big relative to the electrode right
next to it, And so.
Speaker 3 (10:05):
We know that that neuron is firing.
Speaker 2 (10:06):
And then there were whole decades of systems neuroscience
which was relating those patterns of activity to what typically
the animal was doing. So a classic example from the
eighties would be a monkey is moving his arm up
or down, or left or right, and you can see
that maybe a neuron fires more when the arm is
moving to the left, and say, okay, that neuron has
(10:28):
a leftward preferred direction. We're starting to build some
mental map of how that brain activity relates to movements.
Of course, it's much more complicated, and the whole field
of neuroscience is trying to understand how individual neurons and
hundreds of neurons and whole large assemblies of neurons generate behavior.
Starting around the two thousands, the field had felt that
(10:50):
we had enough of a rudimentary understanding of how movement
is encoded in the brain that this could be used
for a medical application.
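The "preferred direction" finding Sergei describes is often summarized with a cosine tuning model: a motor cortex neuron fires most for movements toward one direction and least for the opposite direction. As a rough illustration of that idea, here is a minimal sketch in Python; the numbers are synthetic and invented for this example, not data from any study.

```python
import numpy as np

def cosine_tuning(theta, baseline=20.0, modulation=15.0, preferred=np.pi):
    """Predicted firing rate (spikes/s) for movement direction theta (radians).
    Classic cosine-tuning picture: the neuron fires most when the arm moves
    in its preferred direction and least in the opposite direction."""
    return baseline + modulation * np.cos(theta - preferred)

def estimate_preferred_direction(thetas, rates):
    """Fit rate ~ b0 + bc*cos(theta) + bs*sin(theta); the fitted curve peaks
    at atan2(bs, bc), which is the estimated preferred direction."""
    X = np.column_stack([np.ones_like(thetas), np.cos(thetas), np.sin(thetas)])
    b0, bc, bs = np.linalg.lstsq(X, rates, rcond=None)[0]
    return np.arctan2(bs, bc)

# Synthetic "leftward-preferring" neuron (preferred direction = 180 degrees).
rng = np.random.default_rng(0)
thetas = rng.uniform(0, 2 * np.pi, 200)                          # tested directions
rates = cosine_tuning(thetas) + rng.normal(0, 2.0, thetas.size)  # noisy firing rates
est = np.degrees(estimate_preferred_direction(thetas, rates)) % 360
print(f"estimated preferred direction: {est:.1f} degrees")       # close to 180
```

Fitting many such neurons, each with its own preferred direction, and then running the relationship in reverse, from population firing rates back to intended direction, is essentially what the early arm-movement decoders did.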
Speaker 3 (10:59):
And kind of in my world.
Speaker 2 (11:01):
That's been focused on restoring movement to people with paralysis.
Speaker 3 (11:04):
So in two.
Speaker 2 (11:05):
thousand and four there was a big landmark event: that
was the original BrainGate trial. This was
led by John Donoghue and Leigh Hochberg at Brown University
and Massachusetts General Hospital. They put in what was called a
multielectrode array, so instead of a single wire like you
mentioned in the beginning, now imagine a hundred of those
little wires kind of all stacked together, recording from
about one hundred neurons. And they showed that these arrays
(11:29):
could be put in a person with paralysis, and even
though that person hadn't moved in a decade. I think
the first guy was a young man in his twenties
who had been paralyzed from the neck down due to
a knife wound from like a bar fight. So he
hadn't moved in many, many years. But they put that
electrode array in the motor cortex, the part of the
brain that normally sends commands to the arm, and when
(11:52):
he tried to move his arm, lo and behold, those
neurons fired away. And so kind of the main risk
had been solved, which is would the brain even still
try to generate movements because you might think, well, use
it or lose it. Right, the person's paralyzed, why would
their brain still generate movement commands? Fortunately it still does,
and people were able to decode those signals.
Speaker 1 (12:14):
And just as a quick reminder to everybody, the brain
is saying, okay, I want you to make these movements,
and then those shoot down the spinal cord and
out to the peripheral nervous system and move the muscles.
And so in this case you're hearing the original command,
but there's some break in the roadway plunging down the
spinal cord and out such that the body never gets
(12:36):
the signals correctly exactly.
Speaker 2 (12:37):
We're bypassing the injury. We're going to the source. So
where's the command coming from?
Speaker 1 (12:41):
So this was back in two thousand and four, what
was his name, Matt Nagle. So researchers were able
to listen to what the neurons were intending, and then
the field has really taken off since then in the
past two decades. For example, with motor movement, originally it
was just on a computer screen you could move a
cursor around. Nowadays people are thinking about Hey, could you
(13:03):
actually use an exoskeleton to move the arm physically?
Speaker 3 (13:07):
Yeah, or even stimulate those paralyzed muscles.
Speaker 2 (13:09):
So there's these functional electrical stimulation systems or epidural spinal stimulation,
both for walking and for the arm. So you can
really close the loop. You can decode what movement the
person's trying to make.
Oh, they're trying to move their arm forward to grab something,
and then you can have that move a robotic arm.
You could have that move an exoskeleton, or if they
also have a stimulator that's implanted under the skin with
wires going to the muscles or going outside of the spine,
you can stimulate the body and actually have the person's
own formerly paralyzed muscles make that movement. It's not at
(13:44):
the level that you or I, let's say a healthy person,
is moving their arm, but it does work. There's been
some really amazing studies in the last decade doing that.
Speaker 1 (13:51):
Yeah, exactly right, Okay, great, So that's how people have
been using brain computer interfaces to move a paralyzed body. Now,
something that several groups have gotten interested in in recent
years is what if somebody can't speak anymore? So, what
are the reasons, first of all, that somebody can't speak?
Speaker 2 (14:08):
So one common one is neurodegenerative diseases like ALS. So
ALS is a terrible disease, amyotrophic lateral sclerosis, and
right now there's no cure. We can't stop it with
a drug or other therapy.
Speaker 1 (14:21):
Also known as Lou Gehrig's disease.
Speaker 2 (14:22):
That's right, yeah, and almost everyone who has ALS will
gradually lose the ability to move their body. But also
that means what we call the speech articulators, so their lips,
their jaw, their tongue, their diaphragm, and so their speech
becomes harder and harder to understand, and eventually you wind
up what's called locked in, so really not able to
move at all. And of course this is a terrible situation.
(14:45):
And if there were a way to restore the ability
to communicate, so like before, but now decoding not the
arm movements or the leg movements they're trying to make,
but what are the words they're trying to say, or
what are the movements of those articulators that they're trying
to make, what sounds are they trying to produce? Then we
can have this person communicate again and talk again through
(15:05):
a computer.
Speaker 1 (15:06):
If you want to figure out what somebody is trying
to say, where do you put the electrodes?
Speaker 3 (15:11):
Yeah, and that is the big question. So there are
a lot of ideas.
Speaker 2 (15:14):
One idea would be Broca's area, which was thought
to plan speech. Another idea would be the motor cortex,
which would be kind of the last step from planning to command generation.
So the part of the brain that's really sending signals
to the muscles. And then there's a wide part of
the brain that's called the language network.
Speaker 3 (15:34):
So this is the temporal lobe.
Speaker 2 (15:36):
It's canonically thought of for perceiving language, but also heavily
involved in producing language. So there are a lot of
possible choices. One of the challenges for developing a speech
neuroprosthesis is there's no animal model. So when
the field was trying to have people walk again or
people move their arms again, we had a huge head
start because you could say, okay, how can you decode
(15:58):
the walking or the arm movement of a rat or
a monkey or another animal. Well, animals don't talk, they
don't have language, so we don't have that kind of
guidance for us, and what we do have are less
precise measurements from other humans. A lot of the really
important work from the last decade or twenty years was
(16:19):
done with electrocorticography. So people with epilepsy often will have
electrodes put under their skull, typically on top of their
brain or even in their brain, for the neurologists
to identify.
Speaker 3 (16:30):
Where the seizure is coming from.
Speaker 2 (16:32):
But these people are then in the hospital for a
couple of weeks, and this is a gold mine for
human neuroscience. A lot of what we know about direct
brain recordings and how they relate to human specific behaviors,
whether that's speaking or language, or imagination or memory.
Speaker 3 (16:46):
Or mood, all of these things.
Speaker 2 (16:48):
A lot of that comes from this sort of opportunistic
recording: people who are in the hospital anyway, they're
kind of bored, they're waiting for the neurologists to have
enough data, and so it's very easy to ask them, hey, do.
Speaker 3 (16:58):
You want to read a sentence off a screen.
Speaker 2 (17:00):
So from that we already knew that this sensorimotor cortex,
so the motor and the sensory cortex, was a prime area,
and in our BrainGate clinical trial, that's where we
ended up putting electrodes, so in the motor part, basically
the part of the brain that would typically send commands
to the muscles.
Speaker 1 (17:18):
Great, so it's essentially like the last train station before
it plunges down towards the muscles. Okay, so you're eavesdropping
there and you're sticking in these little electrode arrays, these
little square jobs where they have sixty four electrodes on
each one, and four of those.
Speaker 2 (17:35):
We used four of them, so yeah, four all along
this precentral gyrus.
Speaker 1 (17:40):
So you're listening to these neurons and you're trying to
decode what the person is intending to say from that.
And one question, were you worried at the beginning that
that wouldn't be enough data or did you feel like, look,
with two hundred fifty six electrodes, we can figure out
what's going on in terms of what was trying to
(18:02):
be articulated.
Speaker 2 (18:03):
When I started the project, I was pretty worried. So
kind of the prior work is we had shown that
with about one hundred electrodes in a different part of
the brain, the hand part of motor cortex, we could
decode speech, but very poorly. There I was classifying between
the thirty nine phonemes in American English, if I recall
about thirty three percent accuracy. So that's way better than chance.
(18:25):
It showed there's information, but that is not good enough
to understand.
Speaker 3 (18:28):
What someone's saying.
Speaker 1 (18:29):
Tell us what a phoneme is.
Speaker 3 (18:31):
A phoneme is a building block of speech.
Speaker 2 (18:33):
So I think most people are familiar with the syllables,
think of a phoneme as a little bit smaller than that.
So guh, ooh, ee. Right, there's consonants, there's vowels. Different
languages have different phonemes, but in English, depending on the
dialect or accent, there are between thirty nine and forty one. These are
the typical ways we break down English.
Speaker 1 (18:54):
Got it. So you're recording from these neurons, and you were saying,
can I figure out what phoneme the person is trying to
say right now and right now just from looking at
this array of neural activity?
Speaker 3 (19:04):
That's exactly right.
Speaker 2 (19:05):
And a little bit before that, my colleagues at Stanford,
and that was also the lab where I did my
postdoctoral training, and so I started that project and then
moved on. They had implanted one hundred and twenty eight
electrodes in the motor cortex of a woman with ALS,
and with that they were able to decode what words
(19:26):
she was saying with about seventy five percent accuracy with
a large vocabulary of one hundred and twenty five thousand words.
So that was a really really exciting moment for the
field because that was really banging at the door of
making this useful for general communication. Now, three out of
four words correct is amazing. It was way better than
anything that had ever been done before. But you can't have
(19:48):
a conversation that way. It's just too frustrating. There's too
many mistakes.
Speaker 1 (19:52):
And so give us a sense of
the type of mistake. So the person is intending to
say the word brain, but the neural activity is decoded
by the computer, and the computer says, oh, he's trying
to say panda bear or whatever.
Speaker 3 (20:05):
Well, it could be panda bear, but it's more likely...
Speaker 1 (20:07):
So the...
Speaker 2 (20:11):
Way that these systems work is well, one way they work.
The way our systems work is we're decoding from neural
activity to phonemes and then those phonemes get assembled into
words using a dictionary.
Speaker 3 (20:22):
And a language model.
Speaker 2 (20:23):
And in fact, if you look at a dictionary, there's
that phonetic spelling which most people don't use but if
you want to figure out how to actually pronounce a word.
Speaker 3 (20:30):
You can look at that.
Speaker 2 (20:31):
So the types of mistakes it would more likely make
would be similar sounding words.
Speaker 3 (20:36):
So if someone's trying to say brain, maybe they'd get barn.
Speaker 1 (20:40):
Yeah.
Speaker 2 (20:40):
And in some contexts you can understand, oh, I hurt
my barn, I think maybe, you know, you got in
an accident, you hurt your brain. But if there's enough
of those, it just kind of breaks down. And the
analogy I'd give is when you're typing on your smartphone.
Most of us are a little bit clumsy. We make
a lot of typos. The autocorrect can help up to
a point, but there's this sort of steep cliff where
(21:03):
if we're making too many typos, the autocorrect, so the
language model cannot keep up, and all of a sudden
you just get gibberish coming out.
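To make the phoneme-to-word pipeline Sergei describes a bit more concrete, here is a toy sketch of the general idea: decoded phonemes are matched against a pronunciation dictionary, and a language-model prior helps pick the right word when the phonemes come out noisy. The mini-lexicon, the bigram probabilities, and the weighting below are invented purely for illustration; this is not the actual BrainGate decoder, which uses far larger vocabularies and neural language models.

```python
from difflib import SequenceMatcher

# Toy pronunciation dictionary (ARPAbet-style phonemes), invented for illustration.
LEXICON = {
    "brain": ["B", "R", "EY", "N"],
    "barn":  ["B", "AA", "R", "N"],
    "rain":  ["R", "EY", "N"],
    "cat":   ["K", "AE", "T"],
}

# Toy "language model": how plausible each word is after the previous word.
BIGRAM_PRIOR = {
    ("my", "brain"): 0.30, ("my", "barn"): 0.05,
    ("my", "rain"):  0.01, ("my", "cat"):  0.20,
}

def phoneme_match(decoded, pronunciation):
    """Rough similarity (0..1) between decoded phonemes and a dictionary entry."""
    return SequenceMatcher(None, decoded, pronunciation).ratio()

def decode_word(decoded_phonemes, previous_word):
    """Score every dictionary word by phoneme evidence plus the language-model
    prior and return the best one. The weighting is arbitrary for the sketch."""
    def score(word):
        acoustic = phoneme_match(decoded_phonemes, LEXICON[word])
        prior = BIGRAM_PRIOR.get((previous_word, word), 0.001)
        return acoustic + 0.5 * prior
    return max(LEXICON, key=score)

# One phoneme came out wrong ("EY" decoded as "AA"); context is "... my ___".
# "brain" and "barn" match the phonemes equally well, so the prior decides.
print(decode_word(["B", "AA", "EY", "N"], previous_word="my"))  # -> brain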
Speaker 3 (21:10):
So that's kind of where things were.
Speaker 2 (21:13):
It wasn't gibberish, right, that's overstating it, but
it was not there for day to day communication.
Speaker 1 (21:33):
So you worked with a man who is forty five
years old, if I'm remembering correctly, and he had ALS
and hadn't articulated in about five years. Is that right?
Speaker 2 (21:43):
Yes, he was severely dysarthric, meaning most people couldn't understand him,
and he volunteered for this BrainGate2 clinical trial
that we are one of four sites of, which meant
that after a bunch of tests and imaging scans and
other things, once we determined that it was a good
fit and it was safe to move forward, he had
(22:04):
this surgery where doctor Brandman, my collaborator, put these four
multielectrode arrays into his speech motor cortex.
Speaker 3 (22:12):
We waited a couple of weeks.
Speaker 2 (22:13):
For everything to heal up, and then we went to
his house where all of our equipment was already pre staged.
We literally plugged him in. So this system is wired,
so it's not wireless yet. And the way we started
it was we needed what's called training data in the
machine learning sense, so we needed the algorithms to see
a bunch of examples of him trying to say words,
(22:35):
and then what the neural activity looked like, and what
this actually looked like in the room was picture a
person in a wheelchair looking at a computer screen. We
put up what seemed like random sentences. The text would appear,
it would turn green, he would try to speak, and
then he would stop. And we just did this for
about thirty minutes. And one of the big questions at
the time was how much data do you need to
make this work? And the conventional wisdom was
(22:58):
that it would take a lot of data. Previous studies
had waited many, many weeks before they tried to decode
what someone was trying to say. The AI fields that
we were borrowing tools from, for example, automated dictation when
you talk to your smartphone, those models are trained with
millions of hours, so huge scraped data sets, to get
(23:20):
them to be able to understand speech. But it turned
out that because we had these electrodes in the part
of the brain that's controlling speech movements, it
has what's called a very high signal to noise ratio.
There's a really clear signal about what movements the body's
trying to make and thus what sounds it's trying to produce.
And so after just thirty minutes of him reading these sentences,
(23:42):
we were looking at our little dashboard on the side
on our computers and it was showing us what we
call the word error rate, or the phoneme error rate,
so how many words or phonemes were being incorrectly decoded.
And we saw that that was at the point where
we thought, okay, this thing can actually work, and so
we said, okay, now we're gonna do something very special.
We're gonna kind of flip the switch, so to speak, and
now as you try to speak, you're going to see
(24:03):
words hopefully appearing at the bottom of the screen. And
we have a cool video of this, and so everyone's
kind of holding their breath and very excited, and the
prompt appeared, and he tries to speak, and the first
two words appeared correctly, and actually, at that point everyone
broke out in tears and laughter and clapping.
Speaker 3 (24:22):
We actually paused.
Speaker 2 (24:23):
For a few minutes and hugs, and his family was
there to watch it, in a really amazing moment, and
then we said, all right, let's get back to work,
and we kept going. And on that day we had
set a relatively modest goal. So we were using what's
called a fifty word vocabulary, meaning the sentences he could
say with it were restricted to fifty words, and you
can still say a few things, and that's obviously not
(24:46):
pragmatically useful, but that was just to get going.
We had less than a one percent error rate using
this fifty word vocabulary, so almost every word was correct.
Speaker 3 (24:56):
That was huge.
Speaker 2 (24:56):
So we'd already established that, like some previous clinical trial participants,
his brain was still active when he was trying to speak.
So good, all right, that was one of
the bigger risks: were we getting good neural signals
from these electrode arrays? Yes, we were getting beautiful neural signals,
in fact, some of the best I've seen in my career.
And then did we need a ton of data? And
(25:17):
the answer was no, we were getting enough that we
could train these machine learning algorithms to map the neural
activity patterns to the words, okay.
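The word error rate dashboard Sergei mentions is the standard speech-recognition metric: align the decoded words against the reference sentence and count substitutions, insertions, and deletions. A minimal version of that computation, as an illustration rather than the lab's own code, looks like this:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / number of reference words,
    computed with a standard Levenshtein alignment over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words and first j decoded words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # delete everything
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insert everything
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + sub) # substitution or exact match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("i hurt my brain", "i hurt my barn"))   # 0.25 -> 25% WER
print(word_error_rate("i hurt my brain", "i hurt my brain"))  # 0.0  -> perfect
```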
Speaker 1 (25:24):
And for the listeners, I'm going to link the video
which shows when the family started to cry and so
I found that very moving. And so how long will
these electrodes last? And will you be able to get good
signal out of this?
Speaker 2 (25:40):
For Casey, that is a key question, and the answer
is we just don't know. So at this point he has
had this for about two years. We just had a
preprint a few months ago showing that out past six
hundred and fifty days the system is still going strong.
So this is huge because there was always some concern
(26:01):
that maybe these electrodes would stop recording neurons after a
few months or.
Speaker 1 (26:06):
And why? Is it because of scar tissue building up around
the electrode?
Speaker 2 (26:09):
There are a lot of potential factors. So yeah, whenever
you have a foreign body in the brain, the body
does not want that thing there, so scar
tissue can form. It can be at the microscale, just around
the electrode tip, which makes it harder to record individual neurons.
Think of it like you're moving further
away from someone you're listening to, or there's padding between
(26:31):
you and them. It kind of muffles the signal.
It could be at a more of a macro scale
where it can actually pull the electrodes out of the brain,
and that's happened in some other studies.
Speaker 1 (26:40):
The way that your skin pushes a splinter out.
Speaker 2 (26:42):
Yeah, I think that's a good analogy. So that's on
the biological response. Also, these are electrodes, so the materials
can fail, the insulation can fail over time, the metal
can get kind of chipped or eaten away,
the wires could disconnect, and there's a lot of failure modes,
but in this case, the track record so far is really, really encouraging.
(27:05):
So two years out, it's working great. The accuracy has
actually gotten better, and in our preprint it's now ninety nine
percent accurate, both because we have more data and we've
had more time to just improve the algorithms and keep
trying new things. And he is now using this as
his primary means of communication.
Speaker 1 (27:20):
And so a couple of things. One is, when you
decode the neural activity, you could just print that as
words on the screen, but you guys went a step further.
Speaker 2 (27:28):
Yeah, So in our first few months, what we did
is called text to speech, So the words would appear
as text on the screen initially, and then when a
whole utterance, so a sentence or it could be
a whole paragraph, he would use his eyes to look
at a button on the screen and basically there's a
done button, and after he hits the done button, the
computer will read out loud what he said, and we
(27:51):
basically made a deep fake of his voice, so it
sounds a lot like he did before he got ALS.
It's not perfect, but it really does sound quite a
lot like him. Technology has progressed a lot, even in
the last couple of years. Most of the time people
worry about all the ill uses of faking someone's voice,
but this is maybe one of the few cases where
it's actually a really wonderful thing.
Speaker 1 (28:12):
So you got his voice from videos when he was younger,
before the ALS had set in.
Speaker 2 (28:17):
Yeah, we asked him and his family and they provided
us a bunch of things. And actually he had done
a podcast before, so we had really good material.
Speaker 1 (28:25):
So when he thinks of a sentence, the neural activity's decoded,
the sentence gets reconstructed, and then you turn it into
his voice. Yes, now that's what you showed in twenty
twenty four, and you just recently had a paper five
months ago or so. Tell us about that.
Speaker 2 (28:42):
Yeah, So everything before, even though it could be said
out loud, ultimately the information is in the form of text.
And I think we can all appreciate that a lot
gets lost just through text.
Speaker 3 (28:55):
There's no intonation.
Speaker 2 (28:57):
You can't indicate that maybe you're being sarcastic. It's less expressive. Right,
There's a lot of rich nuance that we all convey
in our voice, and through text that's lost, and the
other problem is the latency or the immediacy. So if
I was talking to you and I could only write,
it would be very easy for you to accidentally interrupt me,
(29:18):
or for me to just not be able
to get a word in, because by the time I've
finished a sentence and selected a button to speak it
out loud, maybe you've already moved on to the next topic.
Maybe if there's other people in the room, they're talking right. So,
for all of these reasons, we really wanted to do
not what we call brain to text, but what we
call brain to voice, and that means go immediately from
(29:39):
neuroactivity to sound. This is a hard problem for a
lot of reasons, one of which is it has to
be super fast. You want sound to happen within
about thirty milliseconds. That's kind of matching the natural latency
of brain to moving the muscles to vibrating air that
someone can hear. And so because of that, first of all,
(30:00):
we had to decode these neural signals very quickly. It
limits the kind of algorithms we can use. We have
less data to work with. Right, you can't look into
the future, there's no autocorrect. You can't look at the
entire sentence to figure out based on context, like, Oh,
I reached down to pet the cot. No, you probably
meant cat, because you don't usually pet a cot. You
(30:21):
can't do that if you're doing brain to voice. As
soon as you try to say 'I reached', you need to
have the sound 'I' right away. It just has to
flow constantly. But we were able to, through a bunch
of complicated engineering work, get really far in there. And
where the state of the art in that paper that
you're referring to is that it is very immediate. So
(30:43):
the latency is under thirty milliseconds, and it's mostly intelligible,
but not consistently intelligible. So about fifty six percent of
words could be understood by someone. It's a big step forward,
but it's not good enough for daily use. Right. I
already said earlier that three out of four words is
not good enough. So you know, one out of two
words is definitely not good enough.
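The engineering constraint Sergei describes for brain to voice is that the decoder must be causal and fast: each incoming frame of neural features has to be turned into sound within roughly thirty milliseconds, using only past context, with no sentence-level lookahead or autocorrect. The following is only a schematic of that streaming loop, with placeholder decoding and synthesis steps standing in for the real models:

```python
import time

FRAME_MS = 20           # neural features arrive in short, fixed-length frames
LATENCY_BUDGET_MS = 30  # target: audio out within ~30 ms of each neural frame

def decode_frame(neural_frame, history):
    """Placeholder for the causal decoder: map one frame of neural features to
    acoustic parameters. It may use past frames (history) but never future ones."""
    return {"pitch_hz": 120.0, "energy": sum(neural_frame) / len(neural_frame)}

def synthesize(acoustics):
    """Placeholder vocoder: turn acoustic parameters into one short audio chunk."""
    return bytes(320)   # 20 ms of silence stands in for real audio samples

def streaming_brain_to_voice(neural_frames, play):
    history = []
    for frame in neural_frames:
        start = time.perf_counter()
        audio = synthesize(decode_frame(frame, history))   # no lookahead possible
        history.append(frame)
        play(audio)                                        # sound goes out now
        elapsed_ms = (time.perf_counter() - start) * 1000
        assert elapsed_ms < LATENCY_BUDGET_MS, "decoder too slow for real-time voice"

# Toy run: 50 frames of 256 channels of fake neural features, audio discarded.
fake_frames = [[0.0] * 256 for _ in range(50)]
streaming_brain_to_voice(fake_frames, play=lambda chunk: None)
print("streamed", len(fake_frames) * FRAME_MS, "ms of speech within the latency budget")
```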
Speaker 1 (31:05):
So when there's a mistake, what kind of mistake is it?
Is it barn for brain and therefore sort of intelligible,
or is it is it worse than that?
Speaker 2 (31:13):
Yeah, it tends to sound like slurred speech, or maybe
like if someone's mumbling, so sometimes you can get the
gist of it. The length tends to be the same
because it's still capturing what we call the envelope of speech.
So if you're saying a short word or a long word,
that comes through very clearly, but maybe some of
the phonemes are a little garbled, and so you can't
(31:33):
tell exactly what's being said.
Speaker 1 (31:35):
Got it, Because each phoneme that the brain is encoding for,
you're translating that right away. Thirty milliseconds later that's
coming out of the speaker.
Speaker 2 (31:44):
Yeah, we just don't have enough signal to noise ratio.
We don't have enough precision. So it's like if you
have a really bad digital camera, really grainy camera, and
you're trying to parse the scene. You know, sometimes you
can see what's going on, and other times you just
can't quite make it out. You know, is that a
person or a ball?
Speaker 3 (32:01):
Is that?
Speaker 2 (32:02):
You know? What does that word say? If it's really grainy,
you just can't see so well. And although we have
two hundred and fifty six electrodes, which sounds like a lot,
the brain has almost one hundred billion neurons. There's probably
multiple billions that are involved in just speech and language.
So in some ways it's a miracle that it works at all,
that we're sampling from such a small number of neurons
(32:23):
and able to reconstruct the sounds that the person's trying
to make.
Speaker 1 (32:27):
And if I'm remembering in that paper, you also showed
sort of short singing.
Speaker 2 (32:33):
Yeah, So we wanted to demonstrate that this approach could
do more than just transmit the words, because we kind
of already had that with brain to text. Now it
could do it immediately, so that solves that interruption or
being heard right away problem. But we wanted to provide
a proof of concept that this could also be expressive,
so we had a couple experiments that did that. In
(32:54):
one of them, he was asked to say sentences as
either a question or a statement. And in English, when
we ask a question, we increase the pitch at
the end. So he was able to do that. We
had him emphasize specific words, and you know, you use
that to change the meaning of what you're saying. So
this is a classic sentence from a different study that you
can say in seven different ways, which is I never
(33:14):
said she stole my money. Now I can say I
never said she stole my money. I never said she
stole my money. Right, I'm slightly changing the connotation depending
on which word I'm stressing. And so we had a
task where he said that sentence emphasizing all the different
words and lo and behold.
Speaker 1 (33:30):
Yes.
Speaker 2 (33:31):
From the neuroactivity, we could identify which word he was stressing.
And so then we had another task where we would
give him a sentence and we would capitalize a word
and he was supposed to emphasize that. And then the
last one is what you were referring to is we
call a simple singing task. So it was only three notes,
but basically he could say whatever he wanted to say,
but at three different pitch levels, so you could say,
(33:52):
you know, like bah bah bah or like you know,
la la da. So that task he was able to
do quite well. He's not going to be singing in
the opera yet, but it shows the path forward and
where our lab and many others are working now is
how do we build on this? So does that mean
(34:12):
better algorithms? There's always new innovations in the artificial intelligence
world and just neuroscience making sense of these signals.
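The emphasis experiment he describes, identifying which word in "I never said she stole my money" was stressed, is at heart a small classification problem: per-word neural features such as binned firing rates go into a classifier that predicts the stressed position. Here is a toy sketch on purely synthetic data, assuming NumPy and scikit-learn are available; it is not the analysis from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_words, n_channels = 140, 7, 256    # 7 candidate stressed words

# Synthetic stand-in for neural features: one firing-rate value per word position
# per electrode channel. A fake "emphasis signature" bumps activity on the
# stressed word so there is something to decode.
X = rng.normal(0.0, 1.0, size=(n_trials, n_words, n_channels))
y = rng.integers(0, n_words, size=n_trials)    # which word was emphasized
for trial, stressed in enumerate(y):
    X[trial, stressed] += 0.8

# Flatten (word x channel) features into one vector per trial and classify.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5).mean()
print(f"decoded stressed word with accuracy {accuracy:.2f} (chance = {1 / n_words:.2f})")
```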
Speaker 3 (34:20):
Does that mean putting more electrodes in?
Speaker 2 (34:22):
Certainly that's of interest, and there's a lot of really
exciting work happening in there. Does that mean maybe putting
electrodes in additional parts of the brain, so kind of
at a simplistic level, people think of left versus right
brain as having some differences with maybe more of these
what are called paralinguistic elements of voice encoded more on
the right side of the brain. That's something we'd like
(34:44):
to find out and we hope to in the future,
or do we need to put it in other parts
of the speech network.
Speaker 1 (34:50):
By the way, just to flesh that out for listeners.
You know, on the left side of the brain, you've
got a lot involved with language. When people get damage there,
they let's say, lose the ability to articulate, to produce sentences,
to understand sentences. But when people get damage in equivalent
areas mirror images on the right side, they can get
(35:10):
what's called amusia, which is the inability to understand
music anymore. Because as you say, that's where intonation, the
prosody of language seems to be encoded. So good, this
is a good segue into the future, then, which is
first of all, I'm curious what you think is the
answer you just posed. Is it getting better electrodes, more electrodes,
(35:31):
is it getting better algorithms? Is there a limitation in
the signal to noise ratio? Where's the lowest hanging fruit
for getting improvements? Here?
Speaker 3 (35:41):
Can I go with d all of the above? I
think we do need all of these things.
Speaker 2 (35:46):
So already we are seeing with our data and this
current participant that with the same electrodes, we are able
to squeeze more information out with better algorithms and just
better understanding what the brain is doing. And there's a
lot going on there. It's not just the movements. We're
seeing things like neural error signals. We're seeing prosody and
(36:07):
intonation encoded. Right. All of these things are kind of
mixed together in these brain signals we're measuring, and there's
a lot of science that goes into disentangling them and
figuring out what they mean. What are you trying to
pay attention to for a given application? So that's all moving forward,
and so we're just learning a ton about how the
human brain produces speech because we didn't have this opportunity
(36:28):
at this precision before. There's now only a handful of
humans in the whole world that have had electrodes that
measure individual neurons as they try to speak. So we're
learning a lot, but certainly more electrodes is better, So
in our trial as we move forward, we intend to
put more electrodes in. There are now multiple companies that
are building fully implanted intracortical electrodes, so similar type of
(36:49):
electrodes that go right up to the neurons, but they
all have a thousand or more electrodes or recording sites.
So we're talking about at least a four x if
not more improvement in the density or the count of electrodes.
And I think that's going to make everything work just
so much better.
Speaker 1 (37:06):
And of course companies are working on making this wireless
as well, Neuralink being I guess the first one to
do it, but other companies moving that way as well,
so that you could have something that's fully packaged and
a person can just speak with no wires hanging out.
Speaker 3 (37:23):
Yeah, that is very important.
Speaker 2 (37:25):
So the wired systems we have now, they are what
is available. They're good for research, they're in some ways simpler.
They've been shown to be safe for quite a long time,
but they're limiting. Right, fully implanted is the way to go,
and we can look at other medical devices. So there's
these wild photos of pacemakers in the fifties and it
(37:47):
was basically like a car battery on a cart with
some amplifiers and kind of primitive. They're not computers,
they're electronics, and then there's a wire going to someone's chest.
Speaker 3 (37:57):
It kept them alive and it showed that this worked.
Speaker 2 (38:00):
But of course today millions and millions of people are
walking around very healthy with pacemakers that are small and
they're packaged in titanium or other very inert, safe materials.
Speaker 3 (38:11):
They have batteries.
Speaker 2 (38:12):
Some of them now can be wirelessly recharged. So I
think this is a well trodden path and we're going
to absolutely see this with brain computer interfaces. They're going
to be fully implanted, they're going to be wireless. Data
is going to come out through radio or lasers or
other means to get data out of the brain, and
power is going to go in and it's going to
be great. Great.
Speaker 1 (38:32):
Now, Okay, let me ask you this. A lot of
people are very familiar with Neuralink. They've heard about it.
Even though as I mentioned, this idea of recording from
brains has been happening for a very long time now.
Speaker 1 (38:41):
What Neuralink is doing is implanting very tiny electrodes robotically,
and it's fully implantable, and so that's part of why
it's famous. But also part of why it's famous
is because it's Elon and there's this mystique about it,
the sort of idea that everyone will someday get a neuralink.
Now I have my doubts because it's an open head
(39:03):
surgery still, even though it's with the robot. But let's
look towards the future in terms of what use would
it be to have a brain computer interface for somebody
without a problem speaking or moving.
Speaker 2 (39:17):
Yeah, I don't think that application, the killer app so
to speak, has been discovered yet.
Speaker 3 (39:23):
You know, there's times where I'm lying.
Speaker 2 (39:25):
In bed and I kind of wish I could send
a text message without having to reach for my phone.
But I'm not going to get a brain surgery to
do that. I'm going to just reach for my phone.
So what I think we're going to see is a
widening of the medical applications. So I think there's gonna
be many, many more medical needs that can be addressed
with brain technology, whether stroke, things like sustaining memory in
(39:48):
the longer term, or dealing with age related decline or
even Alzheimer's. So there's going to be different types of
BCIs for different problems. But in terms of fully implanted,
kind of invasive BCIs for really healthy people, no one
has yet shown a benefit that I think is worthwhile. Now,
(40:09):
could I imagine it? Certainly one could imagine it. So,
you know, if you could have a device in your brain,
let's say it would allow you to feel more alert
or to sleep less, right, so kind of modulating some
circadian rhythms or energy level or attention. One could imagine
that, kind of like a performance enhancing drug, that
could be done with a neurotechnology or neural interface. But
(40:33):
no one's done that yet in a way that's compelling.
People have talked about could it be kind of like
a coprocessor for your brain, like you know, somehow you
just know things. It's like having a smart AI assistant,
but it's inside your mind and it's much more seamless.
Speaker 3 (40:49):
But that is a really long way away.
Speaker 2 (40:51):
I mean, we're struggling to get, you know,
crude vision in so people can read a page. Now,
I mean, that's amazing, that's like very state of the art.
Or someone can slowly walk who has a spinal cord injury,
or someone can talk but not as eloquently as before
their ALS or before their stroke. So, given where we
(41:11):
are now, I think we're quite a ways away from
like beaming information in Oh.
Speaker 1 (41:15):
I totally agree with you on that. I do wonder
twenty five years from now, let's say, right if you
just took a shortcut and said, okay, look, I
(41:37):
want to listen to your covert speech, things you're not
saying out loud, and then I want to plug the
answer right back into your auditory cortex as though
you're hearing it, and then you know, beam wirelessly to
OpenAI or whatever exists in twenty five years from now. Yeah,
the question is could you ask a question and hear
the answer that way?
Speaker 2 (41:55):
My prediction is yes, I think that could be done.
I mean also, I think that could be done the
next five years. It just would still require a surgery
to be done accurately, And so would anyone want it?
Would we as a society choose to allow it? It
Speaker 3 (42:10):
gets into debates of people's agency over their health.
Speaker 1 (42:13):
Are there moral or ethical questions about that.
Speaker 2 (42:15):
I think these are just general kind of medical and
societal questions of do we allow people to take medical
risks to get certain abilities that they otherwise wouldn't have.
Speaker 1 (42:28):
One of the issues is about brain privacy, right, the
question of let's say I'm doing something that's recording my
covert thoughts, by which I mean, you know something that
I'm thinking, but I haven't actually pushed it out to
my motor cortex to say it yet. Who's the company
who has access to that? Do I want anybody accessing that?
Speaker 2 (42:49):
I think that's yeah, that's a real concern. We're not
there yet, so to be clear, there's no BCI that
can decode covert thought yet exactly.
Speaker 1 (42:57):
I'm talking twenty five years from now. Yeah. I mean,
this is one of the conundrums about where this is heading.
Speaker 2 (43:03):
Well, we're already dealing with inklings of that. So, for example,
in our system, because our participant is using this for
his day to day life. For example, one thing that
we implemented was a privacy mode where if he toggles
a button, it no longer saves that data. This is
an academic clinical trial. In general, we're really loath to
give up any data. I mean, it's so precious, and
(43:24):
these people are making these commitments to science,
we also want to be respectful that he might need
to have a really private conversation and we don't want
to even have any ability to access that. So that's
already something we're dealing with in the context of a
medical trial from an academic medical center. I think this
is a very high trust scenario. Of course, when you
(43:44):
have companies that are building these, we're going to want
to think about what rights, in that
case, patients or customers have to the data. Can the
data be used to improve the algorithms? Who owns the
benefit of that? What happens if a government subpoenas it?
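The privacy mode Sergei mentions is, at the software level, a simple but important design choice: decoding keeps running so the participant can still communicate, but nothing is stored while the toggle is on. Here is a hypothetical sketch of that pattern, not the lab's actual software:

```python
from dataclasses import dataclass, field

@dataclass
class SessionRecorder:
    """Hypothetical sketch: decoding always runs so the person can keep
    communicating, but neural data is only stored when privacy mode is off."""
    privacy_mode: bool = False
    saved_blocks: list = field(default_factory=list)

    def toggle_privacy(self):
        self.privacy_mode = not self.privacy_mode

    def handle_block(self, neural_block, decode):
        text = decode(neural_block)          # communication still works
        if not self.privacy_mode:            # storage respects the toggle
            self.saved_blocks.append((neural_block, text))
        return text

recorder = SessionRecorder()
recorder.toggle_privacy()                                    # private conversation
recorder.handle_block([0.1, 0.2, 0.3], decode=lambda b: "hello")
print(len(recorder.saved_blocks))                            # 0: nothing was saved
```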
Speaker 3 (43:59):
Right now, we have
Speaker 2 (44:02):
this speech BCI for people with vocal tract paralysis, meaning
that they know exactly what they're trying to say. The
words are clearly formed in their mind. They are trying
to speak it. Those commands are not reaching the muscles. Okay,
So we've shown that there is a very compelling therapy there.
Industry is going to come in and kind of productize it.
(44:22):
I think this is going to turn into a medical device
in the next five years. There is a much larger
patient population though with aphasia due to stroke, So there
the problem is one step further upstream, meaning.
Speaker 1 (44:35):
I mean they can't speak language, by the way, aphasia.
Speaker 3 (44:36):
Yes, well, there's different types.
Speaker 2 (44:38):
So sometimes with aphasia that means they can't understand language,
but with expressive aphasia, that means in many patients' cases
they want to communicate, they really know what they're trying
to say, sort of in a meaning sense, but
they can't find the right words for it. It's almost like,
you know, sometimes I can't remember a word, but that's
rare and I can usually remember it or explain in
(44:59):
other words. But if I couldn't remember most of the words,
that would be really frustrating and debilitating.
Speaker 3 (45:04):
And there's millions of.
Speaker 2 (45:05):
People that have strokes and partially recover but never fully recover.
They have a language disorder. Many of them have perfectly
normal intelligence and their personalities preserved and kind of everything
else is there, but they just can't form words.
Speaker 3 (45:21):
Can we help them?
Speaker 2 (45:22):
And this is something that our lab and many others
are starting to think about. The idea is, can we
basically do this thing that we've done with a speech BCI,
but now make a language BCI? Can we put electrodes
somewhere in the language network? And that is a lot
of the brain, which is both a good and a bad thing.
Speaker 3 (45:39):
Could we decode the meaning and this.
Speaker 2 (45:41):
Is kind of getting close to this idea of a thought,
which is not a very well defined term, but could
we decode the semantic meaning of what they're trying to
communicate and have let's say, a tablet in front of
them print out a sentence or speak a sentence where
they're saying, I'm happy to see you, or could you
hand me some water? Or my nose itches or I'm
not feeling well. Right, that thought, that communication intent
(46:03):
is still in there for many of these patients. We're
trying to develop a medical technology to help them, but
that starts getting pretty close to sounding like mind reading.
And so yeah, I think as an ethical question this
will potentially become relevant in the coming years if this
medical project succeeds.
Speaker 1 (46:24):
It's interesting because we mean different things by mind reading.
There are all these different levels of it, so even
what somebody is trying to say often masks what they're thinking.
I'm trying to remember this quotation from the poet Oliver Goldsmith,
who said something like I think the real purpose of
language is not to communicate intent but to hide it.
(46:44):
So anyway, so if somebody says, hey, you know, I'm
happy to see you, or, you know, whatever the
thing is they're saying, it may or may not be
what their thoughts actually are. It's just what their language is.
Speaker 2 (46:56):
Yeah, so we're still talking. I'm still talking about decoding
communication intent, and that's sort of, I think we
find it, a little bit reassuring because it's an active process.
Right now we're nowhere close; no
one even has an inkling of how to make a
device that can like read everything you know. You know,
you're not actively thinking about it, but it just knows
your whole childhood and all your deepest secrets and you
(47:18):
know what you think about everyone around you. That I
would not even know how to start to do that,
But for thinking what you're thinking actively or what you're
trying to communicate, that seems plausible. And there's some studies
using imaging that kind of you know, can do above
chance decoding of what someone's trying to communicate. We have
some preliminary data others do as well, So I think
(47:40):
that might happen.
Speaker 1 (47:41):
So let me ask you a few things. When will
paralysis be solved?
Speaker 2 (47:44):
I think there will be approved BCIs for paralysis in
about five years. That doesn't mean they'll be available everywhere.
They might be only available in certain markets. Maybe only
a few hospitals will initially be providing them, but that
will grow rapidly.
Speaker 3 (48:01):
Will it mean.
Speaker 2 (48:01):
Paralysis is cured? I think that's too strong a term.
Maybe that means you can walk slowly, you can move
your arm, but you maybe can't tie your shoelace.
Speaker 3 (48:10):
Initially.
Speaker 2 (48:11):
You can move a computer cursor really well, but that's
not the same thing as playing the piano.
Speaker 3 (48:16):
So I think the capabilities will keep getting better.
Speaker 1 (48:18):
And with ALS and dysarthria where someone can't articulate, well,
what are we looking at?
Speaker 3 (48:26):
Your prediction, it's actually the same.
Speaker 2 (48:28):
I think that the speech brain computer interfaces are going
to move very fast. I think that and cursor control will
probably be one of the first approved systems, even though
people have been trying to move robot arms or paralyzed limbs.
Speaker 3 (48:42):
For much longer.
Speaker 2 (48:43):
So if you're trying to decode what someone's trying to say,
or decode them trying to move a computer cursor or
type on a keyboard, the thing that they're trying to
control is a computer, and those are ubiquitous, they're everywhere, they're
Speaker 3 (48:55):
Cheap, they work really well.
Speaker 2 (48:56):
If you're trying to decode what someone's trying to move
with their arm, you either need to move a robot arm.
Robot arms are hard, they break often, they're not as
precise as people are.
Speaker 3 (49:07):
You know, where does it go? Does it go on
your wheelchair?
Speaker 2 (49:10):
Is it there with you in the shower? If it's
mounted on, like, if you have an amputation, is
Speaker 3 (49:16):
It mounted on your stump or on your shoulder? That
is hard. There's a lot of challenges there.
Speaker 2 (49:22):
So kind of the readout part for speech is very
hard because it's very fast. There's a lot of information
per second. But once you have that solved, making use
of it is actually really easy. You just send texts
to their computer or their phone, or you have their
tablet make sound, and that's something you can carry
with you all the time and it's really reliable. So
(49:42):
because for all those reasons, I think we're going to
have speech and also computer use BCIs hopefully starting to
hit the market in the next five years.
Speaker 1 (49:51):
Great and when you think about fifty years from now,
when you think about as you're retiring and you look
around the field, what do you see?
Speaker 2 (50:00):
I think BCIs will be, well, the term may not
even mean anything because it's going to be so wide.
I think many of the diseases that we struggle with
today are going to be treated with some sort of
technology inside the head or interacting with the head.
Speaker 3 (50:15):
Maybe it's somehow not.
Speaker 2 (50:16):
Invasive, whether that's paralysis, which is going to be I
think much faster than that. Or will we have systems
that help us regulate our mood, Will they treat psychiatric issues,
Will they perhaps reconnect parts of the brain that have
been disconnected due to aging or damage, or injury or stroke.
If we're talking about fifty years, a lot can happen
(50:38):
in fifty years, right, I mean technology is moving very quickly.
The interfaces will get better. So instead of
me being excited right now about recording from
a thousand neurons, in fifty years, could we be interfacing
with one hundred thousand or a million neurons?
Speaker 3 (50:53):
I think that's really plausible.
Speaker 2 (50:56):
Through tiny nano wires or biohybrids or focused beams that
are non invasive.
Speaker 3 (51:02):
A lot can happen.
Speaker 2 (51:03):
In fifty years, our neuroscience, I think, will be a
lot more advanced.
Speaker 3 (51:06):
We will not be limited to right now.
Speaker 2 (51:09):
We mostly understand the periphery. We understand movement, we understand
the senses really well because it's really easy to experimentally
manipulate those.
Speaker 3 (51:17):
We as soon as you get.
Speaker 2 (51:18):
Into the kind of the inside the center cognition intelligence,
how do we problem solve creativity? We don't understand that
really well, but I think at fifty years we will.
And part of that is because as we make these
medical systems, we will have access to human brains. So
think of this as a flywheel. So let's say someone
(51:38):
has a few thousand electrodes because they have a stroke
and they want to communicate. Maybe these are spread across
several different brain areas because you get different pieces of it.
Or maybe you get the prosody in one area primarily
and you get what they're trying to say in the
motor cortex. But you get some planning benefit and language
benefit from the temporal lobe. Okay, so let's say you
have four or five six areas that you're recording from. Well,
(52:00):
now you have a wealth of information that you can
use for other things. So some of these patients are
going to develop dementia over time, or they might be depressed,
or they might have OCD, And instead of having to
do a new brain implant with all the new risks
of that, you can just look at the data you're
already collecting and try to relate that to their mood
(52:21):
or what are they looking at? What are they trying
to remember? Oh, they're trying to remember where they put
their keys. Hey, Actually, because we have electrodes in the
temporal lobe, it's close to the hippocampus, it's cortex, it's
part of the memory system as well, everything's kind of
spread out. Well, maybe now we're seeing some neural correlate
of that memory process. Maybe we can even ask if they're
(52:44):
willing to do another clinical trial where we stimulate and
try to boost that memory, try to kind of help
nudge it to be remembered correctly. I think when we're talking about
fifty years that's going to happen. And so through this
process we're going to learn a lot more about how
the human mind works and thus how to fix it.
Speaker 1 (53:06):
That was my interview with Sergei Stavisky, a neuroscientist at
UC Davis and co-director of the Neuroprosthetics Lab.
We talked about what BCIs can do, what they might
do soon, and how we'll navigate the human questions that
they raise. What we talked about today was how a
person's intention can find its way back into the world
(53:28):
when bodies have lost function. Brain computer interfaces are opening
a new lane right now. These technologies are crude in
some ways, but they're getting better fast. Each year they
get a little faster and more expressive. So this is
how BCIs can restore autonomy and intimacy and dignity. And
when it's done right, you don't see the technology at all,
(53:51):
You just see the person again. So here's how I
see it. In the next five years, BCIs are going
to start looking less like research projects and more like appliances.
We're going to have fully implantable systems for communication. In
other words, at some point in the future, we'll be
looking at a small surgery, a wireless puck that goes in,
(54:13):
and a setup that takes minutes instead of hours. You'll
turn on your speech BCI or your BCI that controls
a computer cursor, and the key thing will be reliability:
these decoders will hold steady through years. And also identity:
The voice is going to sound just like you, your cadence,
(54:33):
your prosody, your humor at the end of a sentence.
Maybe rehab teams will have a neural therapist who tunes
your decoder the way that an audiologist tunes a cochlear implant.
And if I had a guess, this will all become
normal rather than newsworthy. Now around ten years out, we'll
get good feedback, with signals moving in both directions. So
(54:56):
a person who is suffering from paralysis can control
her hand through say electrodes in her motor cortex, and
you have another interface, say electrodes in her somatosensory cortex,
that's inputting information so that she feels a push back
with electrically evoked touch, and that loop makes the movements
(55:17):
smooth and automatic. This is all going to continue getting
smaller and better. Soon we'll have thin film options to
reduce the surgical footprint. The decoders will auto calibrate, they'll
borrow tricks from language models, and they'll figure out how
to adjust to your neural dynamics when you're tired or
(55:38):
stressed or boosted on caffeine. Eventually your BCI will speak
the same API language as your phone and home devices,
so that you can text or adjust the lights or
turn on appliances without moving a limb or making a sound.
And crucially, the privacy architecture has to evolve: inner
(56:02):
speech stays off limits by default, and your neural stream
lives behind consent gates. We'll need to have a kind
of airplane mode for the mind. Okay, And if I
were going to speculate on a quarter century from now,
I'm thinking that what we're looking at is very high
bandwidth arrays. These might be micro needles or flexible meshes,
(56:26):
or electrode stents living on the inside of the blood vessels.
Whatever the technology, it's going to give us coverage that
approaches the dexterity of natural hand control. Imagine playing a
piano with one of these. Imagine prosthetics and exoskeletons that
feel less like machines and more like natural limbs because
(56:49):
the brain sees and feels them just as part of
the body. And for communication, we'll get the full richness
of natural speech. Just imagine talking with a person with
a BCI and you hear the emphasis and ups and
downs of speech, and their laughter and their little half
swallowed syllables when people are negotiating turn taking, and singing.
(57:15):
And soon enough, I think, in our lifetimes for sure,
the science fiction edge of this all is going to
start to glow. So imagine a scene like this when
you step onto a train maybe thirty five years from now.
People are sitting there. It's crowded, and they're all speaking
private messages to their friends who are somewhere else. There's
(57:36):
no sound, the train is quiet. Each person's decoder is
locked onto their attempted speech, not their idle thoughts, and
every message is signed with a cryptographic watermark that
proves it came from that person's neural key. So you're
looking at a silent train car, but it's filled with conversations.
(57:57):
Or just imagine something simpler. Here's a carpenter who lost
his hand, but he's back at work with a prosthetic
hand that streams touch information into the brain, pressure and
But also he can feel the details of the grain.
He can tell the difference between pine and oak just
by running his sensory packed robotic fingers over it. And
(58:21):
the key is that he doesn't think about the device
at all. He just builds, just like you use the
high bandwidth sensory devices on your own hand, and you
rarely stop to think about it. Eventually, there'll be a
lot of legislation in place, because there are going to
be hard lines we choose as a society not to cross.
(58:43):
Not all thoughts should be digitized. We're going to need
neuro rights with teeth. We'll need on device processing that
keeps data local, where maybe you have your own descendant
of modern day LLMs living with you in your brain.
Whatever the case, we'll presumably keep asking philosophical questions about
(59:05):
our brains and ourselves, but we'll get to do it
with better and better tools than we have now. And
I think what this means is that we have more
in common with our ancestors of a thousand years ago
than we do with our descendants a century from now.
(59:29):
Go to Eagleman dot com slash podcast for more information
and to find further reading. Send me an email at
podcasts at eagleman dot com with questions or discussion and
check out and subscribe to Inner Cosmos on YouTube for videos
of each episode and to leave comments. Until next time.
I'm David Eagleman, and this is Inner Cosmos.