
September 25, 2023 49 mins

Can your thoughts be read with neurotechnology? When is measuring the brain like reading the mind? How close or far are we from being able to know if you're thinking about some particular thing you did or intend to do? What's hype and what's real? Join Eagleman for a deep dive into mind reading: what it means, where we are now, and whether your thoughts could ever be readable with new technologies.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Can we read minds?

Speaker 2 (00:07):
For those of you who follow the media around neuroscience,
it certainly seems like it. You've seen articles and heard
things about mind reading.

Speaker 1 (00:16):
So what is it? Could we run mind reading in

Speaker 2 (00:20):
airports so that we know who is carrying a bomb
or hatching a plot?

Speaker 1 (00:25):
Where are we right now with this technology?

Speaker 2 (00:28):
How close or far are we from being able to
know if you were thinking about some particular thing you
did or intend to do? Welcome to Inner Cosmos. I'm
David Eagleman, a neuroscientist at Stanford and this podcast is

(00:48):
all about the intersection of brain science with our lives,
both now and in the future. Today's episode is about
mind reading.

Speaker 1 (01:06):
What does it mean? Where are we with it now?

Speaker 2 (01:10):
Will we be able to read your mind by measuring
something from your brain? So strap in for a wild
ride today. So imagine this, twenty years from now. You
walk into the airport to take a flight across the nation.
But there are no metal detectors here anymore, or those
scanners that rotate around you looking for things hidden on

(01:33):
your body. Instead of simply knowing what you have on
your body, you find the airport now has portable brain
scanning such that the authorities can tell who is thinking
about bringing a bomb onto the airplane. And don't picture
a big, clunky fMRI machine like we have now, because

(01:57):
in two decades, brain scanning could certainly be done at
a distance, getting some signature of brain activity of every
person walking through. Now, could this work? I want to
put aside any legal and ethical conundrums for just a moment,
because I want to ask would this be technically possible?

(02:21):
So let's start with what do we mean by mind reading?
This is a term that's always been talked about by
magicians and FBI agents and forensic psychologists, and what it
means historically is reading somebody's body language or the words
they choose, or their body posture to understand what might

(02:44):
be going on inside their heads under the hood where
you can't really see what's happening. And so the next
step for mind reading is to use technology to allow
us to do this a little bit better than an
observant person would. And one example of this is the
lie detector test. So I'm going to do a separate

(03:06):
episode on lie detection. So I'll just say briefly here
that lie detection is used in courts all over the
world. It's not actually detecting a lie; it is simply
detecting a stress response that is usually associated with lying.
So this technology started in the early nineteen hundreds when

(03:26):
people realized that often when somebody is lying, there's a
stress response. You can detect when people are stressed about
the lie. Now, this has been researched with breathing, blood pressure,
heart rate, pupil dilation, and the most common measure is
what's called the galvanic skin response, and all these are

(03:49):
external signals that the person is stressed. Now, these aren't
terrible measures, but they're not perfect either, and that's why
they're accepted by some court systems and not by others.
And they're not perfect because, among other things, there are
people that are very good at lying and it doesn't
stress them out in any way to do so, and
so a galvanic skin response or something else is not

(04:11):
going to reveal anything about stress in their situation. So
the point I just want to make here is there's
actually quite a gap between the notion of a lie
and something that you can read off the skin. And
you can see this isn't exactly mind reading here, but
it's something removed by a few degrees, So what about

(04:34):
reading something directly from the brain. Just before I get
into that, I'll just mention that some years ago I
talked with a startup that was using electroencephalography, or EEG,
with little electrodes that you stick on the outside of
the head. And this company's market was long distance truckers,

(04:54):
And so what they did is they would sew these
EEG electrodes into a baseball cap so that the trucker
could wear the cap and the system could tell if
he was getting sleepy, and if so, then the system
would do things like alert the driver to pull over.
So it was a pretty straightforward use case, but it
turned out the company was having a very hard time

(05:16):
getting traction for one main reason. The truckers were concerned
that their bosses would be able to read their minds.
Now that struck me as interesting, because we're never going
to get very far in knowing what is happening inside
a brain with a crude technology like EEG. People are

(05:38):
using EEG to let's say, control a wheelchair, because you
can find a few signals that can be reliably distinguished,
and then you can use one signal to mean go
and another one to mean stop and left and right.
But in today's discussion about reading minds, I want to
ask if we can use technology to really level up beyond

(06:00):
simply asking, is somebody having a stress response, or can we
distinguish four signals to direct a wheelchair, and instead ask: what
precisely is this person thinking about? How do we do
mind reading via more direct measurements of the brain, and
how close are we to that right now? Now, what
I'm going to tell you about the field that the

(06:21):
media typically calls mind reading started getting the foundations laid
in the nineteen fifties when people first started measuring signals
from the brain. And what they found is that these
brain signals were well correlated with things in the outside world. So,
for example, when someone sees a horizontal line, that activates

(06:43):
particular neurons, making them pop off in the visual cortex.
So in theory, if you had an electrode dunked into
the visual cortex and you didn't know what the person
was seeing, but then you saw these cells pop off,
you could know that they were seeing a horizontal line.
This is a very crude form of mind reading. Now,

(07:04):
almost no one on the planet has electrodes
in their brain, so how could you read something meaningful
about their brain activity. So some decades ago, functional magnetic
resonance imaging or fMRI burst onto the scene, and this
technique allows you to non invasively measure the activity across

(07:27):
the whole brain. The problem is that it's really low resolution,
so instead of knowing what individual neurons might be doing,
you now can only see great, big blobs that represent
activity that covers hundreds of thousands or millions of neurons.
So people didn't think that fMRI was going to be

(07:48):
able to tell you much about the details of what
someone was seeing, like exactly what kind of line or
picture or video someone is watching. But some years ago
researchers in England and in America started doing some
very cool analyses. And what they did is they looked
at lots of visual inputs and they measured the corresponding

(08:12):
outputs from the fMRI. So you show something and you
measure the brain activity, and then you do the next
one and the next one. And what they found is
that even though the individual blobs can't tell you much,
if you look at the patterns of blobs with lots
and lots of example inputs, you can then use machine
learning here to help you, and you can start pulling out

Speaker 1 (08:36):
Some pretty good correlations.

Speaker 2 (08:38):
You can start knowing what was presented visually just by
looking at the details of this multi blob output. And
this really hit a new high-water mark in twenty eleven when
Jack Gallant and his team at Berkeley did an amazing experiment.
He had participants watch videos for hours and hours and

(09:00):
they measured each frame of the video and exactly how
the brain responded, tons and tons of input and output.
And then they could show the participant a video and
just by looking at the fMRI brain activity these patterns
of blobs, the researchers could crudely reconstruct what the video

(09:21):
must have been that

Speaker 1 (09:22):
was being looked at.

Speaker 2 (09:24):
So they look at your brain activity and the computer
reconstructs this must have been something like a bird flying
from right to left across the screen, and the real
video was an eagle flying from right to left across
the screen. Or here's the reconstruction of a person talking
or a whale breaching or whatever. And the original videos

(09:45):
that were shown to the participants were really really similar.
It was extremely impressive, and I'd encourage you to watch
their videos, which I've linked at Eagleman dot com slash
podcast. Now, as you can imagine, the media blew up
at this point, and everyone called this mind reading, and
it's only gotten more impressive and fine tuned from there.
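
To make that show-a-stimulus, measure-the-blobs, learn-the-mapping recipe concrete, here is a minimal sketch in Python. The data are invented and the model is a plain ridge-regularized linear map, not the Gallant lab's actual pipeline, but it captures the logic of decoding by identification: which candidate stimulus best explains a new brain pattern?

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each stimulus is a small feature vector
# (crude descriptors of an image or video frame), each response is the
# pattern across a few hundred fMRI "blobs" (voxels).
n_train, n_features, n_voxels = 200, 20, 300
stimuli = rng.normal(size=(n_train, n_features))
true_map = rng.normal(size=(n_features, n_voxels))          # the brain's unknown mapping
responses = stimuli @ true_map + rng.normal(scale=0.5, size=(n_train, n_voxels))

# Fit a ridge-regularized linear encoding model: stimulus features -> voxel pattern.
lam = 1.0
W = np.linalg.solve(stimuli.T @ stimuli + lam * np.eye(n_features),
                    stimuli.T @ responses)

# Decoding by identification: given a new brain pattern, ask which of
# several candidate stimuli would have predicted it best.
candidates = rng.normal(size=(10, n_features))
true_index = 3
new_pattern = candidates[true_index] @ true_map + rng.normal(scale=0.5, size=n_voxels)

predicted = candidates @ W                       # predicted voxel pattern per candidate
errors = np.linalg.norm(predicted - new_pattern, axis=1)
print("decoder picks candidate", int(np.argmin(errors)), "- true answer:", true_index)
```

Real experiments use tens of thousands of voxels and far richer stimulus features, but this train-on-pairs, identify-by-best-match structure is the same one behind the video reconstructions.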

(10:08):
Now the computer also spits out the words, the concepts,
that are associated with a video while you're watching, and
Gallant's team has reversed the machine learning to show
maps of where concepts sit in the brain. So if
it's the word marriage or job or dog or whatever,
you can see which areas of the brain care about

(10:30):
these concepts.

Speaker 1 (10:32):
Now, I want to note that the maps

Speaker 2 (10:34):
of concepts are slightly different for each participant, and I'm
going to come back to this in a bit, but
for now, I just want to emphasize how mind blowing
this all is. And when I first read this paper
about reconstructing videos in twenty eleven, I was so inspired
and immediately had an idea in my lab to see
if we could measure dreams. Because dreams are simply activity

(10:57):
in the primary visual cortex. Dreams are experienced as visual
because there's activity in the visual part of your brain,
and your brain interprets that as seeing, so even though
your eyes are closed, you are having full, rich visual experience.
So I figured we should be able to just measure
that and then interview people about what they roughly saw

(11:20):
in their dreams. Or perhaps we give them a couple
of choices about scenes that might have appeared in their
dream and see which one they pick. Anyway, I started
this project, but before I could finish it, my colleagues
in Japan, led by Yukiyasu Kamitani, published a beautiful paper
on this same thing. His was the same approach as

(11:40):
what Gallant had done at Berkeley. You gather a bunch
of data about people watching videos in the fMRI, and
then you use a machine learning model to do the
neural decoding. Now, with dreaming, it's a little more difficult
to know whether you're actually getting this right, because the
people have to wake up and tell you what they remember,

(12:01):
and then you compare that to what you think you
might have decoded. And of course you can't know the
exact timing of when they saw what. But nonetheless this
gave the rough start to a dream decoder that could
tell you what you were dreaming about. And the same
idea applies: as with figuring out what someone might have seen, you

(12:21):
can do the same thing with what someone heard. This
started in twenty twelve in the lab of Robert Knight,
also at Berkeley, who showed that you could reconstruct what
words a person was hearing by using surface electrodes right
on the surface of the brain. This is called intracranial
EEG, or iEEG. So by presenting lots of spoken words

(12:45):
to them and measuring the neural activity here, they could
reconstruct which word a person was hearing. And a paper
just came out in twenty twenty three showing now that
you can use even more sophisticated machine learning models to
reconstruct the music that someone is listening to. Just from
eavesdropping on the neural activity, you can figure out what

(13:09):
the music must have been. And specifically, this new paper
reconstructs Pink Floyd's song Another Brick in the Wall. So
they play the music to the person and they look
at all this up and down electrical activity on the
surface of the cortex, and they can figure out the
words of the song, and they can figure out all

(13:29):
kinds of other information about the prosody of the music,
the rhythm and intonation and stress and so on, and
they can put this all together into a reconstruction of
what the whole song must have been just from

Speaker 1 (13:44):
the neural activity.

Speaker 2 (13:55):
So I want you to notice one thing. This is
a totally different measure than what the other labs did.
This lab is using intracranial EEG, which gives you really
fast small signals, whereas fMRI gives really slow large signals.
But it doesn't matter for a machine learning model. All

(14:15):
you have to do is give lots of inputs and
measure lots of outputs, and as long as the output
is reliably correlated to that input, it doesn't really matter
what the measuring device is. You can still look at
the output and say, okay, look, this must have been
the input. This must have been the video shown because
it led to this pattern of activity in the fMRI,

(14:39):
or this must have been the music played because it
led to this pattern of activity in the intracranial EEG.
And you can take this same decoding approach with other
parts of the brain too, not just to decode the
input that someone must have seen or heard, but also
to understand what the intention is for an output, in

(15:00):
other words, how the brain wants to move the body.
In other words, you can eavesdrop on the visual cortex
to understand what's in front of someone's eyes, or on
the auditory cortex to understand what's piping into their ears.

Speaker 1 (15:14):
And in the same way, you can decode

Speaker 2 (15:16):
the signals from their motor system to understand how they're
trying to move. And this is exactly what neuroscience is
doing with people who are paralyzed. Typically, a person who
is paralyzed has damage to their spinal cord such that
the signals in their brain aren't able to plunge down
the back and control the muscles. But their brain is

(15:37):
still generating the signals. It's just that the signals aren't
getting to their final destination because of a broken roadway
or a degenerated roadway. But the key is that their
brain is still generating the signals just fine. So if
you could listen to those signals in the motor cortex,
could you exploit those to make the action happen in

(16:00):
the outside world. Yes, And this kind of incredible work
has been happening for two decades now. It started when
researchers at Emory University implanted a brain computer interface into
a locked in patient named Johnny Ray, and this allowed
him to control a computer cursor on the screen simply

(16:22):
by imagining the movement. His motor cortex couldn't get the
signals past a damaged spinal cord, but the implant could
listen to those signals and pass along the message to
a computer. And by two thousand and six there was
a former football player named Matt Nagle who was paralyzed,
and he was able to get a brain computer interface,

(16:43):
a little grid of almost one hundred electrodes implanted directly
into his motor cortex, and that allowed him to control
lights and open an email, and play the video game
Pong and draw circles on the screen. And that's mind reading.
His brain imagined moving his muscles, which caused activity in

(17:06):
his motor cortex, and the researchers measured that neural activity
with the electrodes and crudely decoded that to figure out
the intention. And by twenty eleven, Andrew Schwartz and his
colleagues at the University of Pittsburgh built a beautiful robotic arm,
and a woman who had become paralyzed could imagine making

(17:27):
a movement with her arm, and the robotic arm moved. Now,
as I said, when you want to move your arm,
the signals travel from your motor cortex, down your spinal cord,
to your peripheral nerves, and to your muscle fibers. So
with this woman, the signals recorded from the brain just
took a different route, racing along wires connected to motors

(17:49):
instead of neurons connected to muscles. In an interview, this
woman said, I'd so much rather have my brain than
my legs. Why did she say this, Because if you
have the brain, you can read signals from it to
make a robotic body do things. You can decode the
activity in the brain. Now, in episode two, I talked

(18:10):
about many of these examples and a lot more, and
I pointed out that we shouldn't be surprised that we
can figure out how to move robotic arms with our thoughts.
It's the same process by which your brain learns to
control your natural, fleshy limbs. As a baby, you flailed
your appendages around, and you bit your toes, and you

(18:32):
grabbed your crib bars, and you poked yourself in the eye,
and you turned yourself over. For years you did that,
and that's how your muscles learned to mind read the
command center of the brain. Eventually you could walk and
run and do cartwheels and ice skate and so on,
because your brain generates the signals and the muscles do

(18:53):
as commanded, and so the brain naturally reads from these systems,
and we can do so now artificially. So nowadays you
have people who are paralyzed successfully running brain controlled electric wheelchairs.

Speaker 1 (19:08):
It's the same idea.

Speaker 2 (19:09):
You measure brain activity in some way, either with surface
electrodes on the outside, or with higher resolution like electrodes
on the surface of the brain, or even with a
grid of electrodes poked a little bit into the brain
tissue and you read what the motor intention is, like
move forward or turn right, and you control the chair accordingly.
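
As a toy illustration of that last step, here is a minimal sketch, with made-up feature vectors rather than real recordings, of turning a handful of distinguishable brain signals into wheelchair commands. A simple nearest-centroid classifier stands in for whatever decoder a real system would actually use.

```python
import numpy as np

rng = np.random.default_rng(1)
commands = ["forward", "left", "right", "stop"]

# Hypothetical calibration data: for each command the user imagines the
# movement while we record a feature vector (say, band power on a
# handful of electrodes). Real recordings are replaced by invented numbers.
n_trials, n_feat = 40, 16
signatures = rng.normal(scale=3.0, size=(len(commands), n_feat))   # per-command pattern
X = np.vstack([signatures[i] + rng.normal(size=(n_trials, n_feat))
               for i in range(len(commands))])
y = np.repeat(np.arange(len(commands)), n_trials)

# A minimal nearest-centroid decoder: average the training trials per command.
centroids = np.array([X[y == i].mean(axis=0) for i in range(len(commands))])

def decode(features):
    """Map a new brain-feature vector to the closest learned command."""
    distances = np.linalg.norm(centroids - features, axis=1)
    return commands[int(np.argmin(distances))]

# A new trial in which the user imagines turning right.
new_trial = signatures[2] + rng.normal(size=n_feat)
print("decoded command:", decode(new_trial))   # expected: "right"
```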

(19:29):
And this sort of work is exploding. A few years ago,
Stanford researchers implanted two little arrays of microelectrodes in a
small part of the brain, the premotor cortex, in a
person who was paralyzed, and then the person was able
to do mind writing, which was turning the signals from
the motor cortex into handwritten letters on the computer screen.

(19:54):
So the person thinks about making the hand motions to
write big letters, and then you see the letters get
drawn on the computer screen and the person could do
about ninety characters a minute.

Speaker 1 (20:06):
And then just in August

Speaker 2 (20:08):
twenty twenty three, two amazing papers came out side by
side in the journal Nature, and by the way, I'm
linking all these papers at eagleman dot com slash podcast.

(20:33):
The first paper was from Stanford. They were working with
a person with ALS. So this person had a growing
paralysis and at this point couldn't speak understandably anymore. So
the team put in two small electrode arrays. These are little,
tiny square grids of electrodes. They're small, like three
point two millimeters on a side, way smaller than a penny,

(20:57):
and they dunk these into an area just in front
of the motor cortex, a pre motor area called ventral
area six. Okay, I'm going to skip all the details,
because the important part is that the researchers said, look,
given these signals we're seeing, what is the probability of
the sound the person is trying to say, and also

(21:19):
what are the statistics of the English language, And they
combine those probabilities to figure out the most likely sequence
of words the person is trying to say, just by
looking at signals in the brain, and it worked really well.
If the person tried to say any one of fifty words,

(21:39):
the computer got it right nine out of ten times,
And when the person trained up on a huge vocabulary
of over one hundred thousand words, the computer got it
right three out

Speaker 1 (21:49):
of four times.
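
The probability-combining step described above is, in spirit, Bayes' rule: weigh how well the neural signal fits each candidate word against how likely that word is given the statistics of English. A toy sketch with invented numbers, not the Stanford team's actual decoder:

```python
import numpy as np

# Toy vocabulary and invented probabilities.
vocab = ["water", "walker", "wander", "thirsty"]

# P(neural signal | word): how well the measured premotor activity matches
# the articulation pattern for each candidate word (made-up values).
p_signal_given_word = np.array([0.30, 0.35, 0.25, 0.10])

# P(word | context): language statistics, e.g. what tends to follow
# "I want some ..." in English (also made-up values).
p_word_given_context = np.array([0.60, 0.02, 0.03, 0.35])

# Combine the two sources of evidence (Bayes' rule up to a constant) and renormalize.
posterior = p_signal_given_word * p_word_given_context
posterior /= posterior.sum()

for word, p in zip(vocab, posterior):
    print(f"{word:8s} {p:.2f}")
# The neural signal alone slightly favors "walker"; the language statistics
# pull the final decision to "water".
```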

Speaker 2 (21:50):
This is incredible because the person is not using his
mouth but only brain signals, and the researchers could decode
those and hear the person talking with them. And as
I said, at the exact same time that paper came out,
a second paper in Nature came out from UC San
Francisco and UC Berkeley. It was the same basic idea,

(22:14):
but here they were using surface electrodes on the brain.
This is intracranial EEG. So imagine two Post-it notes up
against the surface of your brain. It sort of looks
like that. Anyway, slightly different signals spread over sensory and
motor cortex. And they worked with a woman who had
had a brainstem stroke eighteen years ago, and she can't

(22:37):
speak and she can't type. So just imagine being able
to think clearly but have no capacity for output. It's
a nightmare, right? So they were not only able to
decode her words what she was trying to say, but
they were also able to decode what signals her brain
was trying to send to her facial muscles, and they

(22:58):
made an avatar on the screen that would speak her words.

Speaker 1 (23:03):
And show her facial expressions.

Speaker 2 (23:05):
And this is so amazing because it's taking the signals
that are hidden on the inside of the skull and
surfacing these, exposing these, turning these into something we can
understand on the outside and watch on the computer screen.
So this certainly seems like the kind of thing we
would want when we talk about mind reading. And I

(23:26):
also want to say that the researchers and surgeons who
are doing this are doing incredible pioneering work. But after
we've lived with these mind bending results for a while,
it seems easy and obvious that this should be possible.
We're finding these signals that are often complex and cover
giant swaths of neurons, but you know that they're correlated

(23:49):
with movements or pictures or sounds, and you simply throw
machine learning at it to decode the relationship. And once
you've read the brain signals and correlated them to their output,
you can now figure out the intention. So all this
is amazing news about where we are right now. And
if you were to drop off from this podcast now,

(24:11):
you'd probably say that we have achieved mind reading. But
here comes the plot twist into Act two. It seems
like mind reading is here, and if you judged just
from magazine and news articles, you would be certain that
we're reading minds and that the next step is to

(24:32):
read everyone's mind and their innermost thoughts. But I want
to explain why we are actually quite distant from that.
What I've been telling you about is brain reading, and
it's amazing, but we should reserve the term mind reading
for reading the contents of your mind, your thoughts. And

(24:53):
as stunning as all that current research is, it's not
reading thoughts.

Speaker 1 (24:58):
Why? Because unlike a

Speaker 2 (25:00):
picture or a piece of music, or a decision about
which way to go next, a real thought that crosses
your mind can be a pretty different creature. Now, with
everything I've told you so far, you might say, well, fine,
it seems like reading someone's mind is just the next
layer of resolution, like going from black and white TV
to color TV. But I'm going to suggest that analogy

(25:24):
is a false one and we're going to unpack that now.

Speaker 1 (25:28):
So really take stock of the thoughts

Speaker 2 (25:31):
That you have in a day, their complexity and their weirdness,
that'll allow you to understand whether and how we could
really do mind reading. So the other day I was
driving on the highway and I glanced at a billboard,
and I really took stock of the three second stream
of thoughts in my head. It went something like this, Oh,

(25:51):
there's the billboard again for that software company. Tommy first
showed me that app six years ago, or maybe it
was seven years ago. Tommy's beard was so bushy, which
looked unusual on such a young face. Whatever happened to
him and that girl that he was dating, what was
her name? Jackie? Jane?

Speaker 1 (26:11):
I wonder how

Speaker 2 (26:12):
Tommy's doing now. Ah, that reminds me. I need to
send him an email, because I think he sent me
one a few weeks ago about a paper he wanted
to co author. Now, if you could read out that thought,
that's what I would call mind reading.

Speaker 1 (26:25):
Not what's the picture you're

Speaker 2 (26:27):
seeing or the sound you're hearing, but that kind of thought,
in all of its weirdness and richness. Now you may
be thinking, but wait, didn't I just hear about some
other Nature paper in twenty twenty three that seems to
do just that reads people's thoughts sort of. My colleague
Alex Huth at UT Austin and his grad student Jerry

(26:49):
Tang did a wonderful piece of work. He used fMRI
and married it to a large language model GPT one.
They put people in the scanner and they had them
listen over many days to sixteen hours of storytelling podcasts
like The Moth. So the researchers know the words that
were said, and they measure what's happening in the brain

(27:11):
and they throw the AI model at it to get
a clearer and clearer picture of what words cause activation
in what areas. Then you have the person think about
a little story. They say the words in their head,
like quote, he was coming down a hill at me
on a skateboard and he was going really fast and

(27:32):
he stopped just in time. And you measure the brain
activity and you stick the AI model on it to
tell you what words were being thought of. Now, the
way they use GPT one is to narrow down the
word sequences by asking what words are most likely to
come next. So you take all this noisy brain activity

(27:52):
and you force it down a straight and narrow path
that makes it sound like sentences. So the decoder doesn't get
all the words, but the idea, the hope, is that
it gets the gist, right? So does it? Well,

Speaker 1 (28:07):
Sort of?

Speaker 2 (28:08):
It depends how loosely you're willing to interpret things. So,
for the sentence about the skateboard, that internally thought sentence
again is he was coming down a hill at me
on a skateboard and he was going really fast and
he stopped just in time. And the decoder comes up
with he couldn't get to me fast enough. He drove

(28:28):
straight up into my lane and tried to ram me
which is not really like the original sentence at all,
but if you squint really hard, you could say the
gist is similar. Or here's another example. The participant thought quote,
look for a message from my wife saying that she
had changed her mind and that she was coming back,

(28:51):
And the decoder translated this as: to see her. For
some reason, I thought she would come to me and
say she misses me.
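
For a concrete picture of how a next-word model "forces noisy brain activity down a straight and narrow path," here is a minimal beam-search sketch. The probabilities are invented and a tiny hand-made table stands in for GPT-1, but the structure, keeping only word sequences that score well on both the neural evidence and the language model, is the idea being described.

```python
# Toy next-word model: P(next word | previous word). In the published work
# this role is played by GPT-1; here it is a small hand-made table.
next_word_prob = {
    "he":      {"was": 0.6, "stopped": 0.2},
    "was":     {"coming": 0.5, "going": 0.4},
    "coming":  {"down": 0.7, "up": 0.2},
    "going":   {"down": 0.3, "fast": 0.5},
}

# Toy "neural evidence": how strongly the brain activity at each moment
# supports each candidate word (invented numbers, not real fMRI scores).
evidence = [
    {"he": 0.8, "was": 0.1},
    {"was": 0.6, "stopped": 0.3},
    {"coming": 0.5, "going": 0.4},
    {"down": 0.7, "fast": 0.2},
]

def decode(beam_width=3):
    """Beam search: keep only the few word sequences that score well on
    BOTH the neural evidence and the next-word model."""
    beams = [([], 1.0)]
    for step in evidence:
        expanded = []
        for words, score in beams:
            for word, p_neural in step.items():
                p_lm = 1.0 if not words else next_word_prob.get(words[-1], {}).get(word, 1e-6)
                expanded.append((words + [word], score * p_neural * p_lm))
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0]

words, score = decode()
print(" ".join(words))   # "he was coming down"
```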

Speaker 1 (28:59):
So exciting stuff.

Speaker 2 (29:00):
But it's far from perfect for even a simple sentence.
So let's return to this issue of whether you could
tell if someone is trying to carry a bomb onto
a flight. You might argue that even if the system
is imperfect, you might be able to see something cooking
under the surface about a bomb, and maybe that could

(29:23):
get us close to mind reading. The problem, of course,
is that lots of people in the airport are thinking
about a bomb. For almost everybody, that's: I sure hope
there's not a bomb on this plane. And certainly if
you know that your government in twenty years has perfected
remote neuroimaging and is seeing if you're thinking about a bomb,

Speaker 1 (29:46):
then you can be guaranteed that everyone is going to
be thinking of a bomb.

Speaker 2 (29:49):
It's like asking someone to not think of a yellow bird.
As soon as I ask that, you can't help but
to think about a yellow bird.

Speaker 1 (29:57):
The more you try not to,

Speaker 2 (29:59):
the more your brain is screaming off with yellow feathers.
But what I want to make really clear is that
the problem with mind reading is even far tougher than that.
We need to recognize that our thoughts are much richer
than sequences of words. When you really take into account
all the color and emotion of our thoughtscape, most of

(30:23):
it is beyond just the words. So really consider thoughts
that you have. I'll give you another one of mine
from the other day. I walked into a restaurant and
I smelled syrup being poured on a pancake, and
that reminded me of the time that I was in
the tenth grade and sat with my friend at the
International House of Pancakes.

Speaker 1 (30:43):
I was wearing

Speaker 2 (30:44):
a striped sweater and my friend was in his Def
Leppard concert T-shirt, and we had the thrill of
being away from our parents and out on our own,
just entering our teen years, and we were flirting with
the waitress who had a German accent, who was at least
seven years older than us, and she was kind enough
to give us the illusion of flirting back, or so

(31:05):
we thought at the time, but now a better interpretation
would perhaps be that she just wanted a better tip.
And I asked her to help me with a German
vocabulary word that I was working on, and she got
uncomfortable and wouldn't do it. And suddenly I was seized
with the suspicion that she wasn't actually German after all,
but for some unimaginable reason, had been pretending to be German.

(31:30):
Was she in fact American or from somewhere else in Europe?
What was her reason for telling us she was German
and putting on that accent when she clearly didn't know
how to speak the German language. Now that memory, that
moment in time involves territories all over my brain. The
taste of the pancakes, the smell of the coffee, the
sights and colors of the pancake house, the sound of

(31:53):
my friend's fits of laughter, but also things that are
more subtle, more difficult to simply read out, like my
interpretation then and now of the reasons behind her behavior,
my feelings of doubt and skepticism, and for that matter,
of a confused and unrequited pubescent crush and mystery. This

(32:15):
is the type of thing that's not stored in any one
blob in the brain. It's instead stored in a distributed
fashion that covers all territories of the brain. And more importantly,
it's not just words that I'm thinking of. Most of
it is subtle events and emotions, and I have to
actively work to translate them into words to even communicate

(32:36):
this memory on a podcast. But words are not how
I experience it as it flips through my mind, and
hence the challenge of reading out something like that. So,
despite the tremendous progress that's going on, I want us
to retain this distinction between brain reading, guessing what video

(32:57):
you're seeing or song you're hearing, or word you want
to say, and mind reading for reading out a rich
internal experience of some thought or reminiscence or plan for
the future. Now you might say, Okay, look, that's a
tough problem to get the real stream of consciousness kind
of thoughts. But this should become possible as we get

(33:18):
better resolution in measuring the brain. So let's consider whether
that challenge seems realistic. So one thing we might need
would be a technology that could measure the tens to
hundreds of little electrical spikes every second in every single
one of the eighty six billion neurons in the cortex

(33:39):
in real time and record those results. Now, that might
sound like a straightforward technology problem, and in this century
we're used to going out and solving big problems. But
I want to be clear that this would make terabytes
of data every minute, and that's a ton of data.
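
As a rough back-of-the-envelope on that claim, with round numbers and a generously compressed few bytes per recorded spike:

```python
# Back-of-the-envelope for whole-brain spike recording (round numbers).
neurons = 86e9            # the episode's figure for neurons in a human brain
spikes_per_second = 50    # "tens to hundreds" of spikes per second; take a middle value
bytes_per_spike = 4       # a timestamp plus a neuron index, generously compressed

bytes_per_minute = neurons * spikes_per_second * bytes_per_spike * 60
print(f"{bytes_per_minute / 1e12:,.0f} terabytes per minute")   # on the order of a thousand
```
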
But let's imagine that we flex all our technical muscles
and figure out how to solve that. Amazing, But that's

(34:03):
not actually the problem. The technical feat is just the
warm up problem. The fundamental problem is the interpretation. Let's
say I handed you this file of the eighty six
billion neurons popping off for two seconds, and I told
you this is a particular memory. So you're looking in
great detail at a pattern, a swirl of billions of neurons.

(34:26):
But imagine that I don't tell you if this was
measured from Fred's brain or from Susie's brain. This exact
pattern among the neurons would mean totally different things in
different heads. Fred grew up in China and is thinking
about the time that he climbed the tree and saw
the panoramic view, and his mother, who suffered from anxiety,

(34:48):
yelled at him because she was scared because she had
lost her younger brother in an accident. While in Susie's
brain this pattern would mean something entirely different. She grew
up in New Mexico, and for her it would mean
she's thinking about the time that she went on a
hike and saw wild horses and got in trouble because
her friend had parked his truck behind her mother's car,

(35:10):
and her mother couldn't get out and they had to
run all the way down the mountain to move his truck. Now,
how could such different memories have the same pattern of neurons?
It's because who you are is determined by the exact
wiring of your brain, which is determined by everything that
has come in, all your memories and experiences of your life.

(35:32):
Everything about your identity is stored in the particular configuration
of your tens of billions of neurons and hundreds of
trillions of connections. That's what makes you you. When you
learned that a cloud is the same thing as fog,
just at a different elevation, or you learned that Isaac
Newton was into alchemy, or you learned that the Basenji

(35:56):
breed of dog is from Africa and has a curled tail,
this is all stored by changes in your network. That's
what it means to learn something and to recall it later.
It means your network has changed to store the information.
And so as you go through life and you have
your unique experiences, your brain gets wired up in a

(36:17):
totally different way than someone else's, with the end result
being that your enormous neural forest, your inner cosmos, is
totally different than someone else's. And that means that if
I measured neuron number sixty nine hundred and thirty five
in your head, it would tell me little to nothing

(36:38):
about what that neuron firing means in someone else's head.
And this is of course what the researchers at Berkeley
and Austin and Stanford find. To guess at the words
that someone is trying to communicate, they have to train
the model on each person individually.

Speaker 1 (36:54):
The AI model

Speaker 2 (36:55):
trained on one person's brain has no chance of reading
another person's brain. So to do any sort of meaningful

(37:20):
mind reading, we would need to do what's called system identification.
What is system identification? It's a process in engineering of
figuring out the internal structure of a system by sending
in inputs and looking at the outputs. Like if I
hit this button, it beeps, and I hit that button

(37:40):
and it boops, and so on through all the buttons
and all the combinations, and I can get a pretty
good idea of what the system must look like on
the inside. You give lots of inputs, you look at
all the outputs, and that's how you figure out the
parameters of a control system, or the dynamics of a
physical system, or in this case, the structure of a
neural network, a person's brain. The key is that if

(38:05):
you want to do meaningful mind reading, you'd really have
to know essentially everything about an individual person.
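
For the textbook version of that idea: system identification, in its simplest form, means fitting the parameters of a model from nothing but input/output pairs. A minimal sketch with a made-up one-dimensional linear system, nothing brain-specific:

```python
import numpy as np

rng = np.random.default_rng(2)

def black_box(u, _state=[0.0]):
    """An unknown system we can only poke with inputs and observe.
    Hidden dynamics: x_next = 0.8 * x + 0.5 * u, output = x plus a little noise."""
    _state[0] = 0.8 * _state[0] + 0.5 * u
    return _state[0] + rng.normal(scale=0.01)

# Send in lots of inputs, record the outputs.
inputs = rng.normal(size=500)
outputs = np.array([black_box(u) for u in inputs])

# Fit y_t = a * y_{t-1} + b * u_t by least squares on the observed pairs.
X = np.column_stack([outputs[:-1], inputs[1:]])
a, b = np.linalg.lstsq(X, outputs[1:], rcond=None)[0]
print(f"estimated a = {a:.2f}, b = {b:.2f}   (true values: 0.8, 0.5)")
```

With enough input/output pairs, the hidden parameters are recovered; the argument in the episode is that a brain would demand this kind of probing at an impossibly personal and lifelong scale.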

Speaker 1 (38:14):
So I want to note two things here.

Speaker 2 (38:15):
First, in a sense, this system identification is what long
term relationships are about. Let's say with a spouse, you
see a person in lots of different situations, and in
each you see how they react and how they handle things,
And this is what it means to get to know
someone well. But second, we can never do it with

(38:38):
very high precision. For anyone who's been in long term relationships,
you know that even with years of experience, our models
of another person are always impoverished. One of the things
that I enjoyed the most about being in a relationship
with my spouse is to watch when she has a
funny facial expression or stares at something unusual

(39:00):
that I wouldn't have expected her to stare at, and
trying to figure out what in the world is going
on inside her head. And like any person in a
long term relationship, I'll tell you that often I have
no idea, or I'll make a guess or have an assumption,
but I am almost certainly doomed to be incorrect. And
this is an example of a more general issue that

(39:22):
I've mentioned in other episodes, which is that all we
have is our own internal model of the world, and
we can never really know what's happening inside someone else's head,
and so we make our guesses, and we say things
to people like, oh, I know exactly how you feel,
and we mean well when we say that, but we're
always off in our assumptions. It is difficult or impossible

(39:46):
to understand exactly how someone

Speaker 1 (39:48):
else is feeling. Why?

Speaker 2 (39:51):
It's because we've all had a history of years and
years of deep experience in the world, things that were
scary to us or titillating or amazing or dumbfounding or
attention grabbing, and all of these form pathways in the
unspeakably large neural forest of our tens of billions of

(40:12):
neurons and hundreds of trillions of connections. So when you
zoom past the billboard on the highway, the one that
made me think of my student and his beard and
his girlfriend whose name I couldn't remember, and the email
that I owed him, you would have a totally different
cascade of thoughts, same visual input, totally different internal waterfall.

(40:34):
And unless I knew really everything in your life, it
would be impossible for me to decode your particular idiosyncratic
stream of consciousness. Why because I'd be tasked with trying
to get some sense of a neural system that is
built from mind boggling complexity that has been developing over

(40:55):
years and decades of experience, every single experience leaving
its fingerprint in the deep forests of your brain. So
while some general structures tend to be the same, like
where your visual system is, and how your temperature regulation works,
and the hormones that control your digestive system and so on,
all the rest is shaped uniquely, which is what makes

(41:17):
one person different from the next. As you have followed
your thin trajectory of space and time from your parents,
to your hometown, to your relationships, your school friends, your sports,
your tragedies, your successes, your heart breaks, your life, that's
what has led your brain to be unique. Now you

(41:37):
can get a graduate student in the lab to lie
down in the brain scanner and you can show scene
after seeing to figure out the rough shape of their
semantic network. In other words, which areas tend to be
activated by which words or concepts, And this is what
the labs at Berkeley and UT Austin and others do.
But it's not at all clear how you would get

(41:59):
something meaningful from a stranger in the airport whose brain
you didn't already know in detail. Like, how could you
know if that guy is thinking, I disagree with this
political position, and so my goal is to murderously sneak
a bomb onto the airplane. Now, I just want to
be really clear about the assertion I'm making. This challenge

(42:20):
we face is not a scientific challenge of programming better
AI decoders.

Speaker 1 (42:26):
It's not a technical issue.

Speaker 2 (42:28):
Instead, it's about knowing you as an individual. We are
never going to get to real rich mind reading unless
we have something in the future that accompanies you and
tracks every single experience you have from the time you're
an infant, every scary moment, every class you sat through,

(42:48):
every conversation you had, every book you read, every movie
you saw, every sexual encounter. This device would have to
record everything you see and hear and experience, and then
maybe we could combine that device's data with a sophisticated
model of your genetics, or maybe combine that with a
bunch of input output system identification, and then maybe we'd

(43:11):
be close. So you might think, fine, if we could
actually record everything in someone's life, then we'd be able
to do it that way. After all, you might point
out people in the near future might wear portable cameras
every moment of their life. But really the problem of
system identification is even harder than that, and that's because

(43:34):
your internal life isn't just predicated on your inputs and outputs.
So the brain reading tasks that I told you about
at the beginning, the researchers know exactly what stimulus was
presented and know exactly the brain signals that resulted, and
that's what the AI is trained up on. But lots
of things in our inside life don't come with a

(43:57):
clear external input. For example, let's say our recording device detects
that you meet with a friend at a cafe, and
it measures all the conversation and your corresponding brain signals,
but it wouldn't know that you thought about that encounter
later, or that you saw an analogy between something that was said
during that conversation and something else in your life. Or that two

(44:18):
weeks later, you have a tiny revelation about the behavior
of your date back when you were a kid and
went to the high school prom. So for many of
the things swirling around in your internal cosmos, there's no
external input, there's nothing to film, and this is true
of most of your internal thoughtscape. You might see somebody

(44:41):
sitting in the booth next to you, or you might
walk through the shopping mall and you see different people
and you think whatever you think about them. They're attractive,
they're unattractive, they look like my friend from back home,
they look sad, they look pleased.

Speaker 1 (44:55):
Whatever.

Speaker 2 (44:55):
The challenge for training an input output system like the ones
in the research papers is that the overwhelming majority of
your thoughts are private. You don't say your thoughts out loud,
and no one from the outside ever knows what's going
on in there, and therefore a researcher trying to train
up an AI model doesn't have the data to train

(45:18):
the model with. You might walk down the street and
think that person looks lost, or that person has weird
shoes and so on, but these are thoughts that you
never express, so we can't suss out the input output mapping.
It's not just a matter of having sophisticated enough algorithms
to measure and store and analyze the patterns. It's that

(45:40):
the patterns are inherently unpredictable because we don't know what
causes most of them and how those patterns lead to
the next. And I just want to mention one more
reason why mind reading won't be done in airports, not
now and not in two centuries from now. You need
the participant's cooperation to train up the decoder and

(46:04):
to apply the decoder. If people didn't want to train
up on this, they could think whatever random thing they
wanted and mess up the whole process. So yet another
reason why system identification will be difficult unless you want
to be system identified. So meaningful mind reading, reading the
stream of consciousness, isn't going to happen in our lifetimes.

(46:27):
Maybe some new kind of recording technology and brain reading
loop will get developed in a thousand years from now,
but this is not for us or our near term descendants. Instead,
we will continue to live in a world of mysterious
strangers stuck inside their own skulls, each person the sole

(46:51):
inhabitant of his or her internal world.

Speaker 1 (46:56):
So let's wrap up.

Speaker 2 (46:57):
There's been excitement and genuine and incredible progress, but this
should be carefully distinguished as brain reading. When we read
the brain patterns to know what a person is seeing
or what music they're hearing, or read the premotor cortex
to understand what words they want to say. These are

(47:18):
technological feats that would have blown the hair back of the
best neuroscientists fifty years ago.

Speaker 1 (47:25):
But it's not mind reading.

Speaker 2 (47:27):
Mind reading would be an understanding of the universe of
swirling thoughts, which is where we spend most of our lives.
Just look at a person sitting at the bus stop,
or walking silently down the street, or sitting at the library.
Their minds are swirling with an internal world, considering what
they're going to do and what they're going to say,

(47:49):
and maybe reminiscing about the past or feeling regret over
a possible present that could have been. It's not about
the final output of what they're trying to say, or
the input of what's on their eyes or ears.

Speaker 1 (48:03):
It's much deeper than that.

Speaker 2 (48:04):
It's the entire rest of the brain, beyond the little
territories of input and output. To read the rest of
that area, to read the private thoughts of a person
would require an individualized understanding, a system identification where we'd
need to track essentially every event in their life and

(48:27):
probe every single neural reaction and have some way to
at least guess or impute what all the internal washing
machine of thought is doing. We are centuries away from that,
and in a sense that's good news. It means that
we can enjoy helping people who are mute express themselves,

(48:48):
or a person who's paralyzed move a wheelchair or a robot,
or otherwise make things happen in the world. But happily,
it also means that we retain, for the foreseeable and pretty
distant future, the sanctity and privacy of the swirling
galaxies in our inner cosmos. Go to eagleman dot com

(49:13):
slash podcast for more information and to find further reading
and to see videos. Send me an email at podcasts
at eagleman dot com with questions or discussion, and I'll
be making mailbag podcasts to address those. Until next time,
I'm David Eagleman, and this is Inner Cosmos.